Comparative study of heat transfer and pressure drop during flow boiling and flow condensation in minichannels

DARIUSZ MIKIELEWICZ\textsuperscript{1*}, RAFAŁ ANDRZEJCZYK\textsuperscript{1}, BLANKA JAKUBOWSKA\textsuperscript{1}, JAROSŁAW MIKIELEWICZ\textsuperscript{2}

\textsuperscript{1} Gdańsk University of Technology, Faculty of Mechanical Engineering, Narutowicza 11/12, 80-233 Gdańsk, Poland
\textsuperscript{2} The Szewalski Institute of Fluid-Flow Machinery, Polish Academy of Sciences, Fiszera 14, 80-231 Gdańsk, Poland

Abstract

In the paper a method developed earlier by the authors is applied to calculations of pressure drop and heat transfer coefficient for flow boiling and flow condensation, using recent data collected from the literature for such fluids as R404A, R600a, R290, R32, R134a, R1234yf and others. The modification of interface shear stress between flow boiling and flow condensation in the annular flow structure is considered through incorporation of the so-called blowing parameter. The shear stress between the vapor phase and the liquid phase is generally a function of nonisothermal effects. The mechanism of modification of shear stresses at the vapor-liquid interface is presented in detail. In the case of annular flow it contributes to thickening or thinning of the liquid film, which corresponds to condensation and boiling respectively. The influence of heat flux on the modification of shear stress is different in the bubbly flow structure, where it affects bubble nucleation; in that case the effect of the applied heat flux is considered. As a result a modified form of the two-phase flow multiplier is obtained, in which the nonadiabatic effect is clearly pronounced.

Keywords: Two-phase pressure drops; Heat transfer coefficient; Boiling; Condensation

*Corresponding Author.
E-mail: firstname.lastname@example.org

Nomenclature

| Symbol | Description |
|--------|-------------|
| $A$ | cross section area, m$^2$ |
| $B$ | blowing parameter |
| Bo | boiling number, $\mathrm{Bo} = \frac{q}{G h_{LG}}$ |
| $C$ | mass concentration of droplets in two-phase core |
| $C_f$ | friction factor |
| Con | confinement number |
| $c_p$ | specific heat, J/kg K |
| $d$ | diameter, m |
| $D$ | deposition term, kg/ms; channel inner diameter, m |
| $E$ | entrainment term, kg/ms; energy dissipation |
| $G$ | mass flux, kg/m$^2$s |
| $g$ | gravitational acceleration, m/s$^2$ |
| $h$ | enthalpy, J/kg |
| $h_{LG}$ | specific enthalpy of vaporization, J/kg |
| $h_{lv}$ | specific enthalpy of vaporization, J/kg |
| $m_G$ | mass of vapour phase |
| $m_L$ | mass of liquid phase |
| $\dot{m}$ | mass flow rate, kg/s |
| $P$ | perimeter, m |
| $p$ | pressure, Pa |
| Pr | Prandtl number |
| $q$ | density of heat flux, W/m$^2$ |
| $\dot{q}_w$ | wall heat flux, W/m$^2$ |
| Re | Reynolds number, $\mathrm{Re} = \frac{G d}{\mu}$ |
| $\mathrm{Re}_L$ | Reynolds number of the liquid film only, $\mathrm{Re}_L = \frac{G d (1-x)}{\mu_L}$ |
| $s$ | slip ratio |
| $u$ | velocity, m/s |
| $u^+$ | reduced velocity, $u^+ = u/u_h$ |
| $w$ | velocity, m/s |
| $v_0$ | transverse velocity, m/s |
| $x$ | quality, $x = \frac{m_G}{m_G + m_L}$ |
| $z$ | longitudinal coordinate, m |

Greek symbols

| Symbol | Description |
|--------|-------------|
| $\alpha$ | heat transfer coefficient, W/m$^2$K |
| $\delta$ | film thickness, m |
| $\sigma$ | surface tension, N/m |
| $\lambda$ | thermal conductivity, W/mK |
| $\rho$ | density, kg/m$^3$ |
| $\mu$ | dynamic viscosity, Pa s |
| $\xi = \frac{C_f}{4}$ | friction factor |
| $\tau$ | shear stress, N/m$^2$ |
| $\varphi$ | void fraction |
| $\Phi^2$ | two-phase flow multiplier |

Subscripts

$c$ – core, cross section
$cv$ – vapour core
$cd$ – droplets core
$e$ – entrainment
$f$ – film
$g$ – vapor
$G$ – vapour
$i$ – interfacial
$io$ – for no evaporation of the liquid film
$lv$ – mixture of liquid and vapor
$L$ – liquid
$PB$ – pool boiling
$sat$ – saturation
$TP$ – two-phase flow
$TPB$ – two-phase boiling
$TPC$ – two-phase condensation
$w$ – wall
$v$ – vapor
$\infty$ – undisturbed
$0$ – reference case

1 Introduction

Generally, the nonadiabatic effects modify the friction pressure drop term and subsequently the heat transfer coefficient. That is the reason why existing models cannot be used interchangeably for calculations of heat transfer and pressure drop in the flow boiling and flow condensation cases. In the authors' opinion the way to solve that is to incorporate into the friction pressure drop term appropriate mechanisms responsible for modification of shear stresses at the vapor-liquid interface, one for the annular flow structure and another for the remaining ones, generally considered here as bubbly flows. The suggestion postulated in the paper, to consider the so-called 'blowing parameter' in annular flow, partially explains the mechanism of liquid film thickening in case of flow condensation and thinning in case of flow boiling. In other flow structures, for example the bubbly flow, other effects can also be identified which have yet to attract sufficient attention in the literature. One such effect is the fact that the two-phase pressure drop is usually modeled in a way that does not consider the influence of the applied heat flux.

The objective of this paper is to present the capability of the flow boiling model developed earlier, Mikielewicz [1], with subsequent modifications, Mikielewicz et al.
[2], Mikielewicz [3], to model also flow condensation inside tubes with account of nonadiabatic effects. In such a case the heat transfer coefficient is a function of the two-phase pressure drop. Therefore some experimental data have been collected from the literature to further validate that method for the case of other fluids. The literature data considered in the paper for relevant comparisons are, in case of flow condensation, due to Bohdal et al. [4], Cavallini et al. [5] and Matkovic et al. [6], and in case of flow boiling, due to Lu et al. [7] and Wang et al. [8]. The results of pressure drop calculations have been compared with some correlations from the literature for minichannels, namely due to Mishima and Hibiki [10], Zhang and Webb [11] and a modified version of the Muller-Steinhagen and Heck [12] model, Mikielewicz et al. [2]. Calculations have also been compared against some well established methods for calculation of the heat transfer coefficient for condensation, due to Cavallini et al. [5] and Thome et al. [9], and for flow boiling.

2 Dissipation based two-phase pressure drop model

In two-phase flow, the flow resistance due to friction is greater than in single-phase flow with the same flow rate. The two-phase flow multiplier is defined as the ratio of the pressure drop in two-phase flow, \((dp/dz)_{TP}\), to the pressure drop in the flow with only the liquid or only the vapor phase present, \((dp/dz)\):

\[ \Phi^2 = \left(\frac{dp}{dz}\right)_{TP} \left(\frac{dp}{dz}\right)^{-1}. \] (1)

Unfortunately, the correlations developed for conventional size tubes cannot be used in calculations of pressure drop in minichannels. In case of small diameter channels other correlations are advised for use; their major modification is the inclusion of the surface tension effect into existing conventional size tube correlations. Amongst the most acknowledged are those due to Mishima and Hibiki [10], Tran et al. [13] and Zhang and Webb [11].

The pressure drop model for two-phase flow condensation or flow boiling is developed on the basis of energy dissipation analysis, which is the fundamental hypothesis of the model under scrutiny here. The dissipation in two-phase flow can be modeled as a sum of two contributions, namely the energy dissipation due to shearing flow without the bubbles, \(E_{TP}\), and the dissipation resulting from bubble generation, \(E_{PB}\), Mikielewicz [1]:

\[ E_{TPB} = E_{TP} + E_{PB}. \] (2)

The dissipation energy, $E_{TP}$, is expressed as the power lost in the control volume; the term power refers here to compensation of the two-phase flow friction losses and is expressed through a product of shear stress and flow velocity. The energy dissipation due to bubble generation in the two-phase flow can be expressed analogously. A relation is then obtained in which the friction factor in two-phase flow with boiling forms a geometrical sum of two contributions, namely the friction factor due to the shearing flow without bubbles and the friction factor due to generation of bubbles:

$$\xi^2_{TPB} = \xi^2_{TP} + \xi^2_{PB}. \tag{3}$$

In the considered case $\xi_{PB}$ is expected to depend on the applied wall heat flux. As can be seen from (3), the friction factors in two-phase flow are summed in a geometrical manner. The first term on the right hand side of (3) can be determined from the definition of the two-phase flow multiplier (1). The pressure drop in the two-phase flow without bubble generation can also be considered as the pressure drop in an equivalent flow of a fluid with velocity $w_{TP}$.
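To make the structure of Eqs. (1)-(3) concrete, the minimal sketch below combines an adiabatic two-phase friction factor with a pool-boiling contribution in the geometrical manner of Eq. (3). The Blasius-type closure anticipates the next paragraph; the function names and all numerical inputs are illustrative assumptions, not values from the paper.

```python
import math

def xi_blasius(re):
    """Blasius-type friction factor for turbulent single-phase flow (assumed closure)."""
    return 0.079 / re ** 0.25

def xi_two_phase_boiling(xi_tp, xi_pb):
    """Geometrical sum of Eq. (3): xi_TPB = sqrt(xi_TP^2 + xi_PB^2)."""
    return math.sqrt(xi_tp ** 2 + xi_pb ** 2)

# Illustrative (assumed) inputs: the adiabatic two-phase friction factor is taken
# as Phi^2 * xi_L, following the paper's use of the multiplier of Eq. (1).
xi_l = xi_blasius(re=8.0e3)     # liquid-only friction factor
xi_tp = 12.0 * xi_l             # assumed adiabatic multiplier Phi^2 = 12
xi_pb = 0.5 * xi_tp             # assumed bubble-generation contribution
print(xi_two_phase_boiling(xi_tp, xi_pb))
```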
The pressure drop of the liquid flowing alone can be determined from a corresponding single-phase flow relation. In case of turbulent flow the Blasius equation is used for determination of the friction factor, whereas in case of laminar flow the friction factor can be evaluated from the corresponding expression valid in the laminar regime. A critical difference of the method in comparison to the models of other authors is the incorporation of the two-phase flow multiplier (1) into the modeling. There are specific effects related to the shear stress modifications, named here the nonadiabatic effects, which are described below. One of them is pertinent to annular flows, whereas the other to the bubbly flow.

### 2.1 Nonadiabatic effects in annular flow

The shear stress between the vapor phase and the liquid phase is generally a function of nonadiabatic effects. That is a major reason why the approaches to date, which consider flow boiling and flow condensation as symmetric phenomena, fail in that respect. The way forward is to incorporate into the convective term a mechanism responsible for modification of shear stresses at the vapor-liquid interface. The relationship describing the shear stress between the liquid and vapor phase in annular flow can be modified by incorporation of the so-called 'blowing parameter', $B$, which contributes to liquid film thickening in case of flow condensation and thinning in case of flow boiling. The idea stems from earlier work on boundary layer intensification by transverse introduction of air into the boundary layer, as presented in Fig. 1, Mikielewicz [14]. Considered was a turbulent flow of incompressible fluid, without pressure gradient, over the interface between liquid and vapour, in the presence of a transverse mass flux. On the basis of the analysis of the continuity and momentum equations, the following expression for the modification of shear stress in the boundary layer has been derived:

\[ \tau^+ = 1 + \frac{B}{\tau_0^+} u^+ . \] (4)

Figure 1: Injection of air into the boundary layer.

In (4) \(v_0\) denotes the transverse velocity, \(u^+ = u/u_h\), \(\tau^+ = \tau/\tau_w\), \(\tau_0^+ = \tau_w/\tau_{w0}\), where \(\tau_{w0}\) is the wall shear stress in the case where air is not injected into the boundary layer, and \(B = 2v_0/(C_f u_\infty)\) is the so-called 'blowing parameter'. Using that idea it has been decided that the mechanism of liquid film thinning or thickening close to the wall can be modeled similarly. A possible confirmation of that comes from the works of Kutateladze and Leontiev [15] and Wallis [16], who studied the effect of shear stress modifications in flow boiling in vertical channels and developed expressions linking the shear stress at the wall with the blowing parameter. The relation due to Kutateladze and Leontiev reads

\[ \tau_0^+ = \left(1 - \frac{B}{4}\right)^2 . \] (5)

On the other hand, in case of small values of \(B\) the relation given by Eq. (5) reduces to that recommended by Wallis:

\[ \tau_0^+ = \left(1 - \frac{B}{2}\right). \] (6)

The expression (4) actually reduces to the expression (6) for values of the Reynolds number tending to infinity, which encourages defining the transverse velocity as \(v_0 = \dot{q}_w/(h_{lv} \rho_l)\) in case of condensation or boiling. In case of small values of the blowing parameter \(B\), the relation (4) reduces to the form

\[ \tau_0^+ = \left(1 \pm \frac{B}{2}\right). \] (7)
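A quick numerical check, not from the paper, showing how closely the Kutateladze-Leontiev relation (5) and the Wallis relation (6) agree for small blowing parameters:

```python
def tau0_kutateladze_leontiev(b):
    """Eq. (5): tau0+ = (1 - B/4)^2."""
    return (1.0 - b / 4.0) ** 2

def tau0_wallis(b):
    """Eq. (6): tau0+ = 1 - B/2."""
    return 1.0 - b / 2.0

# Expanding (5): (1 - B/4)^2 = 1 - B/2 + B^2/16, so the two relations
# differ only by the quadratic term B^2/16, negligible for small B.
for b in (0.01, 0.05, 0.1, 0.5):
    kl, wa = tau0_kutateladze_leontiev(b), tau0_wallis(b)
    print(f"B={b:4.2f}  Eq.(5)={kl:.4f}  Eq.(6)={wa:.4f}  diff={kl - wa:.5f}")
```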
The blowing parameter is hence defined as

\[ B = \frac{2 v_0}{C_f u_\infty} = \frac{2\dot{q}}{C_{f0}(u_G - u_L) h_{LG} \rho_G} = \frac{2 q \frac{\rho_L}{\rho_G}}{C_{f0}\, G (s-1) h_{LG}}. \] (8)

In the present paper a new approach to determine the blowing parameter \(B\) as a function of vapor quality is presented.

### 2.2 Model of blowing parameter

The analysis of the liquid and vapor phase is based on examination of the mass and momentum balance equations with respect to the influence of the non-adiabatic effect. Figure 2 shows the schematic of the considered annular flow model. The analysis is conducted with reference to condensation. In the model presented below the following notation is used. The liquid film cross-section area on the wall is expressed by \(A_f = \pi d \delta\), while the core cross-section area as \(A_c = \pi(d - 2\delta)^2/4\). The wetted perimeter is given by the relation \(P_f = \pi d\), where \(d\) is the channel inner diameter. The mean liquid film velocity is given as \(u_f = \dot{m}/(\rho_f A_f)\). The authors assumed that the interfacial velocity can be determined from the relationship \(u_i = 2 u_f\).

#### 2.2.1 Mass balance in liquid film and core

Liquid film:

\[ \frac{d\dot{m}_f}{dz} = -\Gamma_{lv} + D - E. \] (9)

Two-phase core:

\[ \frac{d\dot{m}_{cd}}{dz} = -D + E. \] (10)

Vapor in the two-phase vapor core:

\[ \frac{d\dot{m}_{cv}}{dz} = -\Gamma_{lv}. \] (11)

In (9) and (10) the terms $D$ and $E$ denote deposition and entrainment in the annular flow. The remaining term, $\Gamma_{lv} = \dot{q}_w P / h_{lv}$, is responsible for the condensation of vapor. The concentration of droplets in the core is defined as the ratio of the mass flow rate of droplets in the core to the sum of the mass flow rates of vapor and of liquid droplets entrained from the flow:

\[ C = \frac{\dot{m}_{ef}}{\dot{m}_{cv} v_g + \dot{m}_{ef} v_f}. \] (12)

The combined mass flow rate of the core results from the combination of (10) and (11):

\[ \frac{d\dot{m}_c}{dz} = -\Gamma_{lv} - D + E. \] (13)

The amount of entrained droplets in (12) can be determined from the mass balance:

\[ \dot{m}_{ef} = \dot{m} - \dot{m}_f - \dot{m}_{cv}. \] (14)

#### 2.2.2 Momentum balance in liquid film and two-phase core

The change of momentum is mainly due to the mass exchange between the core of the flow and the liquid film (evaporation, droplet deposition or entrainment). Acceleration is neglected. The flow schematic is shown in Fig. 3.

#### 2.2.3 Momentum equation for liquid film

The momentum equation for the liquid film reads:

\[ -\frac{dp_L}{dz} dz (\delta - y) P_f - \tau P_f dz + \tau_i P_f dz = (\Gamma_{lv} u_i + D u_c - E u_i) dz. \] (15)

The pressure gradient in the liquid film is therefore (assuming that $\rho_f = \rho_l$ and $\mu_f = \mu_l$)

$$-\left(\frac{dp_l}{dz}\right) = \frac{3\mu_f \dot{m}_f}{P_f \rho_f \delta^3} - \frac{3\tau_i}{2\delta} + \frac{3 (\Gamma_{lv} u_i + D u_f - E u_i)}{2\delta P_f}. \tag{16}$$
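Before assembling the core momentum balance, the closure quantities defined above can be collected in a short sketch. It evaluates the film geometry of Sec. 2.2 and the algebraic relations (12) and (14); the numerical inputs are illustrative assumptions, not data from the paper.

```python
import math

def film_geometry(d, delta, m_dot_f, rho_f):
    """Annular film closure of Sec. 2.2: A_f = pi*d*delta, A_c = pi*(d - 2*delta)^2/4,
    mean film velocity u_f = m_dot_f/(rho_f*A_f), interfacial velocity u_i = 2*u_f."""
    a_f = math.pi * d * delta
    a_c = math.pi * (d - 2.0 * delta) ** 2 / 4.0
    u_f = m_dot_f / (rho_f * a_f)
    return a_f, a_c, u_f, 2.0 * u_f

def droplet_concentration(m_dot, m_dot_f, m_dot_cv, v_g, v_f):
    """Eqs. (14) and (12): entrained droplet flow rate from the overall mass balance,
    then droplet concentration in the two-phase core (v_g, v_f: specific volumes)."""
    m_dot_ef = m_dot - m_dot_f - m_dot_cv                 # Eq. (14)
    return m_dot_ef / (m_dot_cv * v_g + m_dot_ef * v_f)   # Eq. (12)

# Illustrative (assumed) inputs for a 1.4 mm channel:
print(film_geometry(d=1.4e-3, delta=5.0e-5, m_dot_f=2.0e-4, rho_f=1100.0))
print(droplet_concentration(m_dot=6.0e-4, m_dot_f=2.0e-4, m_dot_cv=3.5e-4,
                            v_g=0.04, v_f=9.1e-4))
```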
#### 2.2.4 The momentum balance for the core flow

The control volume for the two-phase core is shown in Fig. 4. The momentum equation for the mixture in the core is:

$$\rho_{TP} u_c^2 A_c + \frac{d}{dz} \left( \rho_{TP} u_c^2 A_c \right) dz - \rho_{TP} u_c^2 A_c + \left[-\Gamma_{lv} u_i - D u_c + E u_i\right] dz = p_v A_c - \left[ p_v A_c + \frac{d(p_v A_c)}{dz} dz \right] - \tau_i P dz. \tag{17}$$

From Eq. (17) it follows that the interfacial shear stress is:

$$\tau_i = \frac{1}{P} \left[ A_c \left( -\frac{dp_v}{dz} \right) - p_v \frac{dA_c}{dz} \right] - \frac{1}{P} \frac{d}{dz} \left( \rho_{TP} u_c^2 A_c \right) - \frac{1}{P} \left(-\Gamma_{lv} u_i - D u_c + E u_i\right). \tag{18}$$

This relationship expresses the interfacial shear stress for the two-phase flow (here condensation), with the non-adiabatic effects included: liquid film condensation, droplet deposition and entrainment. When there is no evaporation of the liquid film, but entrainment and deposition are present, the interfacial shear stress takes the form

$$\tau_{io} = \frac{-\frac{1}{A_c} (-D u_c + E u_i) + \frac{3\mu_f \dot{m}_f}{P_f \rho_f \delta^3} + \frac{3}{2\delta P_f} (-D u_f + E u_i)}{\frac{P_f}{A_c} + \frac{3}{2\delta}}. \tag{19}$$

If one neglects entrainment and deposition, i.e., by assigning $E = 0$ and $D = 0$, a very simple form of the diabatic two-phase flow effect is obtained:

$$\frac{\tau_i}{\tau_{io}} = 1 + \frac{2 \dot{q}_w \delta \left( \frac{4\delta}{D} + \frac{3}{2} \right)}{3\mu_f h_{lv}} = (1 + B). \tag{20}$$

Figures 5 and 6 present the results of sample calculations of the blowing parameter for boiling of R290 at $G = 74$ kg/m$^2$s, $T_{sat} = -1.9\,^\circ$C in a 2.6 mm tube, and for R600a at $G = 440$ kg/m$^2$s, $T_{sat} = 22\,^\circ$C in a 2.6 mm tube. When the parameter is calculated by Eq. (8), $B = 0.133$ for R290 and $B = 0.023$ for R600a. The result of application of Eq. (19) is $B = 0.095$ and $0.025$, respectively. This shows satisfactory consistency of the calculations.

### 2.3 Nonadiabatic effects in other than annular flow

For the nonadiabatic effects in structures other than annular, the authors presented their idea in Mikielewicz [3]. The two-phase flow multiplier which incorporates the non-adiabatic effect, resulting from (3), reads:

$$\Phi_{TPB}^2 = \frac{\xi_{TPB}}{\xi_0} = \sqrt{\Phi^4 + \frac{\xi_{PB}^2}{\xi_0^2}} = \Phi^2 \sqrt{1 + \frac{\left( \frac{8 \alpha_{PB} d}{3 \lambda \mathrm{Re} \mathrm{Pr}} \right)^2}{\xi_0^2 \Phi^4}}. \tag{21}$$

The two-phase flow multiplier presented by the above equation reduces to the adiabatic formulation when the applied wall heat flux tends to zero. Generalizing the obtained results, it can be said that the two-phase flow multiplier inclusive of non-adiabatic effects can be calculated, depending upon the particular flow case and the flow structure, in the following way:

\[
\Phi_{TPC}^2 = \Phi_{TPB}^2 = \frac{\xi_{TPB}}{\xi_0} =
\begin{cases}
\Phi^2 \left(1 \pm B\right) & \text{for the annular structure, condensation and boiling,} \\
\Phi^2 \sqrt{1 + \left(\dfrac{8 \alpha_{PB} d}{3 \lambda \mathrm{Re} \mathrm{Pr}\, \xi_0 \Phi^2}\right)^2} & \text{for other flow structures.}
\end{cases}
\] (22)

In (21) there is no specification of which two-phase flow multiplier model should be applied; that issue depends upon the type of considered fluid. The effect of incorporation of the blowing parameter into pressure drop predictions is shown in Figs. 6–8. In the presented cases the effect of considering the blowing parameter may reach even 20%. The authors' own correlation shows the best agreement with the experimental data. In the case of pressure drops, good agreement with the experimental data is also shown by the Mishima and Hibiki [10] correlation, and relatively good agreement by the Tran et al. [13] relationship.
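The sketch below illustrates how Eqs. (8) and (22) chain together: the blowing parameter is evaluated from the applied heat flux and then scales the annular-flow branch of the multiplier. The sign convention and all property values below are assumptions made for illustration, not data from the paper.

```python
def blowing_parameter(q_w, g, s, h_lg, rho_l, rho_g, c_f0):
    """Blowing parameter, Eq. (8): B = 2 q (rho_L/rho_G) / (C_f0 G (s - 1) h_LG)."""
    return 2.0 * q_w * (rho_l / rho_g) / (c_f0 * g * (s - 1.0) * h_lg)

def phi2_nonadiabatic_annular(phi2_adiabatic, b, condensation=True):
    """Annular branch of Eq. (22): Phi_TPB^2 = Phi^2 (1 +/- B).
    Assumed sign convention, following the +/- of Eq. (7): '+' for condensation
    (film thickening), '-' for boiling (film thinning)."""
    sign = 1.0 if condensation else -1.0
    return phi2_adiabatic * (1.0 + sign * b)

# Illustrative (assumed) inputs, not data from the paper:
b = blowing_parameter(q_w=5.0e3, g=400.0, s=8.0, h_lg=2.0e5,
                      rho_l=1050.0, rho_g=25.0, c_f0=0.005)
print(b, phi2_nonadiabatic_annular(phi2_adiabatic=10.0, b=b, condensation=False))
```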
Figure 6: Condensation pressure drop as a function of quality, Bohdal et al. [4], R134a: a) $G = 361$ kg/m$^2$s, $T_{sat} = 45\,^\circ$C, $d = 1.4$ mm; b) $G = 722$ kg/m$^2$s, $T_{sat} = 47\,^\circ$C, $d = 1.4$ mm.

Figure 7: Flow boiling pressure drop as a function of quality for R134a, Lu et al. [7], $T_{sat} = 10\,^\circ$C, $q = 11.4$ kW/m$^2$, $d = 3.9$ mm: a) $G = 200$ kg/m$^2$s, b) $G = 400$ kg/m$^2$s.

Figure 8: Flow boiling pressure drop as a function of quality for R1234yf, Lu et al. [7], $T_{sat} = 10\,^\circ$C, $q = 11.4$ kW/m$^2$, $d = 3.9$ mm: a) $G = 300$ kg/m$^2$s, b) $G = 500$ kg/m$^2$s.

3 Heat transfer in phase change

The heat transfer correlation applicable both to the case of flow boiling and flow condensation reads:

\[ \frac{\alpha_{TP}}{\alpha_L} = \sqrt{(\Phi^2)^n + \frac{C_1}{1 + P_1} \left( \frac{\alpha_{PB}}{\alpha_L} \right)^2}. \] (23)

In case of condensation the constant \( C_1 = 0 \), whereas in case of flow boiling \( C_1 = 1 \). In Eq. (24) the boiling number is defined as \( \mathrm{Bo} = \dot{q}_w/(G h_{lv}) \) and the correction factor reads

\[ P_1 = 2.53 \times 10^{-3}\, \mathrm{Re}_L^{1.17}\, \mathrm{Bo}^{0.6}\, (\Phi^2 - 1)^{-0.65}. \] (24)

In the form applicable to conventional and small-diameter channels, the modified Muller-Steinhagen and Heck model is advised, Mikielewicz et al. [2]:

\[ \Phi^2 = \left[ 1 + 2 \left( \frac{1}{f_l} - 1 \right) x\, \mathrm{Con}^m \right] (1 - x)^{\frac{1}{3}} + x^3 \frac{1}{f_{lz}}. \] (25)

The exponent at the confinement number assumes the value \( m = 0 \) for conventional channels and \( m = -1 \) in case of small diameter channels and minichannels. Within the correction factor \( P_1 \) the modified version of the Muller-Steinhagen and Heck model should be used; however, instead of \( f_{lz} \), the value of the function \( f_l \) must be used. In (25) \( f_l = (\rho_L / \rho_G) (\mu_L / \mu_G)^{0.25} \) for turbulent flow and \( f_l = (\rho_L / \rho_G)(\mu_L / \mu_G) \) for laminar flow. Introduction of the function \( f_{lz} \), expressing the ratio of the heat transfer coefficient for liquid only flow to the heat transfer coefficient for gas only flow, is meant to meet the limiting conditions: for \( x = 0 \) the correlation should reduce to the heat transfer coefficient for liquid, \( \alpha_{TPB} = \alpha_L \), whereas for \( x = 1 \) it should approach that for vapor, \( \alpha_{TPB} \cong \alpha_G \). Hence \( f_{lz} = \alpha_{GO} / \alpha_{LO} \), where \( f_{lz} = \lambda_G / \lambda_L \) for laminar flows and \( f_{lz} = (\mu_G / \mu_L)(\lambda_L / \lambda_G)^{1.5}(c_{pL} / c_{pG}) \) for turbulent flows. The pool boiling heat transfer coefficient \( \alpha_{PB} \) is calculated from the relation due to Cooper [18].

The correctness of the calculations was assessed against experimental data using the authors' own correlation (23). A few examples of comparisons are presented in Figs. 9 and 10 for flow boiling of R600a and R290. Presented next is a comparison of selected correlations for calculation of flow condensation with the model presented earlier, Figs. 11 and 12.

Figure 9: Flow boiling heat transfer coefficient as a function of quality for R600a, Copetti et al. [17]; $T_{sat} = 22\,^\circ$C, $d = 2.6$ mm; a) $G = 240$ kg/m$^2$s, $q = 95$ kW/m$^2$; b) $G = 440$ kg/m$^2$s, $q = 44$ kW/m$^2$.
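A compact sketch of the chain (25) → (24) → (23) for the flow boiling case. The exponent n, the values of f_l, f_lz and Con, and the pool-boiling coefficient passed in are all illustrative assumptions; the section does not fix n, and alpha_PB would in practice come from the Cooper correlation cited above.

```python
import math

def phi2_msh_modified(x, f_l, f_lz, con, m=-1.0):
    """Modified Muller-Steinhagen and Heck multiplier, Eq. (25);
    m = -1 for minichannels, m = 0 for conventional channels."""
    return (1.0 + 2.0 * (1.0 / f_l - 1.0) * x * con ** m) * (1.0 - x) ** (1.0 / 3.0) \
           + x ** 3 / f_lz

def correction_p1(re_l, bo, phi2):
    """Correction factor of Eq. (24); valid for Phi^2 > 1."""
    return 2.53e-3 * re_l ** 1.17 * bo ** 0.6 * (phi2 - 1.0) ** -0.65

def alpha_two_phase(alpha_l, alpha_pb, phi2, p1, n=0.76, boiling=True):
    """Eq. (23) with C1 = 1 for flow boiling and C1 = 0 for condensation.
    The exponent n = 0.76 is an assumed value, not given in this section."""
    c1 = 1.0 if boiling else 0.0
    return alpha_l * math.sqrt(phi2 ** n + c1 / (1.0 + p1) * (alpha_pb / alpha_l) ** 2)

# Illustrative (assumed) inputs:
phi2 = phi2_msh_modified(x=0.3, f_l=0.05, f_lz=0.1, con=0.5)
p1 = correction_p1(re_l=3.0e3, bo=3.0e-4, phi2=phi2)
print(alpha_two_phase(alpha_l=900.0, alpha_pb=4000.0, phi2=phi2, p1=p1))
```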
Figure 10: Flow boiling heat transfer coefficient as a function of quality for R290, Wang et al. [8]; $G = 73$ kg/m$^2$s, $d = 6$ mm; a) $T_{sat} = 14.1\,^\circ$C, $q = 53.2$ kW/m$^2$; b) $T_{sat} = 35\,^\circ$C, $q = 44$ kW/m$^2$.

Figure 11: Flow condensation heat transfer coefficient as a function of quality for R32, Matkovic et al. [6]; a) $G = 600$ kg/m$^2$s, $T_{sat} = 14.1\,^\circ$C, $d = 0.96$ mm; b) $G = 100$ kg/m$^2$s, $T_{sat} = 40\,^\circ$C, $d = 8$ mm.

Figure 12: Flow condensation heat transfer coefficient as a function of quality for R134a, Bohdal et al. [4]; a) $G = 300$ kg/m$^2$s, $T_{sat} = 41.5\,^\circ$C, $d = 3.3$ mm; b) $G = 498$ kg/m$^2$s, $T_{sat} = 42.35\,^\circ$C, $d = 1.94$ mm.

4 Conclusions

In the paper presented is a model of annular flow which incorporates the non-adiabatic effects in predictions of pressure drop and heat transfer for the case of flow boiling and flow condensation. The model is general and applicable to both flow boiling and flow condensation. As a result of the model, an expression for the modification of interface shear stress has been postulated; in effect the modification is presented in relation to quality. The model can be included in any two-phase flow multiplier definition. In the present work it has been incorporated into the authors' own model, which is a modification of the Muller-Steinhagen and Heck model. Comparisons of predictions of boiling and condensation pressure drop and heat transfer coefficient inside minichannels have been presented together with the recommended correlations from the literature. Calculations show that the model outperforms the other ones, is universal, and can be used to predict heat transfer during flow boiling and flow condensation of different halogenated and natural refrigerants.

Acknowledgments

The work presented in the paper has been partially funded from the statute activity of the Faculty of Mechanical Engineering of Gdańsk University of Technology in 2014.

Received 18 June 2014

References

[1] Mikielewicz J.: Semi-empirical method of determining the heat transfer coefficient for subcooled saturated boiling in a channel. Int. J. Heat Mass Trans. 17(1974), 1129–1134.
[2] Mikielewicz D., Mikielewicz J., Tesmar J.: Improved semi-empirical method for determination of heat transfer coefficient in flow boiling in conventional and small diameter tubes. Int. J. Heat Mass Trans. 50(2007), 3949–3956.
[3] Mikielewicz D., Mikielewicz J.: A common method for calculation of flow boiling and flow condensation heat transfer coefficients in minichannels with account of nonadiabatic effects. Heat Transfer Eng. 32(2011), 1173–1181.
[4] Bohdal T., Charun H., Sikora M.: Comparative investigations of the condensation of R134a and R404A refrigerants in pipe minichannels. Int. J. Heat Mass Trans. 54(2011), 9-10, 1963–1974.
[5] Cavallini A., Censi G., Del Col D., Doretti L., Longo G.A., Rossetto L.: Condensation of halogenated refrigerants inside smooth tubes. HVAC&R Res. 8(2002), 429–451.
[6] Matkovic M., Cavallini A., Del Col D., Rossetto L.: Experimental study on condensation heat transfer inside a single circular minichannel. Int. J. Heat Mass Trans. 52(2009), 2311–2323.
[7] Lu M.-C., Tong J.-R., Wang C.-C.: Investigation of the two-phase convective boiling of HFO-1234yf in a 3.9 mm diameter tube. Int. J. Heat Mass Trans. 65(2013), 545–551.
[8] Wang S., Gong M.Q., Chen G.F., Sun Z.H., Wu J.F.: Two-phase heat transfer and pressure drop of propane during saturated flow boiling inside a horizontal tube. Int. J. Refrigeration (2013).
[9] Thome J.R., El Hajal J., Cavallini A.: Condensation in horizontal tubes. Part 2: New heat transfer model based on flow regimes. Int. J. Heat Mass Trans. 46(2003), 3365–3387.
[10] Sun L., Mishima K.: Evaluation analysis of prediction methods for two-phase flow pressure drop in mini-channels. Int. J. Multiphase Flow 35(2009), 47–54.
[11] Zhang M., Webb R.L.: Correlation of two-phase friction for refrigerants in small-diameter tubes. Exp. Therm. Fluid Sci. 25(2001), 3-4, 131–139.
[12] Müller-Steinhagen H., Heck K.: A simple friction pressure drop correlation for two-phase flow in pipes. Chem. Eng. Process. 20(1986), 297–308.
[13] Tran T.N., Chyu M.-C., Wambsganss M.W., France D.M.: Two-phase pressure drop of refrigerants during flow boiling in small channels: an experimental investigation and correlation development. Int. J. Multiphase Flow 26(2000), 11, 1739–1754.
[14] Mikielewicz J.: Influence of phase changes on shear stresses at the interfaces. Trans. IFFM 76(1978), 31–39 (in Polish).
[15] Kutateladze S.S., Leontiev A.I.: Turbulent Boundary Layers in Compressible Gases. Academic Press, New York 1964.
[16] Wallis G.B.: One Dimensional Two-Phase Flow. McGraw-Hill, New York 1969.
[17] Copetti J.B., Macagnan M.H., Zinani F., Kunsler N.L.F.: Flow boiling heat transfer and pressure drop of R-134a in a mini tube: an experimental investigation. Exp. Therm. Fluid Sci. 35(2011), 636–644.
[18] Cooper M.G.: Saturation nucleate pool boiling: a simple correlation. Int. Chem. Eng. Symposium Ser. 86(1984), 785–793.
ORDINANCE NO. 286

AN ORDINANCE AUTHORIZING THE ISSUANCE OF $52,000.00 IN REVENUE CERTIFICATES OF INDEBTEDNESS BY THE CITY OF WEST MIAMI, DADE COUNTY, FLORIDA, TO BE SECURED BY A PLEDGE OF REVENUES FROM EXCISE TAX ON PURCHASES OF CERTAIN UTILITIES, AND PROVIDING FOR PAYMENT THEREOF.

WHEREAS, under date of September 21, 1966, through the adoption of Ordinance No. 276, the Council of the City of West Miami, pursuant to FS 167.431, imposed an excise tax on purchases of electricity, metered gas, and bottled gas in the City of West Miami, Florida; and

WHEREAS, under the provisions of said ordinance the City of West Miami is to receive a tax of three percent (3%) on purchases of electricity, metered gas, and bottled gas sold within the City, all as more specifically set out in said ordinance; and

WHEREAS, the City has contracted for the purchase of certain real estate for use and development as a park; and

WHEREAS, the City has applied for and received a commitment for a grant of Federal assistance under Title VII of the Housing Act of 1961 as amended to reimburse the City for a portion of the cost of acquisition of said real estate; and

WHEREAS, sufficient funds are not available to the City and it must borrow such funds and desires to obtain same by the issuance of Revenue Certificates of indebtedness to be secured by the revenues of the aforementioned excise tax; and

WHEREAS, the Merchants Bank of Miami has agreed to lend $52,000.00 to the City by the purchase of Revenue Certificates of indebtedness as hereinafter set forth; and

WHEREAS, the City is authorized under the provisions of its charter and the statutes and laws of the State of Florida governing the powers and authority of incorporated municipalities to contract for and to purchase land for park purposes in said City and to issue certificates of indebtedness for the payment thereof payable from the source for which provision is made in this ordinance.

NOW, THEREFORE, Be It Enacted by the Mayor and Town Council of the City of West Miami, Dade County, Florida:

Section 1. That the Council has made due investigation and has ascertained and hereby formally finds and recites that the annual amount to be derived by said City from the payments required to be made under the terms of the excise tax described in the preamble hereto, if continued in the amounts now being derived therefrom, will be fully sufficient to pay principal of and interest on the revenue certificates hereinafter authorized and to carry out all of the requirements of this ordinance.

Section 2. That for the purpose of paying the cost of the land acquisition described above, including the payment of all costs properly incident thereto and to the issuance of the revenue certificates, there be issued the revenue certificates of the City of West Miami (sometimes hereinafter referred to as "the City") in the total aggregate amount of $52,000.00, which revenue certificates are hereinafter sometimes referred to as "the certificates".
The certificates shall be dated July 1, 1967, shall be in the denomination of $4,333.34 each, shall be numbered 1 to 12, inclusive, shall be payable in lawful money of the United States of America as to both principal and interest at the Merchants Bank of Miami, in West Miami, Florida, shall bear interest until paid at the rate of five percent (5%) per annum, payable October 1, 1967, and quarterly thereafter on the same days of January, April, and July of each year, and shall mature serially in numerical order on June 30 of each of the years as follows:

| Certificate Numbers | Amount | Year |
|---------------------|------------|------|
| 1 | $4,333.34 | 1968 |
| 2 | 4,333.34 | 1968 |
| 3 | 4,333.34 | 1969 |
| 4 | 4,333.34 | 1969 |
| 5 | 4,333.34 | 1970 |
| 6 | 4,333.34 | 1970 |
| 7 | 4,333.34 | 1971 |
| 8 | 4,333.34 | 1971 |
| 9 | 4,333.34 | 1972 |
| 10 | 4,333.34 | 1972 |
| 11 | 4,333.34 | 1973 |
| 12 | 4,333.34 | 1973 |

Any or all of said certificates may be prepaid in part or in full at any time without penalty.

Section 3. That said certificates shall be signed by the Mayor of the City, shall be attested by the City Clerk and shall have impressed thereon the corporate seal of the City of West Miami.

Section 4. That the certificates shall be in substantially the following form:

(Form of Certificate)

UNITED STATES OF AMERICA
STATE OF FLORIDA
COUNTY OF DADE
CITY OF WEST MIAMI
EXCISE TAX REVENUE CERTIFICATES

Number_________________________ $4,333.34

The City of West Miami, in Dade County, State of Florida, for value received hereby promises to pay to bearer, solely from the special fund provided therefor as hereinafter set forth, on the 30th day of June, 1968, the principal sum of Four Thousand Three Hundred Thirty Three and 34/100 Dollars ($4,333.34) and to pay from said special fund interest thereon at the rate of five percent (5%) per annum from date hereof until paid, payable October 1, 1967, and quarterly thereafter on the same days of January, April, July and October of each year, such interest to the maturity date of this certificate to be paid as same become due. Both principal of and interest on this bond are payable in lawful money of the United States of America at the Merchants Bank of Miami, West Miami, Florida.

This certificate is one of an issue of $52,000.00, all of like date and tenor, except as to maturity, issued by said City pursuant to the provisions of its charter, and pursuant to an ordinance duly adopted by the Mayor and City Council of said City on June 21st, 1967, for the purpose of acquiring land for providing a park in said City. Said issue of certificates is payable solely from and secured by pledge of the revenues to be received annually by said City from the collection of excise taxes on purchases of electricity, metered gas, and bottled gas pursuant to Ordinance No. 276 adopted on September 21, 1966. For a more particular statement of the security pledged to such payment, reference is made to the aforesaid ordinance of June 21, 1967.
This certificate, including interest hereon, is payable solely from the aforesaid revenues and does not constitute an indebtedness of the City of West Miami within the meaning of any constitutional, statutory or charter provision or limitation, and it is expressly agreed by the holder of this certificate that such holder shall never have the right to require or compel the exercise of the ad valorem taxing power of said City or the taxation or assessment of real estate in said City for the payment of the principal of or interest on this certificate or the making of any sinking fund, reserve or other payments provided for in the above-described ordinance.

It is further agreed between said City and the holder of this certificate that this certificate and the obligation evidenced thereby shall not constitute a lien upon any property of or in the City of West Miami but shall constitute a lien only on the revenues in this paragraph described.

This certificate is issued upon the following terms and conditions, to all of which each taker and owner hereof consents and agrees:

(a) Title to this certificate may be transferred by delivery in the same manner as a negotiable instrument payable to bearer; and

(b) Any person in possession of this certificate, regardless of the manner in which he shall have acquired possession, is hereby authorized to represent himself as the absolute owner thereof, and is hereby granted power to transfer absolute title thereto by delivery thereof to a bona fide purchaser, that is, to anyone who shall purchase the same for value (present or antecedent) without notice of prior defenses or equities or claims of ownership enforceable against his transferor; every prior taker or owner of this certificate waives and renounces all of his equities or rights therein in favor of every such bona fide purchaser, and every such bona fide purchaser shall acquire absolute title thereto and to all rights represented thereby; and

(c) The City of West Miami may treat the bearer of this certificate as the absolute owner thereof for all purposes without being affected by any notice to the contrary.

(d) This certificate may be paid and redeemed in whole or in part at any time without penalty.

All acts, conditions and things required by the Constitution and Laws of Florida and the charter of said City to happen, exist and be performed precedent to and in the issuance of this certificate have happened, exist, and have been performed as so required.

IN WITNESS WHEREOF, the City of West Miami has caused this certificate to be signed by its Mayor and attested by its City Clerk, under its corporate seal, all as of the ______ day of July, 1967.

MAYOR

ATTEST:
City Clerk

Section 5. That there is hereby created for the purpose of paying principal of and interest on the bonds herein authorized a fund to be known as the "Excise Tax Certificates Sinking Fund", which is hereinafter in this ordinance sometimes referred to as the "certificates fund". Such fund shall be kept on deposit in the Merchants Bank of Miami at West Miami, Florida, or in such other bank of equal standing and rating as may hereafter be specified by the Council. The money held in said fund shall be held by said depository as a special and not a general deposit and as a special trust fund the beneficial interest in which shall be in the holders from time to time of the obligations payable therefrom.
All money in such fund shall be continually secured by the deposit of collateral security having a market value at all times of not less than the amount on deposit in such fund and shall be otherwise secured to the fullest extent required by the laws of Florida for the securing of public deposits.

Beginning with the month of July, 1967, there shall be paid into the certificate fund so much of the first revenues received in each month by the City from collection of said excise tax, while any of the certificates herein authorized remain outstanding and unpaid, as may be necessary to pay promptly as they fall due principal of and interest on the certificates herein authorized. The amounts to be paid into the certificate fund during the period July 1 through June 30 of any year shall be only that amount necessary and sufficient to pay the installments of interest becoming due on outstanding certificates on the interest dates provided therein coming due during such period, and only that amount necessary to pay any certificates maturing on June 30 of such period. In the event of prepayment of any outstanding certificates the City shall not be required to pay into the certificate fund amounts for the payment of interest or principal on such prepaid certificates during the one year period immediately preceding the original maturity dates of such prepaid certificates.

To the extent that the franchise revenues should prove at any time insufficient to make the payments hereinabove required to be made, the City agrees that it will make up such deficits from the proceeds of other revenues not derived from the imposition of taxes and legally available for such purpose, provided, however, that nothing in this paragraph shall be so construed as to pledge to the payment of the certificates herein authorized any revenues the pledging of which would make it necessary that such certificates be approved by the qualified freeholder electors of the City pursuant to the provisions of Section 6 of Article 9 of the Constitution of Florida.

Section 6. That the City of West Miami expressly covenants and agrees that it will issue no other certificates or obligations of any kind or nature payable from or enjoying a lien on or pledge of the excise tax revenues unless such certificates or obligations are issued in such manner as to be fully subordinate in all respects to the payment of the certificates herein authorized from such revenues. The provisions of this section shall inure to the benefit of and be enforceable by any holder of the certificates issued hereunder.

Section 7. That the City of West Miami hereby covenants and agrees with each successive holder of the certificates issued hereunder:

(a) That the City will do everything which it can legally do to maintain the excise tax ordinance in full force and effect until all of the certificates shall have been retired, that if for any reason beyond the control of the City the excise tax ordinance shall become inoperative or ineffective during such period, the City will take all possible steps for the immediate substitution therefor of a source of revenue sufficient to enable it to make the payments of principal and interest herein required.
(b) That all records of the City with respect to the amounts received by the City in each month from said excise tax and the disposition made of all such revenues shall be available for inspection at all reasonable times by the holders of any of the certificates issued hereunder, and that the City will within sixty days following the close of each fiscal year supply to any holder of the certificates who may have so requested a written statement covering the receipt and disposition of such revenues during such fiscal year.

Section 8. That the certificates herein authorized shall be sold to the Merchants Bank of Miami at par. The certificates shall be prepared and executed and delivered to the purchasers thereof pursuant to payment; and the proceeds thereof applied to the purposes for which the certificates are herein authorized.

Section 9. That if any section, paragraph, clause or provision of this ordinance shall be held to be invalid or unenforceable for any reason, the invalidity or unenforceability of such section, paragraph, clause or provision shall not affect any of the remaining provisions of this ordinance.

Section 10. That this ordinance shall be in full force and effect immediately upon its adoption.

PASSED and ADOPTED this 21st day of June, 1967, by a majority of the Council of the City of West Miami, Florida, and approved by the Mayor of said City.

ATTEST:
City Clerk

APPROVED:
Mayor

PRESIDENT of City Council
Coming To Your Senses
by Tarchin Hearn

Karunakarma Series, Volume II

Coming to Your Senses © Tarchin Hearn, 2002, published in coil bound format by Wangapeka Books
Green Dharma Treasury e-book version, 2017

Karunakarma means compassionate activity, the work of compassion or compassion at work. The Karunakarma Series is a collection of coil bound notes and articles that can be used for study or as teaching aids. May these writings water the seeds of wisdom and compassion for the benefit of all beings.

Other books by Tarchin in the Karunakarma series:
Satipatthāna: Foundations of Mindfulness – a manual for meditators, vol. I
Sangha Work, vol. III

© Tarchin Hearn
www.greendharmatreasury.org

Nāmo Guru Vijāya

You look with greatly merciful eyes on all that live.
You listen to all the stories with ears of deep understanding.
You touch the world with unending compassion.
Your nose sifts the subtle and reveals the hidden.
Your taste is in utter accord with what is.
Embodying yourself in myriad forms and appearances,
Teaching all beings the path of engaged, compassionate freedom.
To you Avalokiteśvara I bow in devotion and gratitude again and again and again.

Guru Buddha Dharmakāya Nāmo

Background

This booklet was originally compiled after giving a cleansing of the senses retreat in Tasmania in early 2002. I hope that it will serve as a practical manual for people wishing to explore this work on their own and that it will be a useful reminder for those who are teaching it to others. These notes will provide a guide to hands-on direct experience. The heart message of this booklet will be revealed in the experiences you have through actually doing the exercises and meditations. The quotes at the beginning of each chapter are taken from "The Spell of the Sensuous", a marvellous book by David Abram, published by Vintage Books. The cover illustration was done by Mary Jenkins.

Retrospect from 2017

When I first did these explorations with Namgyal Rinpoche in the 1970s we used the phrase "cleansing the senses". On reflection though, I think that this phrase could potentially be misleading. The idea of cleansing the senses might carry the assumption that they are somehow dirty and in need of cleaning. Though this may occasionally be the case, the phrase doesn't acknowledge the much larger project of broadening and refining our sensual contact with the world. The exercises in this booklet will help to draw attention to sensing in a very physical and emotional way. As such, they can be a valuable support for people beginning to consciously embark on the great journey of mindfulness in action.

# Table of Contents

*Prologue*

*Day One*
- Introduction
- Recipe for misery
- General daily program

*Day Two*
- Taste

*Day Three*
- Hearing
*Day Four*
- Touch

*Day Five*
- Touch Continued
- Contemplations of interbeing

*Day Six*
- Sight

*Day Seven*
- Smell

*Day Eight*
- Mind

*Appendix*
- Daily Pūja
- Daily Self Massage
- Breathing Meditation
- Walking Instruction
- Painting Mandalas

*Equipment*

Prologue

Waking up
Bright and responsive
Cultivating the ability to be totally present for another,
Living each moment spacious and open with immense clarity and compassion,
Resting in a place of being, where love, patience and wondrous creativity can arise with the problems and challenges in life.

Of course, there's always….

Going to sleep
Walking through life with eyes dimmed, ears blocked, senses of touch registering mostly pain and discomfort or even nothing at all, taste dulled and smell muted.

There's always
Withdrawing from the world in order to be more 'spiritual,'
Losing oneself in concepts and fantasy,
Sinking into the pool of Narcissus
Our private self-built fiction of hopes and fears and desperate expectation,
Meeting each difficulty with knee jerk reactions and inflexible agendas.

Which will it be?

Writing these words, I recall a few experiences that shaped me. One was a short conversation I had with my father many years ago. I was just beginning to study with Namgyal Rinpoche and was filled with all the common spiritual fantasies that were so exciting for us 1960s seekers. I remember my dad saying something along the lines that I probably thought that 'Eastern Culture' had profound understanding about things to do with the 'inner', meditation, yoga and so forth, but had lots to learn from the 'West' about the 'outer'. The West on the other hand had great mastery of the outer world through science and technology but needed to learn more about the inner from the East. I completely agreed with him but it seemed so obvious, I wondered what he was getting at.

"Well," he said, "I think you've got the whole thing absolutely backwards. The great Zen masters were able to see a tree as a tree and a mountain as a mountain. They would eat when they were hungry and drink when they were thirsty.
People in the West, on the other hand, have become so lost in a labyrinth of internal fantasy, unconsciously projecting their hopes and fears onto the environment, that it is almost impossible for them to see what is actually there. Westerners don't need meditation, more hours of staring into navels and contacting feelings," he said, trying to stir me up. "They don't need to look within. They need to look deeply into what is actually going on 'out there', all around, in this magnificent living world!"

It was a bright moment. I saw that he was right.

Another experience. Wandering around the city of Toronto trying to understand emptiness (suññatā). I found that by squinting my eyes, everything became a bit fuzzy and not so solid. I walked around in this floaty, fuzzy space until one day I walked right into a telephone post and practically knocked myself out! Surviving with minor bruising, I decided that if this was emptiness then I wasn't interested in it. Either I would find emptiness with my eyes open or I'd look for something else.

* * * * *

I first did this work of cleansing the senses on a course given by Namgyal Rinpoche in the mid 1970s. The exercises were adapted from the Western Mystery Traditions, where it was considered that before one could explore the Mysteries of Mind and Nature, it was necessary to have a healthy and well functioning body, and this included well functioning senses. Since that first course, I have taught this work a number of times and in the process found the emphasis and even the methods evolving in slightly different directions. Some things have been left out and other things have been added. Although it was originally called cleansing the senses, the main focus has gradually shifted towards exploring the nature of what is, through the senses.

It is best to do this work in a place of natural beauty and in full retreat. Seven days is about the minimum time to go through the five senses, though taking it at a more leisurely pace would allow for more contemplation. Some people have done this work in cities but I don't generally recommend it, since to open our senses and then to immediately expose them to traffic smells and noise can be challenging to say the least!

In an actual retreat we usually began with pre-breakfast meditation or Pūja. After breakfast we would meet for a class in which I would introduce the work for the day. After a short break we would reassemble as a group to do the sense cleansing work. After lunch there was some time for individual exploration. In the late afternoon we would do walking meditation together in the forest. The evenings were open for individual work. Most of the retreats I have given with this theme have been in silence, except for the talking necessary in the morning group work.

May your explorations blossom
Guiding you and all that you meet
On the path of spacious wonderment, heart filled compassion,
And feet-on-the-ground sensible intelligence.

with best wishes
Tarchin

Welcome everyone. What a wonderful place to explore the senses, Dorjeling Retreat Centre - Tasmania. We are far away from traffic noise. The forests are alive with birds and other wild creatures. The sky is vast and open with beautiful cloudscapes; a constant shifting of subtle colour and forms. Over the next few days, I hope you will come to appreciate something that is simple, rare and possibly a bit old fashioned. I think of it as the treasure of solitude. For most people, life seems to be a torrent of busyness.
Rushing here and there with schedules and appointments. Phones ringing. Computers humming. Surrounded by brick and cement and human made noises and smells. Hardly a moment to pause and breathe and feel our intermingling with the living earth; our inseparable interdependency with all other creatures. The treasure of solitude is something that most people today know nothing about. Even the idea of solitude to many is a bit scary. It sounds too much like loneliness. Some of you will have rushed to get here and are still probably hurrying to begin the retreat. I really do wish we had a month together. Then I would suggest that you spend these first few days simply resting, eating, going for walks, and gradually winding down.

Solitude invites opening. As the all too human compulsion to talk gradually softens and fades, we inevitably begin to feel the communication that's happening with the non-human creatures that surround us at every level of being. Solitude is not loneliness. In resting, without all those habitual obligatory human interactions, you might discover that you are never alone; that you are in deep and continuous communion with myriad forms and expressions of life. Actually, you are the communion of myriad expressions of life. Allow yourself to accept the invitation. Enter the treasury and discover yet again, the richness that is always present. Solitude is from solo which means oneness; union.

Today I'm going to give you a number of meditative exercises to get started with. However, as you settle into the retreat and allow the swirl of busyness to fall away, your meditation, in the sense of trying to focus attention on a particular theme, may also begin to fall away. You may find yourself pulled into the work through sheer interest and insatiable curiosity. The exercises may become natural and effortless and you might even get a glimpse of true contemplation. I like to think the word contemplation comes from con which means 'with', plus template. A template is something you might find in a factory. You could think of them as stencils used for cutting out or moulding objects into particular shapes. For this process to work, the material being moulded has to be softer than the template. You enter contemplation by becoming so soft and malleable that nature can template you! In other words you become shaped by the reality of unfolding life rather than by your ego conditioned hopes and fears. Contemplation is sensing in action.

The sense doors are our gateways to NOW, and to the mystery of other. In a way, the experience of sensing is one of communion and mutual transformation. Us responding to the world and the world responding to us. Each morning we will give a lot of care and attention to one particular sense and then spend the rest of the day being in nature and allowing nature to contemplate us! This course can be done in one week, but if you had the time and interest, you could easily spread it over a month or so, devoting a week to each sense.

---
¹ The six perfections are: generosity, wholesome relating, patience, enthusiastic perseverance, concentration and wisdom.

**A Recipe for Misery**

I'd like to begin by teaching you a recipe for making yourself miserable. Are you surprised? A long time ago I decided that it was important to give out exercises that people were able to have some success in doing. This helps build confidence.
Buddhism is supposedly for the purpose of bringing suffering to an end, and when I taught this, many people felt that bringing suffering to an end was virtually impossible and so their frustration increased. It seemed to me that, since so many people were already good at making misery, both for themselves and others, I should begin by teaching the foolproof method for cooking up a first rate batch of misery! This was something that nearly everyone found they could do very well and so they could grow in confidence.

The process is quite simple. It takes only three steps. If you follow them carefully, and in order, you can't fail to make yourself utterly miserable!

**Step one:** Decide that there is something unsatisfactory in your life. Most people have no problem doing this. It could be something physical or something emotional. It could be a situation in the world. There are unending possibilities.

**Step two:** Focus on this unsatisfactory situation to the exclusion of virtually everything else. Many people think meditation is too hard. Their attention wanders all over the lot. Yet when it comes to the 'yoga of misery' . . . nearly everyone is a master of *samādhi*!² Focus on this unsatisfactory thing to the exclusion of everything else and allow it to become an all consuming obsession.

**Step three:** Make a deep (preferably unconscious) decision that you will never feel good until you have resolved this particular problem. This last step is the clincher. If you can develop an abiding confidence that you will never feel good till you've sorted this problem out, you will have stewed yourself into a magnificent mess of misery and in all probability, you'll find that you can share it with all those around you!

---
² Samādhi has a number of meanings such as absorption or one-pointedness. A degree of samādhi is present whenever we are completely, effortlessly focused on something.

Let's review for a moment. (1) Decide there is something unsatisfactory. (2) Focus on it to the exclusion of everything else. (3) Believe that you will never be happy until this particular difficulty has been resolved. Does it sound familiar? We've all done it at one time or another. It is astonishing how people can be so obsessed with problems that they fail to see what is actually going on around them, even when they are surrounded by beauty and all sorts of creative possibilities.

You'll be glad to know there is a way of dissolving misery and it too comes in three steps.

**Step one:** Make a decision that you are willing to let go of this obsession. This first step doesn't mean that we do actually let go. If it was so easy to resolve, it probably wasn't much of a problem in the first place. Step one simply means that we would be willing to let go, if only we were able; if only we knew how. This step may sound simple but the sad truth is that many people seem to need to hang on to their problem. It's become part of their identity, part of who they are; the persona they present to the world. They're not yet willing to let go of it.

**Step two:** Open all your senses, perhaps one by one, and become aware of what is going on around you. Open your eyes and see what is around you. Open your ears and listen intently to the play of sounds. Open your various senses of touch and feel the shifting textures and temperatures; clothes on your skin, feet on the floor, breezes on your face. Open your tongue to taste and your nose to the various fragrances; invisible silent intimate messages from other beings.
Yes, there is a world out there, and it knows you are here!

**Step three:** Reflect on how the experiences arising through your sense doors right at this very moment are supporting and nurturing your sense of who, and what, and where you are.

If you recall a time when you've been miserable, you will recognise that your sensing of the outer world was probably quite subdued and withdrawn. Everything retreats into a cocoon of inner feeling/emotion. Sometimes we withdraw to the point of hardly noticing anything that is going on around us. How do we emerge from the cocoon?

From time to time, this week, I want you to think about this recipe for misery as you open up the senses. It may come as a shock to find that sometimes we are afraid to open up. It’s safer to stay in the world of our private fantasies. So hard done by, unloved, neglected, abused, long-suffering. To actually open our senses and see the living world around us might wreck the scenario. The sunlight streaming through the window. The scent of roses blending with the humming of bees gathering early morning pollen. I’m alive. The world is wondrously transforming. Everything is fluid and changing. Living reveals itself to be an immense experimental adventure.

Today, we are still arriving. I suggest you spend as much time as you can outside. Go for a walk. Sit under a tree. Go down to the lake. Give yourself the space to relax.

Breathing in, the earth supports me. Breathing out, sharing deeply.

Open your senses and allow this beautiful natural world to touch you.

Breathing in, bathing in beauty. Breathing out, sharing deeply.

Some of you have been in a great rush to get here. Allow yourself to have a snooze. Say hello to the trees, the birds, the earth, the sky. Smell the leaves. Feel the breezes on your face. Do a little meditation. Allow yourself to arrive.

One small practicality: as we will be doing a lot of massage work this week, you might want to trim your fingernails today.

**General Daily Program**

In addition to our group work of cleansing the senses which we will do after the morning class, I’d like you to explore the following meditations and exercises throughout the retreat. Think of these as the basic exercises for the week. If you have any doubt as to what to do, you can work with these.

1) **Pūja** Before breakfast, we will do the Daily Pūja together. *(See the appendix for more on Daily Pūja)* This will bring to mind many of the profound themes and contemplations that support awakening, both for yourself and for others.

2) **Massage** Each day give yourself an overall massage. You can take as much time as you like with this or you can do it in as little as 15 minutes. Begin with your feet and work your way up to your head. When you have finished, sit outdoors and settle into awareness of your breathing. Once you come to a point of stillness and clear awake presence, then open all your senses to what is happening within and around you and continue resting in this natural, bright awareness for as long as you wish.

3) **Basic Meditation** Awareness of breathing with all the senses open and operating will be our basic meditation exercise for this retreat. In case you are unfamiliar with this practice, here are some general instructions. Take up a posture that supports a sense of easefulness and alertness. You could be sitting or kneeling or lying down or even walking or standing. The important thing is to feel relaxed and alert. Next, spend a few moments reviewing your aspiration at two levels.
One level is your general overall life intention and the other is your specific intention for this particular session. By way of a general intention/aspiration, you might refresh your intention to live according to the Bodhisattva Vow – to meet whatever arises with kindness and interest; to touch any difficulties with patience and lovingkindness; to explore the dharmas (the truths or phenomena of life and living) deeply and thoroughly, and to recognise and settle into the mystery of interbeing – the interdependence of everything. As a more specific intention you might remind yourself of the particular technique you are about to practice. In this case it involves attending to the experiences referred to by the words: smiling, breathing and sensing.

Having refreshed your aspiration, allow a smile to brighten your face. Let the pleasure/release, the twinkle in your eyes, seep into your bones and at the same time, begin to feel the physical movements of your body breathing. Practice in a light and easeful way. In this type of work we don’t try to control the breathing. We simply relax and allow it to find its own natural rhythm. With a caring attentiveness explore this breathing body, making friends with whatever sensations arise, be they pleasant or unpleasant. Note how thoughts and emotions affect the body and how the state of the body affects your thinking and feeling. Everything is interconnected. If your attention wanders to other things, patiently bring it back to this study of your living, breathing body. Eventually you will come to a state of deep calm where the body feels soft and pleasurable, the breathing is fluid and effortless, and the attention is bright and awake.

Now, without losing this intimate awareness of the physical sensations of breathing (which is essentially an ongoing awareness of touch), expand your awareness to include the other senses. Open your eyes. Don’t try to draw anything in but at the same time don’t try to keep anything out. Simply gaze ahead in a natural fashion and notice whatever visual forms arise without having to latch on to them. Open your ears and appreciate whatever sounds are happening. Again, don’t try to hold onto any particular sounds or to keep any sounds out. Rather allow your ears to function freely, noting the arising and passing of the whole wonderful symphony of ongoing life. Open your nostrils and become aware of any smells. Finally, even if you are not eating anything, open your awareness of taste. At this point you are resting in a deep appreciation of breathing with the five sense doors open and engaged. Stay with this for the remainder of the session. If you get distracted by a particular aspect of the sensing, if you get hijacked into story-making, memories and associations, then withdraw your attention from eyes, ears, tongue and nose, and give all your attention to touch, the sensations of your body resting and breathing.

Smiling, breathing and sensing are the first three steps of the five-step “Cycle of Samatha”. A few years ago, we brought out a small booklet with this title. If you are already familiar with the Cycle, you may find it a useful tool this week.

4) **Walking Meditation** Every afternoon at four p.m. we will do a meditative forest walk together. At other times in the day you may enjoy doing walking practice on your own. There are many different types of walking meditation.³ The kind we will practice here has four basic points that will help support a sense of bright, alert presence.

1) Smiling.
2) Carrying a continuous awareness of breathing.

3) Being aware of the physical sensations of your body moving through space.

4) Being aware that with each step, you are treading on and being supported by innumerable living beings.

These four points form our basic exercise for forest walking. Often I add something each day to enhance the walking exploration. I’ve listed some of them in the appendix.

5) **Creating Mandalas** This last exercise (which actually merits a full retreat in itself) is to paint or create mandalas of the inner sensations and experiences that arise during your explorations. Of course you can use whatever medium you wish. Creating mandalas often helps us contact and understand experience in new, non-verbal ways. (See the appendix for a bit more detail.)

---

³ For further instruction in walking meditation see http://greendharmatreasury.org/wp-content/uploads/2017/02/Walking-in-Wisdom-2nd-ed-e-pub.pdf

**Day Two - Taste**

*To touch the coarse skin of a tree is, at the same time, to experience one’s own tactility, to feel oneself touched by the tree.* - David Abram

Today we will begin to explore taste. In a way it is the grossest level of sensing in that physical substances, chunks of food and pourings of drink, collide with taste buds located on the surface of the tongue. Think of taste as a chemical analysis sense, identifying the incoming molecules so that your stomach and digestive system are ready to receive them. Taste and digestion are so intimately tied together that some of the initial stages of digestion, of sugars for example, actually begin in the mouth before the food even reaches your stomach. Much of the pleasure associated with taste actually arises with smell. Strange as it may sound, for many people, taste is the least conscious of all the senses.

In your meditation today, I’d like you to explore the possibility that your entire body is involved in tasting. Imagine chemical substances arriving at the cell membranes; the border zones between countries of flesh. The ‘customs officer’ demands: “Who are you? Yes you can come in but you’ll have to transform a bit. We’ll send out some enzymes to help you.” Or perhaps it’s, “No this isn’t your place, try further down the hall.” Can you sense the multidimensional symphony of chemical conversations playing in the membranes of cells, in the corridors of the blood, in the intercellular fluids? Imagine your entire body is composed of trillions of tongues, each tasting the substance of the present moment; savouring the infinite transformations of now.

If our sense of taste is malfunctioning, our digestion will inevitably be disturbed. Many years ago I heard of a case involving a young boy who accidentally drank some scalding water and so damaged his throat that he had to spend time in intensive care being fed through a tube. In spite of giving him what was considered a balanced diet, he steadily lost weight. After trying everything they could, someone finally had the wisdom to ask the lad what he would like to eat. He wanted a Big Mac and a milkshake. Feeling that they had nothing to lose, they blended up the hamburger and milkshake and poured it down the tube. In addition, they put some of the burger into his mouth so that he could chew it and taste it, even though, because he couldn’t swallow, he had to spit it out. To everyone’s amazement, after a few days of this kind of feeding, he began to gain weight. It became obvious to the dietician that without tasting the food, his stomach wasn’t able to receive it properly.
Undoubtedly, the pleasure of tasting something he liked was an important factor. Even though he had been fed a so-called balanced diet, his stomach wasn’t able to secrete the right combinations of enzymes and so wasn’t able to digest it.

Our taste sense is also a protective mechanism. When we are in touch with our physical organism in a deep and sensitive way, we can actually taste which substances will be good or bad for us. When this sense has been neglected, through choosing our foods according to fads or what is trendy or according to our addictions and compulsive hankerings, we often lose the fine discriminating taste/wisdom of a beautifully functioning organism. This can lead to a vast range of health problems. A person with no taste isn’t just someone who likes different music from you. A person with no taste has lost a profound level of discrimination and is in grave danger.

If we wanted to do a full cleansing of taste we would begin with a purification fast. A simple approach to this is to totally fast from solid foods. During this time you should drink only water or weak herb teas. Then, after two or three days, begin to eat brown rice with a little sesame oil and sea salt. The rice should be slightly undercooked – *al dente*. With this diet you can eat as much as you like but you must chew the rice until it is mush before swallowing. Continue with the rice and herb tea/water diet for about 5 more days. Then begin to add other foods to your rice; one new food per meal. When you introduce a food, first hold it in your mouth and meditatively feel how your overall organism reacts to it. If it doesn’t feel good, then spit it out and don’t eat it. If you have a positive response then begin to chew the food with great mindfulness. Explore what it is doing to you and feel how you as a physical and emotional organism are responding to it. Swallow with mindfulness. Enjoy with mindfulness. Digest with mindfulness and learn what each food does to your overall being. This kind of aware eating should be seen as fundamental to all healthy diets.

Let’s have a break. Then we will come back together and begin to explore.

**Session on Taste**

If possible it is good to do this work outside. Everyone should bring cushions so they can sit comfortably. Check to make sure you have all the equipment that you will need.⁴ Some things the instructor provides; other things, each participant needs to bring. For the first part of this morning’s work you will need a partner.

1) **Magnifying Glass Meditation** Using a magnifying glass, examine your partner’s lips, tongue and teeth. Examine the different textures of flesh, the papillae on the tongue and the secretions of saliva. The human body has been called a ‘City of Revelations’. We live in it and use it every day, in a way we are it, and yet we hardly know anything about it. Explore with the magnifying glass for about 5 to 10 minutes, then switch around so the other person can look.

2) **Massage** Before we begin the massage of the mouth, I’d like to say a few words about the cleansing work in general. In terms of written instructions, I can only outline the process. In an actual situation the teacher or instructor needs to pace the work, offer encouragement and remind people of the overall direction of the exercise if they get lost in some detail. If you have done this work before, you may already have a sense of the timing. If you are doing it fresh from the book you will have to find your own way.
I usually do the massage work along with the people while at the same time giving the instructions. Talking and dribbling! Doing the massage reminds me of what the people are feeling and gives a better sense of how to regulate the timing.

**Please Note:** *These explorations of the senses can be very powerful. They can bring up all sorts of reverberations from childhood. Even though most of the time we will be working on ourselves, it can still be extraordinarily intimate; taking people into new levels of meeting themselves. Because of this, it is vital that the teachers or instructors have done the whole process on themselves, preferably a number of times. It is also important that they have the experience and confidence to be able to help beings stay with strong emotions in a meditative and mindful way.*

---

⁴ See the appendix for a list of equipment needed for each sense.

If you have any doubts at all as to what the massage may be evoking, DO IT TO YOURSELF AND FIND OUT! Even though I have led this type of exploration many times, I still do a short session on my own before doing it with the group to remind me of the details. In this work the intellect doesn’t necessarily remember as much as the tongue does!

Okay, let’s begin with taste. In this exploration, each person will work on themself.

2a - Cleaning the Mouth
Clean your mouth and teeth with water and baking soda. Wet your toothbrush and then dab a bit of baking soda on to it and give your teeth, gums and tongue a thorough scrub. Then use your fingers to massage your gums.

2b - Rinse
Rinse your mouth with salt water. Gargle a bit and then spit it out.

2c - Massage with Tongue
Massage the inside of your mouth with your tongue for about 2 to 3 minutes.

2d - Main Massage
For massage oil, use your own saliva (spit on your fingers). We first begin massaging the outside of the mouth, gradually working towards the inside. At the very end we will reverse the process, coming out using a wedge of lemon to stimulate and refresh the mouth. Throughout the entire massage process remember to soften your stomach and abdomen. I often remind people of this during the session. There is an intimate connection between what is happening in your mouth and what is happening in your stomach. Follow the order outlined here and take lots of time with each section. Explore the textures of muscle and flesh. Be very sensitive to the many subtle and varied responses of your body to these new sensations. One practical point: you may want to wrap a towel around your neck and shoulders, like a giant baby bib, as you will probably salivate a lot. Then you can feel free to just dribble away!

a - Explore the muscle and bone structure, particularly in the jaw and cheekbones. Pull the flesh of the cheeks. Feel the shape of your gums and teeth under your lips.

b - Begin to explore the lips. You’ll need to keep them wet. Feel the different textures between the inside and outside.

c - Explore the inside of the lower lip and cheeks. Remember what they looked like through the magnifying glass. You’ll probably have fingers inside your mouth and a thumb outside, gently kneading, stretching and stimulating. You may find it helpful to use both hands.

d - Continue with the inside of the upper lip and cheeks.

e - Massage the gums both inside and outside and gradually include the upper and lower palate. Feel the different textures of flesh inside your mouth.

f - Explore your teeth.

g - Very sensitively massage the area under your tongue.

h - Massage your tongue.
3) **Exit with Lemon** Retrace the preceding steps (h) back to (a) but use a wedge of lemon to rub and massage your mouth as you slowly come out.

4) **Vicco Powder** Place some Vicco Tooth Powder on your finger and thoroughly massage your gums. This is very good for the gums and will also stimulate and refresh your mouth. If you can’t obtain Vicco Powder then move on to the next step.

5) **Lemon** Briefly massage your gums and lips using the inside of a piece of lemon peel (the white pulpy part).

Since people will probably finish at different times, I usually ask them to sit in meditation and wait for the others to complete before giving the following instructions.

**Afterwards**

At this point your mouth should feel quite clean and fresh. Between now and lunch take the opportunity to sit or lie down and explore the following meditations.

1 - Relax your mouth and as you inhale, really taste the air. You might try breathing in and out through both mouth and nose simultaneously. Explore this for a while. Become a reptile with a tasting tongue. Taste the messages coming in through the air.

2 - Do a body scan. Using the mouth/nose breathing, become very still and go deeply into the sensations you find in your mouth. After about 5 minutes move to your throat and explore the sensations arising there, then the stomach, then the intestines and finally the anus. Give about 5 minutes for each position. Then feel the whole digestive tract, one completely integrated, intelligent, living system. Open up to any feelings, associations, emotions or memories that arise in a particular area. This exploration may lead to some mandala painting.

3 - Imagine your entire body is a manifestation of taste/wisdom. Sitting in the midst of this, see if you can taste the earth and trees and sky. Become very still and allow the wisdom of the organism to function.

4 - Eat and drink with the whole body tasting, a deep communing of inner and outer. Relax after the meal and observe the digestion, the continuing tasting. Where does the taste actually take place? Are there parts of your body not involved in the tasting?

5 - Try to do a short mouth cleansing each day that you are here.

This afternoon at 4 pm we will do some forest walking together. Enjoy your explorations.

**Day Three - Hearing**

Today we begin to explore hearing. Let’s sit together for a few minutes. Feel the sensations and movements of your body breathing. Imagine that every cell of your body is an ear. Every part of you is listening. I will ring the bell. Where do you hear the sound? In your ears? Bang! . . . the door slams and everyone jumps. How did you hear that sound? With your mind? With your body? Sit with the breathing again. Become very still as if you were listening to a very faint whisper. It’s almost as if you were holding your breath. Can you hear/feel the singing of your body? Explore how sounds shape the texture of consciousness. Explore how your state of mind affects what and how you hear.

Hearing is not just about sound, it’s also about information. Listen with sensitivity and notice how different sounds cause changes in your body; changes in your form. Listening is a constant process of in-forming. This is real in-formation – a process. If you don’t change, if your form doesn’t shift, perhaps you didn’t hear.

Years ago, when I was first studying with Namgyal Rinpoche in Toronto, he gave us a very interesting exercise. He suggested we walk around the block with a portable tape recorder turned on. When we returned home we had to recall all the sounds we had heard.
After that we listened to the tape. We all found it amazing how many sounds there were on the tape recorder that we either hadn’t heard or simply didn’t remember.

People who are new to meditation are usually shocked when they begin to realise just how much chatter is going on in their minds. Storytelling, fantasising, planning, reviewing – some people are so focused on their inner dialogues that they hardly notice all the potential information that is thrumming around them. In the midst of thousands of beings singing to us, humming to us, buzzing to us, chattering to us and perhaps even speaking human languages to us, we can still feel alone, isolated and cut off!

A few years ago I went to a hearing specialist hoping to buy a very good set of ear plugs. I thought that he must sell heaps of them to travellers like myself who find the city sounds too loud for decent sleep at night. To my surprise he said that he was rarely asked for ear plugs but people were getting hearing aids at younger and younger ages. He explained that teenagers were seriously damaging their ears listening to high-decibel rock music. It is ironic that we are so assaulted by sound that in order to protect ourselves we are becoming deafer and deafer.

Today is a day for really appreciating the treasure of solitude. After we finish cleansing the ears, see if you can be in this beautiful natural environment and really listen. Allow your trillions of ears, each and every cell, to respond to this constant dance of becoming. Listen to the swishing of the gum trees, the outrageous kookaburra, the chirping of crickets, the squawking of the cockatoos. Listen to the sound of your heart and the whoosh of your blood flowing round your body. Surrender into the vast symphony that is your life.

**Session on Hearing**

1) **General Massage** Begin by massaging all around the ears. Use your fingertips and with a fair bit of strength and firmness, work the sides of your head, the temples, the hinge of the jaw and in behind the ears.

2) **Corn Oil Massage** Using corn oil, massage the ears. Use your thumbs and fingers; pulling and kneading. This will draw more blood to the area. Very gradually move towards the inner surfaces of the ears.

3) **Vitamin E and Eucalyptus Oil** Using Vitamin E plus Eucalyptus oil, continue massaging every part of the ears. The vitamin E is good for the skin and the Eucalyptus will bring heat to the area.

4) **Humming** Cover your ears with the palms of your hands and make a humming sound. Feel the warmth between your hands and your ears and explore the vibration of the HUM. Where can you feel it in your body? Surrender into the warmth, the sound and vibration. This humming will help soften any wax in the inner ear. Explore this for 5 to 10 minutes.

5) **Hydrogen Peroxide** Lie on one side, with your head supported comfortably on a pillow. Have someone use an eye dropper to put a 3% solution of hydrogen peroxide into your ear. I usually do this for everyone in the group, though I do it for myself when I’m cleansing my own ears. It is important to have done the peroxide to yourself before giving it to another. This way you will have some sense of how it feels. The solution can feel very cold on initial contact and for some, it can be a bit of a shock. Ear volume varies from person to person, so I usually put in enough that I can just see it. After a few moments the peroxide will begin to fizz and will often carry dissolved wax out of the ear. Lie there until the fizzing stops or you feel you have had enough.
Then place a swab of toilet paper over your ear, roll onto the other side and let it all drain into the paper.

A note on the hydrogen peroxide. Sometimes you can only get peroxide in a 6% solution. If this is all you can find, then dilute it with an equal volume of pure water; mixing equal volumes halves the concentration, giving you 3%. The 3% solution is quite safe. It can even be used on open cuts as a bactericide. I have used it many times on myself and others and have never heard of problems. However, I wouldn’t use it if the eardrum has a puncture. Occasionally, after draining into the toilet paper, the ear remains blocked. Don’t worry about this as it will clear by itself after a while.

6) **Cotton Bud Swab** If you wish, you can carefully use cotton buds or Q-tips to do a final swab out.

7) **Final Massage** Do another short massage going from inside to outside. This is done without any oil.

8) **Lemon Water Rinse** Finally, rinse the ears with lemon water. This is quite refreshing and will remove any leftover oils.

**Afterwards**

After the massage work you will probably feel very open. Now go and explore listening. Listen with every cell of your body. Walk with every cell an ear. What is it like to be so open? Is this normal for you? Notice any tendencies to shut sound out. See if you can become so soft and open that the sounds flow right through. Explore how hearing is a dialogue between a sound and a listener. Where does sound end off and feeling/sensation begin? Is there a boundary? Investigate how your being is shaped by sound and by the meanings of the sounds. Explore the mystery of information. What is the sound and what is the meaning? Do you hear the meaning or is the meaning added from somewhere else? Enjoy your day!

**Day Four - Touch**

Most people think of the senses as doorways or windows through which they can see the world 'out there'. Even people who call themselves Buddhist tend to think in this way and they aspire to see the world or the object 'as it actually is'. Considering all the confusion we have about the world, and all the projections we make on it, this isn't such a bad aspiration. However, although it's an understandable aspiration, it is basically impossible!

Sensing could more accurately be thought of as a creative act. How can this be so? First of all, there is a continuous co-operative endeavour going on between the incoming sensory data and outgoing commands to the muscles controlling the sense organs. Through this dialectic we can direct our attention toward an object and then keep it in focus, even when it is moving. We can also shift our attention from one thing to another. For many people this is mostly happening unconsciously. Why do you look toward someone and then quickly glance away before they can see you looking? A huge amount of what we sense is unconsciously selected. The world 'out there' is being selected by conscious or unconscious hopes and fears and so we only see a fraction of what is there.

Second of all, you have never seen a tree or touched the ground. Photons emitted from the sun, reflecting off the surface of an object you call tree, enter your eye and cause electro-chemical transformations in the cells of the retina. If a tree actually came through your eyes, you would be in big trouble. The retinal patterns are transmitted to more than 30 different centres in the brain and through some mystery that no one as yet understands, a tree is 'seen' in front. When you touch the ground, you are registering temperature and pressure. How does touch turn into ground?
We are so used to sensing the world around and within us that we hardly give it a thought. Our experience is a co-operative working of all the senses, plus memory and associations, and whatever it is 'out there'. With each moment of sensing we are, as biologists Humberto Maturana and Francisco Varela eloquently put it, "bringing forth a world". Sensing is a creative act. In fact it is the most intimate expression of our uniqueness. The world I bring forth is different from the world you bring forth.

These thoughts might stimulate some new questioning. As you investigate touch today and continue with your explorations of taste and hearing, bear these ideas in mind and see if they lead to something new.

We have a big day today as we begin to explore touch. Actually you are going to have a facial and beauty treatment! You will be working in pairs and because the process is so involved, one person will be done today and then the other will be done tomorrow.

Anatomically speaking, to talk about ‘the’ sense of touch is a bit misleading. There are many different organs for touch which weave together the textures and sensations of a body. There are sensors for pressure, temperature, pain, and fine texture. Most of these are involved in registering change and movement. There are also sensors in the joints that register the position and angles of the limbs when they are not moving. There are sensors at the base of each hair so that the slightest breeze can be noticed. A huge number of sensors for touch are in our skin but there are also internal sensors which register the state of our organs.

The face is a locus for all our senses. Eyes, ears, nose, tongue and skin; all clustered together on this bony ball and many of them facing in roughly the same direction. Our face is the most exposed part of our body. We rarely clothe it. Although today we are going to work mainly on the face, I think you will find that by relaxing this area, your whole body will also relax.

Let’s have a half-hour break to get the things together that we will need. Bring your cushions and mats and choose a partner so that you have someone to work with.

**Session on Touch**

**A note for teachers or instructors.** *This is the most complex day and you will probably need someone to help make sure that the hot water and ice water are ready when you need them. It is a good idea to describe the whole process before beginning in addition to giving ongoing guidance throughout the session.*

With the exception of looking at our tongues through the magnifying glass, this is the first time we have worked in pairs. Inevitably this will invite some talking but I urge you, and I will remind you as we go, to try to keep the talking to a minimum. This is a unique opportunity for practising awareness, both for the person receiving the treatment and the person giving it. See if you can communicate through your hands; through touch, rather than having to rely so much on voice.

If you are doing this work outside, which is ideal as it is messy, lie so that your face is in the shade. It does take some time and you don’t want to get sunburnt. Also you will want to have a towel under the person’s head to keep the water and clay off the pillow. It’s a good idea for the person receiving the treatment to wear a shirt with a low neck so that when we apply the clay it doesn’t get all over their shirt. Finally, if the person receiving the treatment suffers from lumbar problems, it helps to place a large pillow or bolster under the knees.
This will relieve pressure on their back.

1 - Removing Dead Skin
First, the person who is going to be worked on should be given a piece of loofah. Loofahs are the dried fibrous interiors of a gourd and are used in the bath. They can usually be bought from a pharmacy. You can cut them into smaller bits for this exercise. Wet the loofah and your face and then gently scrub your own face. The loofah will be quite scratchy and will help remove the surface layer of loose dead skin. Your face will likely get a bit red.

2 - Hot and Cold Compresses
To facilitate this, we usually have two pairs of people working closely enough together that the ones who are giving the treatment can share the same buckets of hot and cold water. Have the hot water as hot as you possibly can. The ice should have just enough water to cover it. Have two face cloths in each bucket and when you have finished using a cloth, return it to the bucket it just came from. This will help to preserve the temperatures.

Begin with the hot. You might want a fork to get the cloth out of the bucket. Wring it fairly dry and then quickly lay it over your partner's face, keeping a little channel open for the nose for breathing. Ideally you’ll want this towel to be as hot as the person can stand. They can tell you if it is too hot or not hot enough. If you gently press the towel over the forehead and eyes, this will increase the heat and help open the pores and relax the muscles. When the towel is no longer hot, quickly take it off and replace it with a quite wet, ice cold one. Leave this on until it no longer feels cold. Then quickly change to the hot. The alternation of hot and cold will bring a deep relaxing into the face. If the hot bucket cools, the helper should have boiling water on hand to top it up. Keep this up until the water in the bucket is no longer hot.

3 - Massage with Camomile Oil
Place a saucer of camomile oil between each pair of masseurs. They can share this. Now using the oil we will begin to massage the face. When doing this in a group, there is inevitably a wide range of experience in giving massage. In order to help the people who are a bit uncertain, and also to help with the pacing, I verbally guide them through the process. We usually take between half an hour and forty-five minutes for this part. It is impossible to give a written description here of what to do because it varies depending on the people. I’ll simply give an outline and you will have to experiment.

- general overall face and head - 5 min.
- throat, neck and jaw - 5 min.
- jaw and cheekbones - 5 min.
- nose, upper lip and lower eye socket - 5 min.
- forehead - 5 min.
- forehead and temples - 5 min.
- ask if there is any place that needs more work - 5 min.
- general overall - 5 min.

Finish by cradling your partner’s head in your two hands. Look at this person you have been massaging, not only with your eyes but with your hands and your understanding. Consider the vast stream of teachers that have inspired this being in so many marvellous ways. An ocean of support, an unfolding stream of wisdom. See too how they themselves have inspired others. Look deeply into this person; this being you are cradling. Contained within them are their parents and their parents’ parents, ancestors going back, time out of mind. Contained within them are their children and their children’s children; an inconceivable array of talents and capabilities. Open yourself to all the loving, the caring, the compassion and tears that have shaped this being.
Open your heart of compassion for the unimaginable suffering of countless beings walking this journey of life, this being, this miracle cradled in your hands. Breathe and contemplate for a few moments.

Feel this head and know that its very substance is made of stardust. The liquids in this body once fell as rain and flowed as rivers. This being is an interbeing of an entire living, awakening planet. Breathe and feel the miracle before you. Then, gently, with care and wonderment, allow your hands to express all your support and good wishes for them; that they may grow and flower in love and compassion and wondrous clear seeing for the benefit of all the beings that they are. Finally, very slowly, very gently, remove your hands and let go.

4 - Hot and Cold Compresses
As the above guided meditation is being given, the helper is preparing more hot and ice water for a second round of compresses. Repeat part 2 with the hot and cold compresses. This time it will be a little shorter. Don’t refresh the hot water. Once it is lukewarm, you are finished. At this point you may want to have a silent break for a stretch and a toilet stop.

5 - Mud Pack
Between each pair provide a bowl of clay. It should have been pre-mixed so that it is the consistency of Devon cream or thick shaving cream. Usually I ask for someone to allow me to demonstrate on them so that the others can see what to do. We use our hands to apply the clay with long even strokes, beginning at the throat and working toward the forehead. Ideally the clay should go on with an even thickness. Thick enough to cover and thin enough to be able to dry. Cover the entire face, except for the eyes, the nostrils and the lips. (Be careful not to get clay on the eyelashes.) Bring the clay right into the hairline. (Instructor: Make sure you have applied the clay enough times so that you can understand what is required and the difficulties that can emerge.)

Now the person lies there until the mud pack has dried hard. Try not to move your face, at least in the early stages. At this point the person who was giving the treatment can leave and get on with their own meditations. I stay around and tell people when the mask is dry. If it is sunny and there is a bit of wind and the clay is not too thick, it can dry in 45 minutes to an hour. I usually suggest that the person with the clay mask not look into a mirror. This segment is about touch, not sight. Your inner touch is for knowledge. The mirror is just for the ego.

Once the clay has dried hard, sit up and try moving your face. See how much of the mask you can crack without using your hands. This is usually quite interesting as it shows something about which parts of your face are mobile and which parts are not. If you don't have very much facial hair, the clay will come off quite easily. Surprisingly, women often have the most difficulty as they can have very fine, down-like hairs on their cheeks which the clay will tend to stick to. After you have peeled and flaked off as much as you can, if you need further help, use lots of cold water to wash the rest off.

6 - Cold Water Rinse
Once the clay has been removed, rinse your face in cool water.

7 - Self Oil (optional)
To finish off, if your skin feels too dry, you can put on a little of the camomile oil, just enough to moisten the skin.

There . . . you're a newborn being!
**Afterwards**

As you continue through the day try enhancing your practice with the following explorations:

- Use your magnifying glass to examine your hands and your feet. Have a closer look at these two major touching organs.
- With closed eyes, explore various objects with your hands. Do this with great sensitivity. Be aware of your breathing and your whole body resting, easeful, soft and alert. Where does the touch take place? In your hands? In your mind? Throughout your body? How is the touching affected by your expectations? Your hopes? Your fears?
- Try walking at night in the dark without using a torch. Let the sensitivity in your feet and the overall awareness of your body be your eyes.
- Try walking meditation in the day but do it walking backwards.
- In your forest walking, in addition to the 4 basics, try bringing together taste, hearing and touch so that there is awareness of all three simultaneously.
- Consider that every moment of sensing is bringing forth a world. Think about this for a bit and then let go of the thinking and simply experience.
- When you are walking, what is happening for you? Are you moving through an environment that you sense? Or, is there simply the arising and passing of a continuum of sensing, from which is constructed a ‘world out there’?
- Finally, explore the experience that everything you touch is simultaneously touching you. Taking this further, explore the possibility that everything you sense is also sensing you.

**Day Five - Touch Continued**

We are now 5 days into the retreat. Although the mornings are very full with the physical cleansing work, hopefully you are finding time in the rest of the day to sit quietly outside and settle more and more into your meditation. Use awareness of your breathing as a support for staying present. Smiling, breathing, present; all the senses opening into a space of vivid awakeness. Allow any sense of boundary or separation between yourself and other to soften and transparentize. Gradually you will come to a deepening absorption or samādhi – the entire miracle of being, functioning without effort or obstruction.

In this state begin to observe more closely the interconnectedness of everything. Physiology and anatomy are shaping feelings, emotions and thought patterns. The mental phenomena are simultaneously shaping physical phenomena. Outer supports inner and inner supports outer. The micro realms depend on the macro realms and the macro depends on the micro. Everything you observe reveals itself to be dependent on other things. The entire universe is a dynamic interbecoming.

To help deepen this understanding I’ve included here two contemplations. The first is taken from Daily Pūja and the second is from a sādhana practice of Avalokiteśvara.⁵ Read through them slowly, pausing frequently to allow individual words or phrases to resonate in your experience. The words are hints carrying us into an enlarged space of being/understanding.

**The Interbeing of the Body** – from Daily Pūja

This body of mine is composed of atoms born in stars,
molecules, cells, tissues and organs.
It is a union of uncountable viruses, bacteria, fungi, plants and animals.
It is conditioned by families, and societies, by thoughts and dreams.
It is moulded by sun and gravity and the whole of the eco-sphere.

---

⁵ Avalokiteśvara or Chenrezi (in Tibetan) is the bodhisattva of compassion. This sādhana, or practice, is a path of awakening through realising the inseparableness of wisdom and compassion.
It is an interbeing of all these processes from micro to macro,
Wondrous, transient,
May it teach me wisdom.

**The Interbeing of Everything** – from the Sādhana of Chenrezi

Right now, in the very midst of your current experience, contemplate the essential interbeingness of everything. Recognise how each aspect of your existence; body, speech and mind; inner and outer; micro and macro; is interweaving with everything else in the universe. Nothing stands independently on its own. Everything is created, sustained and supported by everything else. All arisings are mutually shaping. With this understanding, where is the ongoing 'me' that, so often, seems apart from the rest of the universe? The sense of a separate self is seen as empty and illusory, as awareness opens to the fullness of the present moment. One feels clear, relaxed, and vitally awake. Breathe with this for a while. All forms, sounds and thoughts are like the wind blowing in space, emptiness moving in emptiness – spacious openness intermingling with spacious openness.

If opening all the senses, or engaging in these contemplations of interbeing, causes your attentiveness to run off into verbal speculation, thinking, reminiscing, planning, story-making and so forth, even if it seems like 'profound' speculation, then recognise this is happening and simplify your meditation. Come back to smiling and awareness of your breathing and allow your body to relax deeply, without trying to get anywhere or to understand anything. Just breathe; feel your body resting on the earth and make friends with whatever arises. A huge dollop of down-to-earthness is needed for retreat work and particularly for this very direct working with the senses.

Here I am, happily suggesting all sorts of interesting explorations into the nature of sensing and the nature of what is, while at the same time I do recognise that you have come to this retreat with a flow of interests and involvements that are already happening. We're talking about your life. It's not reasonable or even desirable to expect that you will completely drop all your ongoing interests and involvements for the duration of this week. These meditations and the physical work on the senses can be extraordinarily evocative. All sorts of stuff could come up for you and if it does . . . wonderful! This is a perfect opportunity to explore them and find ways of integrating these energies into your overall life. Healing has its own timing, so if now is the time when emotions or memories or unresolved stuff from the past is going to come up, then see if you can explore these arisings with lovingkindness and interest.

Sometimes nothing particular seems to be arising, but you find your attention wandering, or a dullness setting in, or an inexplicable agitation taking over. You might get lost in storytelling, fantasy, planning or reminiscing. Whatever arises, if it is taking you away from the meditation, first of all, without criticising yourself, simply note that this is occurring. Then, on the inhalation, mentally name the feeling or emotion or quality of mind that is present and on the exhalation, think "I'm here for you". Breathe this way, again and again, giving yourself permission and encouragement to be with this difficult energy with a quality of kindness/acceptance coupled with curiosity/interest/investigation. Bringing kindness and interest to whatever is arising *is* lovingkindness in action.
Here we are not trying to get rid of something or to fix it but to be with it just as it is, with compassion, forgiveness and deepening understanding. This is a most radical and direct form of healing. It is beyond the scope of any booklet to identify and go into all the things that could arise. Let's just acknowledge that stuff comes up! Great! Paint it in a mandala. Walk with it. Breathe with it. Make friends with it. Forgive it. Look deeply into it. Allow a new, non-conceptual understanding of it.

Today we are still working with touch. Getting in touch. Keeping in touch. Touching deeply. Deeply touching. Open your heart. Feel the love and wisdom that is ripening. Today we will be doing basically the same program as yesterday, only with the partners reversed. The person who received the treatment yesterday will give it today. Let's have a half-hour break and then we will meet at the same place we did yesterday and play with the mud!

**Day Six - Sight**

For many people, sight is their strongest sense, so much so that to say, "I see", is to mean I understand. More than any other sense, objects of sight seem to be out there, separate from ourselves. We can see stars that are light years away. The things we see we can't necessarily touch, smell, taste or hear. This sense of separation is very convincing and highly misleading.

According to current research, there are more than 30 different centres in the brain associated with seeing.⁶ Some register horizontal lines, others vertical, some are tied up with human face recognition and some with colour, some are involved with tracking the visual object and keeping it in focus. Nowhere in the brain has there been found a centre that links all these areas into a single co-ordinated picture; a centre that we might be tempted to call 'self'. To see something, even something that is far away, requires a simultaneous dance of happenings right here, inside our heads. It's awesome to contemplate the vast number of factors that weave together this miracle of seeing.

Each moment of our experience is an interbeing, or more dynamically an interbecoming, of myriad factors – myriad moments. To have direct understanding/experience of this is to approach the meaning of the Buddhist concept of *śūnyatā*. Though most often translated as emptiness, in the context of our work this week, *śūnyatā* could be more usefully thought of as the sense of spacious openness which naturally arises with a deepening appreciation of interbeing.

Contemplating what are called the 18 *dhātu* or sense elements is a profound way of directly entering into this experience. Fortunately it is not as complicated as it sounds. The 18 *dhātu* can be divided into six groups of three. The five sense doors plus the mind door make up the six senses. Each sense is contemplated in three aspects. Taking sight as an example, the first aspect is the physical sensing apparatus or equipment. This would include the eyeball and the muscles supporting its movements, along with the neurons and various brain centres associated with seeing. The second aspect is the object of sight, for example a tree, a person or whatever it is we are looking at. The third aspect is a dynamic process involving both subject and object, appearing as the conscious knowing of the object.

---

⁶ This is according to Professor Susan Greenfield in a BBC documentary video "Brain Story" and also Antonio Damasio in "The Feeling of What Happens", p. 134.
To do this meditation, first of all take a little time to sit comfortably and settle into an awareness of breathing. Then become aware of a visual object in front of you, any object will do. With a small amount of thinking, consider how the three dhātu – sense organ, object and the knowing – all need to be present in order to have an experience of seeing. If any one of them is missing or not functioning properly then, for you, there is no seeing. Without an object, there is no seeing. Without the apparatus, there is no seeing. Without the awareness or knowing, there is no sense of seeing. Examine your experience until you are certain that all three need to be present.

Next, with the intellectual confidence that all three need to be present for seeing, gently raise the question: ‘Where is the seeing taking place?’ and then sit in the direct experiential response. If the tree is out there, how does it get into your brain? If the tree is simply arising in your neurons, then how does it appear to be ‘out there’? Look down at your body, the one carrying the eyes that are doing the looking. Here is the body. There is the tree. Where is the knowing? You might come to the conviction that it is all unknowable, that there isn’t really anything there, and yet, there is this current experience. You might come to feel that the object, and you the subject, cannot be so clearly defined in terms of where one ends off and the other begins. In this un-pin-down-able-ness, your experience can feel very open. This type of exploration can continue for many months or years until the very texture of your life flows with a greater sense of spacious possibility. Every moment of sensing becomes an open dimension of experience – a wondrous interbecoming that has been evolving to this moment from the very no-beginning, rather than a simple duality of knower and known.

These are profound contemplations so don’t worry if you occasionally get lost. If they are intriguing and filling you with interest then give them a go. If they seem to be confusing, then continue cleansing the senses and engaging in simple and direct mindfulness. Each person’s unfolding has its own pace.

**Session on Sight**

This exploration begins with a number of exercises.

1 - Opening and Closing
Stand outside and look into the forest. Open your eyes as wide as you can as if you have just had a great shock. If you really get into this, you will find that your mouth will probably open as well. Hold the expression of shock and surprise for as long as you can and then squeeze your eyelids together as if you were trying to shut out some horrible or threatening experience. Here you’ll probably find your mouth closing and your chin tucking in like a turtle withdrawing its head. Hold this expression for as long as you can and then go back to the first expression with the eyes wide open. Alternate back and forth between these two for a while until you’ve had enough. (Try to keep it going for at least 5 minutes.) Your eyes may begin to tear and you may contact various old feelings or memories. It comes as a surprise to many people just how much they control their feelings by controlling the mobility of their eyes. Whatever arises, just stay with it in a gentle, non-forcing manner and explore.

2 - Eye Dance
Sit in an upright posture and gently rest your gaze on an object directly in front of you. With a great deal of awareness, slowly and sensitively turn your head to the right, all the time keeping your gaze on the object.
When you have gone as far as you can without straining, or losing sight of the object, pause there a moment and then slowly move the head back towards the left as far as it will go, still keeping the eyes focused on the object in front. Go back and forth, right to left and left to right, exploring the texture of the movement. It is not unusual to find jerky bits or sticky places. Try moving even slower and at the same time become aware of your breathing. With a bit of practice you might discover a wonderful fluidity, not just in your neck and head but flowing down into the rest of your body. After experimenting with this a few times, pause for a rest. Then, with the same quality of sensitive awareness, try raising your head and lowering it, again keeping your eyes on the same object. Finally try rolling your head in a circle while keeping your gaze on the object. Then rest.

Now try moving your eyes while keeping your head still. Look to the right and then to the left. Go back and forth a few times. Raise your eyes and lower them. Explore these movements a few times. Now experiment with rolling your eyes in large circles. Do all these movements in a slow and gentle manner. Finally, allow yourself to appear a bit impish. Move your eyes more quickly. Look up to the right. Up to the left. Down to the right. Off to the side and so forth. Continue in a spontaneous and random fashion. You are the mischievous monkey figure in the Balinese dance. Play with these movements for a while and then rest. You might find a lightness of seeing, a softness and pleasure, glowing through your being.

3 - General Massage
A good way to do this is by sitting with your elbows resting on a table. Now, supporting your head comfortably on your fingertips, begin to massage your forehead and temples. Do this quite firmly but not too quickly. Take your time. After a while move down to the area just under the cheekbones and work your way out towards the place where your jaw hinges near your ears. Pull the flesh of your cheeks and massage along the jaw. Push firmly through the flesh and massage the gums. Give your lips a workout. Move your ears. Now using your fingertips, massage your head upwards towards the crown. Finish this general massage by again working over your whole face.

4 - Eye Massage
Once you have explored your face and head in the general fashion described above, then you can begin to work more intensively around your eyes. Place your thumbs in the upper inner corners of your eyes, near your nose. It helps to rest your forefingers against your eyebrows. In this position you can press up with your thumbs while squeezing down with your fingers. It may feel as if you were squeezing your eyebrows whilst pressing up against the bone of the eye socket with your thumbs. If you can relax, your eyeball will move a bit to allow your thumb or finger to go more deeply into the bony socket. Do this with a great deal of sensitivity. It should be firm but not painful. Push the thumbs in for a moment and perhaps experiment with wiggling them a bit. Then gently release the pressure and move the thumbs a little along the bony ridge of the upper eye socket towards the outside of the eyes. Gently press again. Keep going like this until you get to the far outside of the eyes, then change your hand position so that your forefinger is now on the inside of the lower eye socket and your thumb is just under your cheekbone.
Now you can continue to work back in along the lower part of the eye socket until you are at the point just below where you began. Finish off by once again massaging your eyebrows, forehead and your cheekbones.

5 - Hot and Cold Compresses
Arrange to have two large bowls, one filled with hot water and the other with ice water. Have two face cloths, one in each bowl. Lying comfortably on your back, take the hot face cloth and wring it nearly dry. Then lay it across your eyes, forehead and temples. Gently press it down with the palms of your hands. Be sure to leave room so that you can easily breathe through your nose. Leave it for a few moments until it begins to cool and then replace it with an ice cold cloth. This one can be a lot wetter. Leave it there until it no longer feels cold. Continue to alternate back and forth, hot to cold to hot and so forth, and allow your face to completely relax.

6 - Bathing in Blue Light
Sitting or lying down, cover your eyes with the palms of your hands. Try to do this so that even though your eyes are open they are seeing only darkness. Imagine a source of lapis lazuli blue light, far away in the distance. Feel this blue light streaming into your eyes and filling your head with soft, deep-space blue radiance. With each inhalation imagine the light streaming in and bathing your entire being. On each exhalation allow your body to soften and relax. Breathe like this for ten or fifteen minutes.

7 - Sunning
Sit or lie in a position where you can see the sun. Close your eyes and feel the warmth bathing your eyes. Slowly turn your head back and forth, right to left and left to right, and feel the sun warming all the parts of your eyes. Do this a number of times. While doing these movements, MAKE SURE YOUR EYES ARE CLOSED. Then explore moving your head up and down. Finally try rolling your head in a circle, all the time feeling the play of warmth on your eyelids.

8 - Saline Bath
Using an eyedropper, flush your eyes with saline solution. This is made with sea salt and pure water. It should be roughly the saltiness of tears and should feel quite soothing.

NOTE: For some people this is as far as they will want to go. The rest of the process with the lemon and the chilli is entirely optional.

9 - Lemon Flush
To help bring more blood and oxygen etc. to the eyes, place a few drops of lemon water into each eye. This will cleanse the surface of the eye. (See the appendix for further instruction.) Since some people are a bit nervous about doing this, I usually demonstrate on myself after giving out the instructions.

10 - Chillies
It is important to explain very clearly to everyone how we do this. Place some fresh hot chillies or some fresh cayenne pepper on a plate. Rub the chillies between your fingers for a few moments, or if you are using cayenne, dab a bit on your finger. Then using a paper towel, wipe your finger completely clean. Make sure there are no particles of chilli on your finger. All you’ll have is a residue of chilli oil. Then, sit down, and wipe your finger along the inside of the lower eyelid and then quickly do the other eyelid. This will usually cause a strong reaction, as any of you who have inadvertently done this in the kitchen will know. The length of time for the reaction depends on how much you get in your eyes. It often goes for between 5 and 10 minutes. If it is really intolerable, you can get someone else to flush dropper after dropper of saline into your eyes.
In the many times I have done the chillies with beings though, I have very rarely had to resort to flushing with saline.

11 - Saline solution

Once the chilli reaction is completely finished, flush your eyes again with saline.

12 - More blue light

Finish off with more palming and bathing in the imagined blue light as in section 6.

Afterwards

During the day, along with all your other explorations, you can try alternating between ‘sunning’ and ‘bathing in blue light’. This alternation will encourage deep relaxation and a surprising degree of healing.

Today is going to be a bit on the nose for some of you. Smell is a greatly neglected sense. I knew a person who claimed that he didn’t have any sense of smell and yet he could cook! Talking with him made me realise how difficult it is to describe to another what we are sensing. Smell is perhaps our oldest and, from a certain point of view, our most straightforward sense. The olfactory bulb is composed of neurons coming directly out of the brain. Molecules of substance wafting up the nose stick on the brain and voilà . . . memories, associations, emotions, sex, defence! There is no specialised nerve ending, no eyeball, ear, tongue, or touch sensors, just direct molecule-to-brain knowing!

A very interesting book by Lyall Watson is called “Jacobson’s Organ”. It is all about smell and especially the smells that we are completely unaware of. Unbelievable as it may seem, there is today a considerable amount of research going into what could be thought of as a sixth sense. It is centred on Jacobson’s Organ, which is located high up in our nasal passages. It is an entirely different structure from the olfactory bulbs and the evidence points towards the likelihood that these organs are for detecting pheromones. Pheromones have long been recognised as powerful communication substances; for example, moths use them to detect sexual mates even at great distances. The idea that humans are moving through a fine chemical smell-scape of intimate emotional perception is something that few are aware of. Non-human animals are obviously tuned to each other’s inner cycles, especially sexual cycles. Why is it so surprising that humans are just as tuned? I suspect it’s because we don’t want to admit our animalness.

For most people, the idea of losing their sense of sight is horrific, but think how blind you would be with no sense of smell. No danger-alerting sense: gas leaks, pots burning on the stove, something wrong with the car. Living in cities with air filled with petrochemicals has probably damaged most people’s sense of smell to the point where they are happy not to smell anything. Then there is the perfume industry; an industry dedicated to hiding our natural messages, often coming from armpits and crotch, covering them over with sex hormones taken from other animals. I don’t know how people can stand it! Walking down the street I can smell a person coming along the sidewalk 20 metres before they get to me. They walk in a cloud of manufactured perfume that is so strong I doubt that they can smell any of the world they are moving through.

Smell is a mysterious and evocative sense. Nerves for smelling connect with many different areas of the brain, hence triggering an immense cascade of associations and memories. After we have finished our work this morning, I’d like you all to go for a smell walk. Be like a dog for the rest of the day. Explore the smell-scape.
Abandon the use of perfumes for the week and find out who you really are and what the others around you are on about. Beautiful friendly words and a smile coming through a cloud of fear-smell. A withdrawn-looking introverted type surrounded by an inviting aura. What are you going to believe?

In Zen, there is much talk about the essence of mind. It seems strangely appropriate that essence, which is a word associated with smell, “essential oils” etc., is used by meditators when trying to describe that most elusive of elusives – mind. It’s possible that our sense of smell is much more informative than we think. It may be that much of our intuition is triggered by unconscious smells giving rise to buried associations. Human smelling abilities are often compared with those of dogs, with humans coming out so poorly that we might think we could do without the sense altogether. However, I think that we have much greater abilities than most would ever imagine. Consider the possibility of knowing another’s mental/emotional state through smell. The idea that we could know the essence of another’s mind isn’t so far-fetched. Who (k)nose?

**Session on Smell**

For many of you, this will be the most intimate work of the week. It will also help break a few taboos . . . like picking your nose in public! Each person will need to have a large amount of toilet paper. I usually do the process on myself as I give instruction to the people. This way I remind myself of where special sensitivity is needed and I have a better idea as to how long to spend at each stage. More than any of the other sense clearings, this one is impossible to describe in written form beyond a general outline. If you are working on your own, do it with a tremendous amount of care and sensitivity. The whole exercise should be approached as a continuous experiment, a very gentle exploration. If you are guiding others, make sure you have already done it a number of times yourself.

1 - General Massage

Begin with a general massage around the cheeks and forehead, gradually working in towards the outer part of the nose. Take five to ten minutes for this and use some camomile oil if you wish.

2a - Nose Massage

Use a very small amount of oil and gently begin to explore the entrances of the nostrils. Get them used to being worked. Part of what you are doing at this stage is stretching the tissues. Go very slowly and sensitively. Take about 5 minutes or more with the soft tissue on the insides and the outsides of the nose.

2b - Nose Massage – going deeper

Now, using your little finger (make sure the nail is trimmed and filed right down), begin to slide it up your nose. It will help to look at an anatomy text to remind yourself of the shape of the nasal passages. I often use my right hand to go up my left nostril. This is so the flat of the nail lies along the cartilage dividing the nasal passages. Go in a bit and then, supporting your head with your other hand, pause there breathing through your mouth. On the exhalations, you might be able to slide in a micro amount further. I usually suggest working one side for a bit and then going to the other side and then back to the first. Alternating sides gives the tissues a chance to rest. The nasal tissues are not used to being physically touched, so be very gentle. After you have explored both sides sufficiently and have gone in as far as you are going to go, then very gradually retrace your steps and massage your way out to the exterior.
(You may find your nose begins to run with lots of mucus, hence the need for toilet paper beside you.)

3 - Saline bath

Fill a drinking glass with warm saline solution, about the same saltiness as you used for your eyes. I find this easier to do standing. Tilt your head to one side and close off the upper nostril while holding the glass to your lower nostril. Inhale and suck up the saline solution and let it come out in your mouth. Spit out the water. Alternate nostrils until the solution is finished. If you have one, a ‘neti pot’ can make this easier to do.

4 - Meditation

Sit in meditation for a while.

Afterwards

While you are very focused, go for a smell walk. Smell the earth, the leaves, the flowers, your clothes. Smell your food, your room, all the objects that you meet. Notice the subtle shifts in smell as the temperature changes throughout the day. Explore how the reaction to smells involves your whole body as well as your mental processes; memories, associations, evaluations and so forth. Open yourself to the possibility of knowing much more of what is going on around you than you usually credit. Have a good day!

The last sense is the mind and, from a certain point of view, cleansing the mind so that it can function well is what most Buddhist practice is about. From another point of view, mind is ‘that which knows’, and that which knows is much more than just your brain. It encompasses everything that is involved in the process of knowing. As we discovered working with the 18 dhātu, the subject, the object and the interaction are all parts of the knowing. In a way, that which knows is the entirety of being, and the fact of being doesn’t actually need cleansing. It’s our limited attitudes and negative thought patterns that could do with a washing, but they are aspects of mind, not mind itself. The most direct approach to cleansing the mind door is to live in a way that will encourage open, responsive, compassionate, awake, investigative presence in everything that we do.

In many spiritual traditions the five senses are often neglected or tacitly thought of as unimportant as we study the mind or spirit. This is surely a huge mistake. Our senses are our gateways of interaction with the world. The way they function, in a way, defines who and what we are. This is a marvellous study and the exercises we’ve explored this week are just a small starting point for a vast and engaging journey. Through our senses we meet with others, hence others are fabulously important. We have evolved to meet them! Through our senses, we mutually understand and shape each other. The fundamental equipment we already have . . . our bodies! The only extra ingredient that would help on the way is a sense of great compassion for the world, for all beings, for ‘others’, and a huge interest/curiosity to lend energy to our meetings.

Children have this natural curiosity. They poke their fingers into all their openings, wondering what’s in there and marvelling at the feelings and sensations that arise. They are often made to feel ashamed for doing so. Senses are inevitably sensual, so if for you sensuality appears threatening, then as a defence you may have become very intellectual, or at least addicted to mental fantasy and story making; anything to avoid direct contact. The path of awakening is actually very sensible. It is closer than hands and feet. To rest easily in the functioning sensual reality of our bodies is a first step towards feeling at home everywhere.
When a person has been unconscious for some time and they wake up, we say they have come to their senses. May we all wake up and come to our senses for the sake of everything and everyone, everywhere.

**Appendix**

**Daily Pūja**

You might use "Daily Pūja" by Tarchin Hearn, published by Wangapeka Books and available for download from www.greendharmatreasury.org. In the introduction it says:

"This booklet is a collection of reflections or mini-contemplations inspired or taken directly from the Buddhist tradition. They are presented in a way that will speak to the universal nature of everyone, regardless of their religious beliefs. Pūja means to honour or to venerate. With these contemplations we honour the mystery of life and refresh our intention to live in a sane, healthy and compassionate manner."

Further on in the commentary it says:

"The contemplations in this booklet are to awaken questioning and to reconnect us with some of the more meaningful facets of life. They call us to examine our aspiration, the way we live, the nature of our body, our relationship with death, our potential for love and the quality of our ongoing daily awareness. They are not a collection of religious dogmas one must take on or believe, in order to awaken. Instead, think of them as a set of themes to be explored and contemplated again and again as one's insight and experience deepens through the years. There are many ways to work with them. Perhaps more important though, is to allow these themes to work on us."

To begin each day with contemplations that open you to the larger story of awakening adds something immensely valuable to this work of exploring the senses. You may come from a different religious tradition with your own forms of reflection and worship. Feel free to adapt this part and use whatever morning reflections seem to be most supportive to you at this stage in the journey.

© Tarchin Hearn, [www.greendharmatreasury.org](http://www.greendharmatreasury.org)

**Daily Self Massage**

You can do this through your clothes. Try to involve every part of the body you can reach and give particular attention to your hands and feet, face and head. When you have finished, spend some time meditating outdoors.

**Breathing Meditation - Anapanasati**

There is much more detailed instruction on breathing meditation in my books "Breathing: The Natural Way to Meditate" and "The Cycle of Samatha".

**Walking**

For more extensive instruction, see my booklet "Walking in Wisdom". In the meantime, here are a few basic ideas. When you are walking, first of all become familiar and proficient with the four basic supports. These are 1) smiling, 2) breathing, 3) physical awareness and 4) awareness that you are walking on countless living beings, or that with each step you are being supported by countless living beings. If you lose all four supports, smiling, breathing etc., then stop walking and feel your body standing on the earth and especially your feet upon the ground. Re-contact the four supports and then continue walking. You may not be able to attend to all four at the same time, but as long as you have at least one of them clearly in focus, you are still deepening the mindfulness through walking. Every once in a while, even if you have good focus, stop walking and open all your senses to what is going on immediately around you. Without conceptualising, bathe in the beauty of nature for a while. Then, re-contacting the four supports, continue walking. During the walk try emphasising awareness of the sense you have been working on that day.
Once you can walk with the four basics, then try intensifying your awareness of all five senses, giving equal attention to each sense. You may have a sense of being a sphere of all-round sensing – a transforming space of knowing. Walk with the awareness that every step you take changes the world irrevocably. Explore the possibility that everything you see is seeing you. Everything you hear is hearing you. In short, that everything you sense can in some way sense you.

**Creating Mandalas (of the inner sensations and experiences)**

Allow yourself to be very creative with this. If a strong image, or emotion, or overall feeling or sensation invites you to further exploration, then try painting or creating something while you are in this state. Don’t get too intellectual but allow your intuition to guide your choice of colour and form. Let the mandala unfold itself. If you find yourself working with particularly difficult states such as pain or fear and so forth, I suggest you begin your mandala work by drawing a light circle on your page about the size of a dinner plate. Then keep whatever you draw within the circle. This keeps the exploration within a boundary, which often feels a bit more safe. It also speaks of completeness. When you have finished a mandala, have a break. After a while, come back and sit with it. How does it affect you now? Is the feeling generated in your being different than when you painted it? What is this overall sensation? Can you name it? This is a huge exploration in its own right so I hope this is sufficient to get you started.

**Equipment**

Each sense exploration requires some equipment. The instructor will provide what is being used by the group in general and the individuals will bring things they use personally. Some things need to be prepared a few weeks in advance.

**Taste**

**Instructor should provide:**
- Baking soda
- Sea salt
- Lemons
- Lots of non-chlorinated drinking water
- Knife to cut the lemons
- Vicco Vajradanti tooth powder (strong) – this is usually available at a pharmacy or health food store that carries Ayurvedic medicines

**Each participant should bring:**
- Magnifying glass
- Toothbrush
- A glass or cup
- Towel to mop up the dribbling

**Hearing**

**Instructor should provide:**
- Corn oil – if you can’t find corn oil, use the camomile oil prepared for the sense of touch
- Vitamin E oil plus eucalyptus oil – mix a few drops of eucalyptus oil with the vitamin E oil to give it some heat
- Hydrogen peroxide – 3% solution
- Eye droppers
- Toilet paper
- Cotton buds
- Lemon water for final rinsing

**Each participant should bring:**
- Towel
- Cushions and mat to lie on

**Touch**

**Instructor should provide:**
- Loofahs for removing dead skin
- One medium-size face cloth for each person. Sometimes we cut up old towels for this.

**Camomile oil**

This oil needs to be prepared at least two weeks before the course. To make it, fill a jar (size depends on how much you need) with dried camomile flowers. Any good quality loose camomile tea will do. Then pour some best quality, cold-pressed, extra virgin olive oil over the camomile until the flowers are completely covered. I usually warm the oil a bit before pouring it onto the flowers. Cover the jar with cloth to allow breathing and to keep the dust out. Then store it in a dark place for two weeks. Strain the oil through a sieve to remove most of the flowers and then strain it again through clean loose cotton to remove any fine particles.
At this point you should have a very fine oil which smells richly of camomile and, amazingly, has become of finer texture than the original olive oil.

- One bucket for two people
- Fine pottery clay, mixed to the consistency of Devon cream
- Lots of crushed ice
- Lots of hot water

**Each participant should bring:**
- Towels
- Mats and cushions
- Blanket or sleeping bag in case you get cold

**Sight**

**Instructor should provide:**
- Hot water
- Ice water
- Saline solution – use sea salt and pure water. Add the salt until it tastes something close to sweat or tears, then try it out on your own eyes. It should feel cool and on the soothing side.
- Lemon water – place a small number of drops of fresh lemon juice in a glass of pure water. It takes surprisingly few drops. Try it out on your own eyes first before giving it to others. It should feel slightly astringent. It will probably cause you to close your eyes but it shouldn’t be very uncomfortable.
- Cayenne pepper or fresh hot chillies
- A number of clean eyedroppers. Keep one eyedropper in the salt water and one eyedropper in the lemon water. Don’t mix them up: when you want soothing saline, you don’t want residues of lemon.
- Lots of toilet paper

**Smell**

**Instructor should provide:**
- Salt water
- Toilet paper
- Camomile oil

**Each participant should bring:**
- One drinking glass

**Suggested Reading**

"Jacobson’s Organ" by Lyall Watson, Penguin Books, 2000
"The Spell of the Sensuous" by David Abram, Vintage Books, 1997
"The Feeling of What Happens" by Antonio Damasio, Vintage Books, 2000

**Books & Booklets by Tarchin**

*Published by Wangapeka Books – available from www.greendharmatreasury.org*

Growth and Unfolding - Our Human Birthright
Breathing - The Natural Way to Meditate
Natural Awakening - The Way of the Heart
Walking in Wisdom
Something Beautiful for the World
True Refuge
Common Sense Retreat
Meditative First Aid
Daily Puja
The Cycle of Samatha
Sangha Work
Foundations of Mindfulness

**About Tarchin**

Tarchin Hearn, a self-described ‘yogi of the natural world’, is a widely respected teacher and practitioner of Contemplative Science and Natural Awakening. He has taught in many countries and helped establish a number of centres for retreat and healing. His work, rooted in Buddhist principles, links personal and communal healing with a deep ecological perspective in ways that have inspired a wide range of people from a variety of diverse backgrounds and traditions. Tarchin lives in New Zealand with his partner and long-time companion, Mary Jenkins.

© Tarchin Hearn, www.greendharmatreasury.org

**Dedication**

Through the power of these wholesome explorations
May our lives be rich with awakening.
Living thus, may we abandon all unwholesomeness.
Through the endless journey of birth, illness, old age and death,
May we help all beings to realise their true inter-being nature.

SARVA MANGALAM
All is blessing!
ADVANCED TREATMENT FOR DRINKING WATER RESOURCE BY THE ULTRA RAPID COAGULATION PROCESS (KOREA)

Tai Il Yoon, Chang Gyun Kim†, and Jung Soo Park
Department of Environmental Engineering, Inha University, Inchon 402-751, Korea
(received August 2002, accepted November 2002)

Abstract: This study was performed to evaluate the applicability of the URC (ultra rapid coagulation) process in efficiently treating eutrophicated water that is to be used for drinking purposes. The results were compared to conventional coagulation-sedimentation processes via the use of jar testing. The injection of weighted coagulant additives (WCA), e.g. clay, bentonite and glass particles, significantly reduced the levels of turbidity and aluminum concentrations. However, there was no significant removal of NOM (natural organic matter). Polymer addition reduced turbidity by up to 95% and UV$_{254}$ absorbed material by as much as 10%. The addition of secondary sludge into the URC system reduced sand filter head loss to the greatest extent when compared to the conventional coagulation-sedimentation processes. In addition, *Synedra acus*, a diatom seasonally found in the water-uptake process, was removed by up to 95%, and dissolved aluminum was reduced to levels as low as 0.02 mg/L. However, there was no significant change in the UV$_{254}$ of absorbed NOM. A pilot scale URC process was capable of efficiently treating rainfall run-off, as demonstrated by the reduction of aluminum to levels as low as 0.05 mg/L. The URC process can improve conventional water treatment systems as it has the advantages of removing residual aluminum and turbidity faster than the conventional processes, while maintaining a lesser extent of sand filter head loss.

Key Words: aluminum, NOM, sand filtration, ultra rapid coagulation (URC), UV$_{254}$, weighted coagulant additives (WCA)

**INTRODUCTION**

Nutrient-enriched source waters flowing into the water treatment facility have led to a deterioration in water quality. This has extended to the enhancement of residual NOM and algae growth, which in turn has increased DBP (disinfection by-product) formation after chlorination has been implemented.$^{1,2)}$ Residual aluminum remaining after the use of aluminum-based coagulants has been implicated in Alzheimer's disease.$^{3)}$ Furthermore, water with high levels of turbidity and low levels of alkalinity entering the treatment facility during a rainfall event cannot be properly treated. Solid separation is not efficiently achieved even when the duration of the flocculation and sedimentation processes is extended. This subsequently increases sand filter head loss, thus shortening the backwash interval during the filtration process. To address these problems, the occurrence of NOM, residual aluminum and *Synedra acus* should be efficiently minimized.

This study was thus conducted to evaluate the removal of NOM and residual aluminum from eutrophicated lake water by the use of jar testing, pilot scale URC and sand filtration experiments. The removal by filtration of *Synedra acus* (a readily observed diatom which seasonally impacts water treatment efficiency) was also investigated. The URC process was used to treat water flowing into the lake during a rainfall event to gauge its future applicability as an advanced water treatment process. The settled sludge in the lamella settler was also returned to the coagulation basin to enhance the performance of the coagulation process.
**MATERIALS AND METHODS**

Source water was obtained from a lake, located in Incheon, Korea, with a capacity of 2,300 m$^3$, a depth of 0.7 m and a retention period of 28 days. Water quality parameters such as COD$_{Mn}$, TSS, T-N, T-P, Chl-a, alkalinity, UV$_{254}$, DOC, turbidity, and aluminum were characterized in accordance with Standard Methods.$^{4)}$ Dissolved aluminum was measured after the sample had been filtered through a 0.2 $\mu$m membrane filter. Zeta potential was obtained with a Zeta-Meter System 3.0 (USA). *Synedra acus*, a diatom naturally present in the test lake, was counted by the Sedgewick-Rafter method.

Powdered glass particles, clay and bentonite were used as weighted coagulant additives. Liquid-phase alum and an anionic polyacrylamide polymer (FLOPAM AN 934, SNF Floerger, France), generally used as a food-processing additive, were utilized as the coagulant and flocculant, respectively. The anionic polymer has been used for water treatment in France; it replaces the anionic groups on a colloidal particle, eventually permitting hydrogen bonding between the colloid and the polymer. Coagulant and flocculant demands were determined from a number of jar tests using Gator jars.

A pilot scale URC process was designed to treat 5 m$^3$/hr with rapid mixing (5 min), slow mixing (7 min) and settling (8 min) in a lamella separator. Meanwhile, secondary sludge was partly returned to the rapid mixing reactor, Figure 1. The URC system was also operated during a rainfall event. The effluent was introduced into a sand filter at a rate of 300 m$^3$/m$^2$·day so that any residual polymer, turbid material and/or diatoms could be efficiently removed (Figure 2). A filtration rate of 300 m$^3$/m$^2$·day in the filter corresponds to the overflow rate of 300 m/day induced from the lamella separator. Backwash of the sand filter was conducted after the nominal head loss had been reached. The filter was packed with gravel 5 to 10 mm in diameter, overlaid by sand with an effective size of 0.8 to 1.2 mm and a uniformity coefficient of 1.4.

**RESULTS AND DISCUSSION**

**Source Water Quality**

Water quality parameters of the lake water were characterized, Table 1.

Table 1. Water quality parameters of the lake water

| Parameter | Concentration |
|--------------------|---------------|
| COD$_{Mn}$ (mg/L) | 18.2 |
| TSS (mg/L) | 47.7 |
| T-N (mg/L) | 3.63 |
| T-P (mg/L) | 0.094 |
| Chlorophyll-a (mg/m$^3$) | 91.5 |
| Alkalinity (mg/L) | 100 |
| UV$_{254}$ (1/cm) | 0.033 |
| DOC (mg/L) | 12 |
| Turbidity (NTU) | 19 |
| Total Al (mg/L) | 0.3 |
| Dissolved Al (mg/L) | < 0.1 |
| *Synedra acus* (cells/mL) | 52,833 |

The lake was in a state of over-eutrophication as defined by the OECD, with the concentration of chlorophyll-a exceeding 75 mg/m$^3$. Levels of T-N and T-P in the lake also far exceeded the eutrophication thresholds. The T-N/T-P ratio was 38.6, which likewise exceeds the minimal eutrophication level of 20. Most artificial water resources are highly eutrophicated, so the test lake serves to demonstrate whether such water resources can be efficiently treated by the methods proposed in this study.

**The Effect of Varying Alum Dose and pH on Residual NOM and Aluminum Concentrations**

Dissolved aluminum and UV$_{254}$ absorbency decreased as the alum dose was increased, Figures 3 and 4.
However, this is not necessarily indicative that the use of alum was solely responsible for decreasing the dissolved aluminum and UV$_{254}$ absorbency. It may rather be related to the pH reduction due to CO$_2$ formation when Al$_2$(SO$_4$)$_3$ was converted into Al(OH)$_3$. It was necessary to clarify whether this pH dependency was significant. Jar tests were thus conducted with an alum dose of 50 mg/L while the pH was varied from 4 to 9 using 1 N HCl or 0.1 N NaOH. The lowest level of dissolved aluminum corresponded to a pH of 6, while UV$_{254}$ absorbency was at its lowest at pH values of 5 and 6. This indicates that the solubility of aluminum was lowest at pH 6, while over the pH range of 5–6 the aluminum ion reacted more competitively with negatively charged NOM than with OH$^-$ ions, leading to the greatest reduction in UV$_{254}$ absorbency.

Figure 3. The variance of dissolved aluminum concentrations and pH with increased alum dosage.
Figure 4. The variance of UV$_{254}$ and pH with increased alum dosage.
Figure 5. The variance of dissolved aluminum concentrations and UV$_{254}$ with increased pH (alum: 50 mg/L added).

Their reductions were generally dependent upon pH, while NOM removal was simultaneously related to the presence of the aluminum ion.

**The Effect of Adding Weighted Coagulants on Residual NOM and Aluminum Concentrations**

Three different weighted coagulants (glass particles, clay and bentonite) were added at a concentration of 50 mg/L to comparatively assess their capacity for the reduction of UV$_{254}$ absorbed organic material present in the lake water. Similar tests were performed using distilled water (to act as a control). The degree of reduction of UV$_{254}$ absorbed organic material removed from the lake water was not significant when compared to that of the control, Figure 6(a). Among them, bentonite displayed a relatively greater reduction of UV$_{254}$ absorbency, owing to its higher adsorption capacity.$^{6)}$ The specific surface areas were 28.5, 86.2 and 113.9 m$^2$/g for glass, clay and bentonite, respectively.$^{7)}$ This indicates that the 50 mg/L of weighted coagulants might be immediately saturated as it came into contact with the NOM present in the lake water. However, the addition of clay led to the observation of a higher extent of UV$_{254}$, possibly due to the amount of soil organic matter retained in the clay. In contrast, the level of UV$_{254}$ observed in the distilled water sample was marginally reduced over the extended reaction time (Figure 6(b)). This showed that the adsorption capacity of the weighted coagulants was gradually saturated with NOM in distilled water.

UV$_{254}$ and dissolved aluminum were observed upon varying the dose of weighted coagulants (glass particles, clay and bentonite), with 50 mg/L of alum and 1 mg/L of anionic polymer also added. UV$_{254}$ did not significantly differ for each WCA (Figure 7(a)). The UV$_{254}$ value observed for clay was the highest at a WCA concentration of 200 mg/L, indicating the release of soil organic matter. Bentonite displayed the highest NOM adsorption capability, as it had the lowest UV$_{254}$ values when compared to the other two WCAs. The dissolved aluminum concentration generally decreased with increasing dosage of weighted coagulants (Figure 7(b)). The addition of clay led to the lowest degree of residual dissolved aluminum, as it has the greatest cation exchange capacity of the three WCAs.
Glass particles and clay behaved identically, removing more turbid suspended solids than bentonite. This indicates that bentonite reacts more efficiently with residual soluble organic and inorganic compounds than with fixed compounds.

**The Effect of Added Settled Sludge on Residual NOM and Dissolved Aluminum Concentrations**

Secondary sludge (12,000 mg TS (total solids)/L) was added immediately before alum was dosed at 50 mg/L into the jar. Weighted coagulants and anionic polymer were added at 50 and 1 mg/L, respectively. The addition of secondary sludge (Figure 8(a)) reduced the zeta-potential, as the negatively charged sludge shifted the charge balance in a more negative direction.$^{8)}$ Sludge added in concentrations greater than 3,000 mg/L increased the turbidity by a factor of four when compared to concentrations of less than 3,000 mg/L. Within the sludge concentration range of 500 to 1,000 mg TS/L (Figure 8(b)), total aluminum was at its lowest due to the sorption of aluminum onto the sludge. However, as the sludge concentration increased, the total concentration of aluminum exponentially increased due to the release of aluminum ions and their complexes from the sludge itself. In comparison, the soluble aluminum concentration marginally decreased, as soluble aluminum had been steadily adsorbed onto the sludge. The addition of sludge steadily increased the release of organic matter, so UV$_{254}$ increased in response to the escalation of the concentration of sludge solids.

Figure 8. Variation of water quality parameters on the addition of sludge: (a) turbidity, zeta-potential; (b) aluminum, UV$_{254}$.

**The Effect of Polymer Addition on Concentrations of Residual NOM and Dissolved Aluminum**

Recently in the EU and USA there has been an increased number of water treatment facilities using polymer dosing as part of an efficient solid-liquid separation protocol. Polymer use can also reduce costs by a factor of ten when compared with the sole use of inorganic coagulants.$^{1)}$ In this study, a food additive polymer was used as the flocculant. Settled sludge (500 mg/L) was added into the jar just before alum was dosed at 50 mg/L, based on zeta-potential. A 1:1:1 mixture of weighted coagulants (i.e. glass particles, clay, and bentonite) was added at 50 mg/L. A 0.2 mg/L dose of polymer lowered turbidity by up to 95%, while total aluminum was reduced by up to 92% (Figure 9(a)). Dissolved aluminum was removed by up to 63% at a 0.5 mg/L polymer dose, while UV$_{254}$ was only removed by up to 11% (Figure 9(b)).

**Sand Filtration**

Five different sand filtration experiments were performed (Figure 10). In the first experiment, polymer was not added. Dosing conditions and reaction times were as follows: a 1:1:1 WCA mixture (i.e. glass particles, clay and bentonite) added in runs 2 to 5 (70 mg/L); alum added in runs 1 to 5 (50 mg/L); secondary sludge added in runs 3 and 5 (10%); reaction time of 120 min in runs 1 to 3, and of 20 min in runs 4 and 5. For runs 2 and 3, head loss and turbidity were similarly reduced, Figure 10(a). Furthermore, run 3, with 10% of secondary sludge added, produced a slightly lower head loss than that of run 2. However, the turbidity observed from run 3 was present at a higher level, indicating that the residual particulate material arising from the sludge addition passed through the sand filter, Figure 10(b).
Nevertheless, the dissolved aluminum concentration observed in the filtrate of run 3 was the lowest (less than 0.02 mg/L), Figure 10(c). This indicates that the turbid material observed in run 3 might be ascribed predominantly to glass-particle-type materials contained in the sludge rather than to clay and bentonite, because dissolved aluminum was more efficiently removed by clay and bentonite, Figure 7(b). The extended reaction time of up to 120 min in run 3 contributed to a greater decrease of head loss, turbidity and dissolved aluminum concentration than the 20 min reaction time of run 5.

Figure 10. Evaluation of sand filtration without adding polymer: (a) head loss; (b) turbidity; (c) dissolved aluminum.

Ebie and Amano$^{10)}$ concluded that the level of dissolved aluminum could be more efficiently removed by adsorption on clay-type particles, which are in turn captured by the filter. No significant removal of UV$_{254}$ absorbed material among the five runs was found, as humic substances adsorbed on flocs can be readily released through the sand media during filtration.$^{9)}$ *Synedra acus* was removed by as much as 95% in run 3. It was removed by sweep coagulation rather than charge neutralization, as *Synedra acus* averages 300 $\mu$m in size, similar to that of alum flocs, which are more readily removed in the filtration run.$^{11)}$

**URC Pilot Test on Treating the Lake Water during a Rainfall Event**

Pilot tests were carried out under six different experimental conditions to treat rainwater run-off, Figure 11. Dosing conditions and reaction times were as follows: a 1:1:1 WCA mixture (i.e. glass particles, clay and bentonite) added in runs 2 to 6 (70 mg/L); alum dosed in runs 1 to 6 (50 mg/L); polymer added in runs 2 and 4 to 6 (0.2 mg/L); secondary sludge added in runs 4 and 6 (10%); reaction time of 120 min in runs 1 to 4, and of 20 min in runs 5 and 6. In general, rainwater runoff contains low alkalinity and high turbidity. The addition of polymer reduced turbidity to far below 1 NTU in runs 2, 4, 5, and 6. However, UV$_{254}$ absorbed materials did not vary under the different dosing conditions. Dissolved aluminum decreased the most in runs 4 and 6, which correspondingly reflected the results observed in Figures 7(b) and 10(c) for 10% added sludge. The level of dissolved aluminum was reduced by a factor of 3 in run 6 when compared to run 4, as polymer addition can further enhance dissolved aluminum removal during a short reaction time of 20 min. The extended reaction time of up to 120 min (run 4) may cause the polymer-derived flocs to break up, which consequently increases the release of dissolved aluminum from the flocs. This also correlates with the more positive zeta potential observed in run 6 when compared to run 4, Figure 11(a). The reaction time was thus very dependent upon the addition of the polymer: if a polymer was added, the reaction time should be shortened to achieve a high dissolved aluminum removal efficiency; in comparison, when the polymer was not added, the reaction time would need to be extended.

**CONCLUSIONS**

UV$_{254}$ absorbed matter was most reduced within the pH range of 5 to 6 by increasing the alum dosage, while dissolved aluminum was at its lowest at pH ranging from 6 to 7, which corresponds to its solubility minimum.
The addition of weighted coagulants did not significantly decrease UV$_{254}$ absorbed material. Of the three WCAs tested, bentonite reduced UV$_{254}$ absorbed material the most. The addition of clay reduced dissolved aluminum and turbidity by the greatest magnitude, up to 50%. The introduction of secondary sludge at 1,000 mg/L decreased the total aluminum concentration by up to 40%. Low concentrations of polymer (0.2 mg/L) improved turbidity and dissolved aluminum removal by factors of 5 and 2, respectively, while UV$_{254}$ absorbed matter was lowered by as much as 10%. Without polymer addition, secondary sludge reduced sand filter head loss to the greatest extent. It also resulted in up to 95% of *Synedra acus* being removed. A pilot scale URC process successfully removed dissolved aluminum at a rate six times faster than conventional processes. However, without polymer addition, the reaction time should be extended to allow sufficient time for the dissolved aluminum to react with the sludge. The introduction of secondary sludge and weighted coagulants can efficiently remove turbidity and dissolved aluminum over a given reaction time. However, the effective reduction of UV$_{254}$ required the proper addition of the food additive polymer.

**ACKNOWLEDGEMENT**

This work was partly supported by the Regional Research Center (RRC) program, the Ministry of Science and Technology (MOST) and the Korea Science and Engineering Foundation (KOSEF).

**REFERENCES**

1. Kwak, J. W., Principles and Application of Physicochemical Water Treatment, Seoul: Sam Publication (1998).
2. Water Quality Status (WQS), Office of Water Treatment and Management of Inchon Metropolitan City, Sam Publication, Korea (2000).
3. Srinivasan, P. T., Viraraghavan, T., and Subramanian, K. S., “Aluminum in Drinking Water,” *Water SA*, **25**, 47–55 (1999).
4. APHA/AWWA/WEF, Standard Methods for the Examination of Water and Wastewater, 19th ed., New York, USA (1995).
5. Vollenweider, R. A. and Kerekes, J., OECD Cooperative Programme on Monitoring of Inland Water, Synthesis Report (1980).
6. Murada, H., Advanced Treatment of Municipal Wastewater, Science and Engineering Publication, Japan (1992).
7. Park, S. J., Kim, C. G., and Yoon, T. I., “The Study of Rapid Coagulation Adding Weighted Coagulant Additives and Settled Sludges,” *J. of KSEE*, **24**, 1325–1338 (2002).
8. Ali, W., O’Melia, C. R., and Edzwald, J. K., “Colloidal Stability of Particles in Lakes: Measurement and Significance,” *Wat. Sci. Technol.*, **17**, 701–712 (1984).
9. Bose, P. and Reckhow, D. A., “Adsorption of Natural Organic Matter on Preformed Aluminum Hydroxide Flocs,” *J. Environ. Eng.*, **124**, 803–811 (1998).
10. Ebie, K. and Amano, S., “Fundamental Behavior of Humic Acid and Kaolin in Direct Filtration of Simulated Natural Surface Water,” *Wat. Sci. Technol.*, **27**, 61–70 (1993).
11. Chun, H. B., Lee, Y. J., Lee, D. J., and Lee, B. D., “Combination of Coagulants and Flocculent Optimally Removing Algae Blocked on Filter Media,” *Environ. Eng. Res.*, **27**, 61–70 (1998).
An Online Secret Sharing Scheme which Identifies All Cheaters

Chan Yeob Yeun*, Chris J. Mitchell, Mike Burmester
Information Security Group
Royal Holloway, University of London
Egham, Surrey TW20 0EX, UK
{c.yeun, c.mitchell, email@example.com

Abstract. A new scheme for computationally secure “online secret sharing” is presented, in which the shares of the participants can be reused. The security of the scheme is based on the intractability of factoring. The scheme has the advantage that it detects cheating and enables the identification of all cheaters, regardless of their number, improving on previous results by Pinch and by Ghodosi et al.

1 Introduction

A secret sharing scheme is a protocol in which a dealer distributes shares of a secret among a set of participants such that only sets of participants belonging to an access structure can recover the secret at a later time. Secret sharing schemes were independently invented in 1979 by Blakley [1] and Shamir [8]. In 1988, Tompa and Woll [9] demonstrated that Shamir’s original $(t,n)$ threshold scheme is vulnerable to cheating. That is, the last participant of an authorised set can always cheat the other participants during the reconstruction of the secret, without being detected. As a result the dishonest participant obtains the true secret while the other participants obtain a false one.

Cachin [2] proposed a protocol for online secret sharing for general access structures, in which all the shares are as short as the secret. The scheme provides the capability to share multiple secrets and to dynamically add or remove participants online, without having to redistribute new secret shares to current participants. These additional features are obtained by storing authentic (but not secret) information at a publicly accessible location such as a notice board. Pinch [6] pointed out that Cachin’s scheme does not allow the shares to be reused after the secret has been reconstructed without a further distributed computation, as in Goldreich et al. [4]. Pinch presented a protocol for online multiple secret sharing, based on the intractability of the Diffie-Hellman problem, in which the shares can be reused. Ghodosi et al. [3] pointed out that Pinch’s scheme is also vulnerable to cheating. They presented a modified version of Pinch’s protocol which detects and prevents cheating, under the assumption that a majority of the participants of the authorised reconstruction set are honest. However this scheme does not protect a minority of participants of the authorised set from a colluding majority, who may falsely accuse the minority of cheating.

We propose a computationally secure online secret sharing scheme which is based on the intractability of the factoring problem. Compared to Pinch’s scheme, and its modification by Ghodosi et al., our scheme has the following advantages: it detects cheating and enables the identification of all cheaters by an arbitrator, regardless of their number. The scheme does not rely on a “last participant” who reconstructs the secret on behalf of a minimal trusted set of participants: the responsibility is diffused among all participants. The proposed scheme has potential practical applications in situations where the participants, the access rules, or the secret itself frequently change. No new shares have to be distributed secretly when new participants join the system or participants leave.

* The author is supported by a Research Studentship and Maintenance Award from RHBNC.
Such situations often arise in key management, escrowed encryption systems, and so forth.

2 Preliminaries

A secret sharing scheme is a protocol involving a set $P = \{P_1, \ldots, P_n\}$ of participants and a dealer $D$, where $D \notin P$. Let $\Gamma \subset 2^P$ be an access structure. The dealer $D$ chooses a secret $K$ and distributes privately to each participant $P_i \in P$ a share $S_i$ of $K$ such that: (i) any authorised set $X \in \Gamma$ can reconstruct the secret $K$ from its shares, (ii) no unauthorised set $X \notin \Gamma$ can do so. Let $\Gamma^* \subset \Gamma$ be the set of minimal authorised sets, that is, of sets $X$ such that $Y \subseteq X$ and $Y \in \Gamma$ implies that $Y = X$.

Let $N = pq$ be the product of two large primes $p$ and $q$, and let $e$ ($1 < e < \phi(N)$) be chosen so that $(e, \phi(N)) = 1$, where $\phi(N) = (p-1)(q-1)$. The values $N$ and $e$ are public, and the values $p$, $q$ and $\phi(N)$ are secret. Throughout this paper we work within the multiplicative group of integers modulo $N$, and we shall assume that factoring $N$ is infeasible [7]. In the secret sharing schemes we describe below we shall make use of a one-way hash-function $f$ which is collision-resistant. For further information see Sections 9.2 and 9.7 of [5].

In order to identify all cheaters, every participant will use an agreed digital signature scheme, and must have selected a private/public key pair for this scheme. Moreover, every participant must have a means of obtaining a verified copy of the public signature verification key of every other participant. This could, for example, be provided by having a Trusted Third Party (e.g. the dealer, $D$) certify the public key of every participant, and having every participant distribute their certificate with every signed message they send.

3 A secret sharing protocol

We now present a new secret sharing protocol in which the participants of an authorised set compute the secret $K$ by combining their secret shares in encrypted form. In this way the participants do not reveal their secret shares during the process of recovering $K$. The protocol uses a publicly accessible location, e.g. a notice board, where the dealer can store non-forgeable information accessible to all participants. This location will, at least, indicate the number of participants $n$ and the access structure $\Gamma$.

The basic protocol to share the secret $K$ is as follows. First the dealer $D$ selects $N$ and $e$, and randomly chooses secret shares $S_i < N$, $1 \leq i \leq n$. Then $D$ transmits to each $P_i$ over a secure channel the share $S_i$, and securely stores $S_i$ for subsequent use to identify cheaters, if cheating is detected. For each minimal authorised set $X \in \Gamma^*$ the dealer $D$ uses $e$ and $N$ to compute
$$T_X = K \oplus f\Big(\prod_{x:\, P_x \in X} S_x^e \bmod N\Big),$$
where $\oplus$ denotes exclusive-or of bit-strings. The dealer $D$ posts the following items on the notice board: the four-tuple $(X, e, N, T_X)$ for every $X \in \Gamma^*$, and the value $f(K)$.
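To make this distribution phase concrete, the following is a minimal, illustrative sketch in Python. It is not part of the original paper: the primes are toy-sized (a real deployment requires $N$ large enough that factoring is infeasible), SHA-256 stands in for the collision-resistant hash $f$, and all names (`dealer_setup`, `ib`, `board`) are our own.

```python
import hashlib
import secrets

def ib(n: int) -> bytes:
    """Big-endian byte encoding of a non-negative integer."""
    return n.to_bytes((n.bit_length() + 7) // 8 or 1, 'big')

def f(data: bytes) -> int:
    """The collision-resistant one-way hash f (SHA-256 here), read as an integer."""
    return int.from_bytes(hashlib.sha256(data).digest(), 'big')

def dealer_setup(K, participants, gamma_star, p, q, e):
    """Choose random shares S_i < N and publish (X, e, N, T_X) for each
    minimal authorised set X, together with f(K)."""
    N = p * q
    # Shares are sent to participants over secure channels and also stored
    # by the dealer for later arbitration.
    shares = {i: secrets.randbelow(N - 2) + 2 for i in participants}
    board = {'fK': f(ib(K)), 'e': e, 'N': N, 'tuples': {}}
    for X in gamma_star:
        prod = 1
        for i in X:
            prod = prod * pow(shares[i], e, N) % N    # accumulate S_i^e mod N
        board['tuples'][X] = K ^ f(ib(prod))          # T_X = K xor f(...)
    return shares, board

# Toy example: three participants, minimal authorised sets {P1,P2} and {P2,P3}.
# p and q are small primes for illustration only; gcd(e, (p-1)(q-1)) = 1 holds.
shares, board = dealer_setup(K=0xCAFE, participants=[1, 2, 3],
                             gamma_star=[(1, 2), (2, 3)],
                             p=10007, q=10009, e=65537)
```

Note that the dealer publishes only $T_X$ and $f(K)$; the shares themselves never appear on the notice board.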
A minimal authorised set $X \in \Gamma^*$ of participants can compute $K$ by performing the following steps:

1. Each participant $P_i \in X$ reads $f(K)$ and the values $e, N, T_X$ from the four-tuple corresponding to the appropriate set $X$ on the notice board. Then $P_i$ computes $S_i^e \bmod N$ and signs the data $(S_i^e \bmod N, X, e, N)$ using his/her private signature key to form $s_{P_i} = \mathrm{sign}_{P_i}(S_i^e \bmod N \,||\, X \,||\, e \,||\, N)$, where $||$ denotes concatenation of data items. Finally, $S_i^e \bmod N$ and $s_{P_i}$ are sent by each participant $P_i$ to all the other participants in $X$.

2. Each participant $P_i \in X$ verifies all the signatures it has received, using the public keys of the senders, and then computes
$$V_X = \prod_{x:\, P_x \in X} S_x^e \bmod N.$$

3. Each participant $P_i \in X$ reads $T_X$ from the notice board and reconstructs $K$ as follows:
$$K = T_X \oplus f(V_X).$$

One can easily verify the completeness of the protocol: every authorised subset $X \in \Gamma$ will recover $K$.

A generalisation of this scheme can be used to share multiple secrets $K_h$, $h = 1, 2, \ldots, m$. It is possible to use the same one-way hash-function $f$ and the same set of secret shares $S_1, S_2, \ldots, S_n$ to share all the secrets $K_h$. Whenever a new secret $K_h$ is to be shared, the access structure may be different to that used for previous secrets, and hence we denote the access structure for secret $K_h$ by $\Gamma_h$. For each secret $K_h$ the dealer $D$ chooses a fresh pair $(e_h, N_h)$, where it is essential that $D$ chooses a distinct modulus $N_h$ for every secret $K_h$. For each $X \in \Gamma_h$ the dealer computes
$$T_{X,h} = K_h \oplus f\Big(\prod_{x:\, P_x \in X} S_x^{e_h} \bmod N_h\Big), \quad h = 1, 2, \ldots, m$$
and publishes the following items on the notice board:
$$(X, e_h, N_h, T_{X,h}) \text{ and } f(K_h), \quad h = 1, 2, \ldots, m.$$
The reconstruction of the secret is as before. The properties of well-chosen pairs $(e_h, N_h)$ and the function $f$ ensure that the reuse of the set of secret shares $S_1, S_2, \ldots, S_n$ does not leak any information which may be useful to cheaters and/or other malicious users.

4 Analysis of the protocol

The proposed protocol described in the previous section has the following properties.

4.1 How cheating may occur

In both the proposed protocol and its generalisation to multiple secrets it is possible for one of the participants to cheat the others in such a way that the cheater obtains the correct secret but the other participants do not. Suppose that participant $P_j$ contributes a fake encrypted share $S'$ instead of $S_j^e \bmod N$. Then every participant of the authorised set $X$ will compute $V_X$ incorrectly as $V'_X = S' \cdot \prod_{x \neq j:\, P_x \in X} S_x^e \bmod N$ instead of $V_X = \prod_{x:\, P_x \in X} S_x^e \bmod N$. However $P_j$, who knows $S_j^e \bmod N$, can calculate the correct value $V_X$, and hence the true secret $K$.

4.2 How to detect cheating

In the initialisation phase of the scheme, the dealer $D$ publishes $f(K_h)$ on the notice board for every secret $K_h$ that is being shared. Every participant, having reconstructed the secret ($K'_h$, say), can verify its validity by hashing it and comparing the resulting hashed value $f(K'_h)$ with the value on the notice board. If the verification fails, then most probably cheating has occurred in the protocol and thus the computed secret is not correct. This test detects cheating but does not identify the cheater(s); a sketch combining the reconstruction steps of Section 3 with this detection test is given below, after which we show how to identify all the cheaters.
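Continuing the illustrative sketch above (and reusing its `f`, `ib`, `shares` and `board`), the hypothetical `reconstruct` function below performs steps 1–3 of Section 3 together with the $f(K)$ test of this section; the signature exchange of step 1 is omitted for brevity.

```python
def reconstruct(X, received, e, N, T_X, fK):
    """Combine the encrypted shares S_i^e mod N received from the members
    of X, recover K = T_X xor f(V_X), and apply the f(K) detection test."""
    V_X = 1
    for c in received.values():
        V_X = V_X * c % N                  # V_X = product of S_i^e mod N
    K = T_X ^ f(ib(V_X))
    if f(ib(K)) != fK:                     # detection test of Section 4.2
        raise ValueError('cheating detected: f(K) mismatch')
    return K

X, e, N = (1, 2), board['e'], board['N']

# Honest run: every participant broadcasts the true value S_i^e mod N.
honest = {i: pow(shares[i], e, N) for i in X}
assert reconstruct(X, honest, e, N, board['tuples'][X], board['fK']) == 0xCAFE

# P2 substitutes a fake encrypted share: the hash test catches it (although,
# as Section 4.1 explains, P2 alone could still compute the correct V_X).
cheating = {**honest, 2: 12345}
try:
    reconstruct(X, cheating, e, N, board['tuples'][X], board['fK'])
except ValueError as err:
    print(err)
```

Since only the encrypted values $S_i^e \bmod N$ ever leave a participant, the same shares can safely be reused for further secrets under fresh pairs $(e_h, N_h)$.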
4.3 How to identify all cheaters

In the event of cheating having been detected by the method just described, the participants in the authorised set $X$ can appeal to the dealer $D$ to help discover the identity of the cheaters. Notice that the dealer will only be involved in arbitration after cheating has been detected, and will not need to be actively involved in the normal operation of the reconstruction phase of the scheme. In order to identify all cheaters, every participant $P_i \in X$ sends to the dealer the data received during execution of the protocol, signed with their private key. The dealer verifies the signed data received from each $P_i$, and compares the submitted value of $S_i^e \bmod N$ with that computed by using the stored value of the share $S_i$. If a submitted value is different from the calculated value, then most probably $P_i$ cheated. $P_i$ cannot claim to have been framed, since $D$ has $P_i$'s signature $s_{P_i}$ on $(S_i^e \bmod N \,||\, X \,||\, e \,||\, N)$. Therefore, the dealer will be able to identify all the parties who sent incorrect values during the protocol. This use of signatures also protects a minority of participants of an authorised set from a colluding majority who falsely accuse the minority of cheating.

5 Conclusion

We have presented a scheme which allows the reconstruction of an arbitrary number of secrets and provides the capability to dynamically add or remove participants online, by storing additional authentic (but not secret) information on the notice board, without having to redistribute new shares secretly to current participants. In addition, this scheme can be used in such a way that cheating by participants will be detected, in which case the participants of an authorised set $X$ can request help from the dealer $D$, who can always uniquely identify the cheaters.

6 Acknowledgements

The authors are grateful to Fred Piper for his support, and to Peter Wild and Karl Brincat for comments on an early draft of the paper.

References

1. G.R. Blakley. Safeguarding cryptographic keys. In *Proceedings of AFIPS National Computer Conference*, pp. 313–317, 1979.
2. C. Cachin. On-line secret sharing. In C. Boyd, editor, *Proceedings of the 5th IMA Conference on Cryptography and Coding*, pp. 190–198. Springer-Verlag, 1995.
3. H. Ghodosi, J. Pieprzyk, G.R. Chaudhry, and J. Seberry. How to prevent cheating in Pinch’s scheme. *Electronics Letters*, 33(17):1453–1454, 1997.
4. O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game or a completeness theorem for protocols with honest majority. In *Proceedings of 19th ACM Symposium on the Theory of Computing*, pp. 218–229, 1987.
5. A. Menezes, P. van Oorschot, and S. Vanstone. *Handbook of Applied Cryptography*. CRC Press, 1996.
6. R.G.E. Pinch. Online multiple secret sharing. *Electronics Letters*, 32(12):1087–1088, 1996.
7. R.L. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital signatures and public key cryptosystems. *Communications of the ACM*, 21:120–126, 1978.
8. A. Shamir. How to share a secret. *Communications of the ACM*, 22:612–613, 1979.
9. M. Tompa and H. Woll. How to share a secret with cheaters. *Journal of Cryptology*, 1:133–138, 1988.
To,
The Commissioner, Customs,
Ahmedabad, Jamnagar, Kandla, Mundra

Sir,

Sub: Circulation of letters for deputation – reg.

Please find enclosed herewith the following letters regarding deputation for various posts, for information and further necessary action at your end please.

| Sr. No. | Subject | Received from |
|--------|-------------------------------------------------------------------------|-------------------------------------------------------------------------------|
| 1 | Preparation of panel for selection for the post of Superintendent (Group B) on deputation in the Directorate of Legal Affairs, New Delhi. Vide F.No. 1080/78/DLA/Admn./11 dated 25.02.2014. | (Shri Ashok Kumar Sagar) Additional Commissioner (Admn), Directorate of Legal Affairs, New Delhi. |
| 2 | Preparation of panel for the post of Sr. Intelligence Officer on deputation basis in the Directorate General of Export Promotion, Delhi – reg. Vide F.No. DGEP/Admn/70/2011 dated 11.02.2015. | (Dr. Tejpal Singh) Directorate General of Export Promotion, New Delhi. |
| 3 | Filling up one post of Joint Commissioner in the Office of the Competent Authority and Administrator, Delhi on deputation basis – regarding. Vide F.No. A. 35017/09/2015-Ad.II dated 28.01.2015. | (Shri Jai Prakash Sharma) Under Secretary to the Government of India, CBEC, New Delhi. |
| 4 | Filling up the post of Intelligence Officer at CEIB. Vide F.No. A.12026/1/2015-CEIB dated 06.02.2015. | (Shri R.K. Mahajan) Joint Secretary & DDC, CEIB, New Delhi. |

Encl: Copy of letter listed at Sr. No. 2 only, as the other letters are available at www.cbec.gov.in/Vacancy.

Subject: Filling up the post of Intelligence Officer in CEIB.

The Central Economic Intelligence Bureau proposes to fill up 13 posts of Intelligence Officer from amongst the eligible officers on deputation basis for a period of three years initially, in the Pay Band of Rs. 9300-34800 and Grade Pay Rs. 4600/-.

2. The Central Economic Intelligence Bureau is the nodal agency on economic intelligence for coordinating and strengthening the intelligence gathering activities and enforcement action by various agencies concerned with investigation into economic offences and enforcement of economic laws. Intelligence Officers (I.O.s) constitute an important part of the work force of the organization. The Intelligence Officer would be entitled to deputation allowance, as admissible.

3. It is requested that the vacancy status, along with the enclosed details of the post, may be circulated and also placed on the website for wide publicity. The details of willing Inspectors or officers holding analogous posts, along with their ACRs for the last five years, integrity certificate and certificate that no minor/major penalty has been imposed during the last 10 years, may be sent to this Bureau at the earliest for necessary action at our end.

With regards,
Yours sincerely,
(R.K. Mahajan)

Encl: a/a.

All Chief Commissioners of Customs & Central Excise (CBEC).
All Chief Commissioners of Income Tax (All) (CBDT).
Director General, Border Security Force, B-10, CGO Complex, Lodi Road, New Delhi.
Director General, Central Reserve Police Force, Block No. 1, CGO Complex, Lodi Road, New Delhi-3.
Director General, Central Industrial Security Force, B-13, CGO Complex, Lodi Road, New Delhi-3.
Director General, Indo Tibetan Border Police, Block 2, CGO Complex, Lodi Road, New Delhi-3.

Copy to:
1. Director NIC, Department of Revenue, North Block, New Delhi.
2.
2. DG, System & Data Management, 4th Floor & 5th Floor, Samrat Hotel, Chanakyapuri, New Delhi, with a request to upload the circular on the CBEC website.
3. C.H (Coordination), Office of the Chairman, CBDT, North Block, New Delhi, with a request to upload the circular on the CBDT website.
4. Commissioner (Coord.), Office of the Chairman, CBEC, North Block, New Delhi, with a request to upload the circular on the CBEC website.

## SERVICE PARTICULARS OF INTELLIGENCE OFFICERS IN THE CENTRAL ECONOMIC INTELLIGENCE BUREAU

### Details of Post

| 1. | Name of the post | Intelligence Officer |
|----|------------------|----------------------|
| 2. | Classification | General Central Services, Group ‘B’ Non-Gazetted, Non-Ministerial. |
| 3. | Duty station | New Delhi. |
| 4. | Pay Band + Grade Pay | PB-2 Rs.9300-34800 plus Grade Pay Rs.4600/- + deputation allowance. Officials who have been granted ACP/MACP may opt for Grade Pay of Rs.4600/- and deputation allowance, or Grade Pay of Rs.4800/- without deputation allowance. |
| 5. | Mode of recruitment | Transfer on deputation of: 1. Inspectors/Preventive Officers of Customs and Central Excise and Income Tax cadres holding analogous posts on a regular basis in the parent cadre/department, having at least four years experience; or 2. Officers holding analogous posts in CPOs such as IB, CBI etc., SEBI, Ministry of Company Affairs, DGFT, Ministry of Information & Technology etc., with four years experience. |
| 6. | Period of Deputation | Not exceeding three years. (Period of deputation including period of deputation in another ex-cadre post held immediately preceding this appointment in the same or some other Organization/Deptt. of the Central Govt. shall ordinarily not exceed three years.) |
| 7. | Pay | The pay of the selected officers will be regulated in accordance with DOP&T’s O.M. No.2/12/87-Estt. (Pay-II) dated 29.4.1988, as amended from time to time. |

1. Post applied for :
2. Name of the applicant :
3. Date of entry in Govt. Service :
4. Present post held :
5. Date of appointment in the grade: Ad hoc / Regular / ACP/MACP
6. Present pay scale :
7. Experience :
8. Educational qualification :
9. Date of return from ex-cadre post, if any :
10. Brief service particulars :
11. Whether SC/ST :

SIGNATURE OF THE APPLICANT

Certificate by parent office:
1. The information furnished by the candidate has been verified from records and is found to be correct.
2. The applicant is not in the promotion zone in the next three years.
3. No vigilance or disciplinary case or any other dispute is pending against the candidate.
4. Original/photocopies of the ACRs of the candidate for the last 5 years are enclosed/being sent separately.
5. The candidate will be relieved within 15 days of the receipt of the letter of his appointment on deputation.

SIGNATURE

To
All Chief Commissioners/Directors General under Central Board of Excise and Customs

Subject: Filling up of one post of Joint Commissioner in the Office of the Competent Authority and Administrator, Delhi on deputation basis – regarding.

Sir/Madam,

The Revenue Headquarters vide their letter F.No.A-12026/1/2015-SO(CA) dated 21.01.2015 has invited applications for filling up of one post of Joint Commissioner in the Office of the Competent Authority and Administrator, Delhi on deputation basis (copy enclosed).

2. It is requested to circulate it among the eligible officers under your charge, and duly filled in applications of willing/eligible officers may be sent through proper channel to the Board, latest by 2nd March, 2015.
Yours faithfully,
(Jai Prakash Sharma)
Under Secretary to the Government of India
Tel: 23095520
Encl: As above

Copy to: The Website Manager, Directorate of Systems, New Delhi, with the request to put the above circular on the Department’s website.

VACANCY CIRCULAR

It is proposed to fill up one vacancy of Joint Commissioner in the Office of the Competent Authority and Administrator, Delhi on deputation basis.

2. As per the Recruitment Rules, the post of Joint Commissioner in the Office of the Competent Authority is classified as General Central Service, Group ‘A’ Gazetted, Non-Ministerial. The pay scale is Rs.15600-39100 with grade pay Rs.7600. The method of recruitment is by deputation. Grades from which deputation is to be made are: Officers under the Central Government: (i) holding analogous posts on a regular basis in the parent cadre or department; or (ii) with five years’ service in the grade rendered after appointment thereto on a regular basis in the scale of pay Rs.15600-39100 with grade pay Rs.6600 or equivalent in the parent cadre or department; and (iii) possessing 10 years’ experience in enforcement of regulatory laws or investigation of offences and collection of intelligence relating thereto. (Period of deputation including the period of deputation in another ex-cadre post held immediately preceding this appointment in the same or some other organization or department of the Central Government shall ordinarily not exceed four years. The maximum age limit for appointment by deputation shall not exceed 56 years as on the closing date of receipt of applications.)

3. It is, therefore, requested that this vacancy may be circulated in your organization and applications of officers fulfilling the above mentioned eligibility criteria may be sent to the undersigned along with the following documents: (i) Bio-data in the prescribed proforma enclosed; (ii) Cadre Clearance; (iii) Vigilance Clearance; (iv) Agreed List status (wherever applicable); (v) History of postings; (vi) No Penalty Certificate for the last 10 years; (vii) Certified copies of the ACRs for the last 5 years.

4. The last date of receipt of applications is 25th March, 2015. Further information is available at the website: cadelhi.gov.in

Encl: As above

(S.Bhowmick)
Under Secretary to the Govt. of India
Tel. No.23095359.

To
1. Shri B.K.Jha, DGIT(HRD), CBDT, ICADR Building, Plot No.6, Vasant Kunj Institutional Area Phase II, New Delhi-110070.
2. Shri Lok Ranjan, Joint Secretary (Admn.), CBEC, Deptt. of Revenue, New Delhi.
3. All Ministries/Departments.
4. Enforcement Directorate, 6th Floor, Lok Nayak Bhavan, Khan Market, New Delhi-110003.
5. Central Bureau of Narcotics, Narcotics Commissioner of India, 19, The Mall, Morar, Gwalior-474006.
6. Competent Authority, Delhi, with the request to upload this circular on its website.
7. Competent Authorities, Chennai, Kolkata, Mumbai.
8. Registrar, Appellate Tribunal for Forfeited Property, Delhi.
9. US(Ad.II), CBEC, with the request to forward the applications to CA Cell, alongwith cadre and vigilance clearance and ACRs for the last 5 years etc.
10. US(Ad.VI), CBDT, with the request to forward the applications to CA Cell, alongwith cadre and vigilance clearance and ACRs for the last 5 years etc.
11. Director (NIC) for hosting the vacancy circular on the website of Deptt. of Revenue.
12. Section Officer (CC) for similar action.
13. Directorate of Revenue Intelligence, D Block, I.P.Bhavan, I.P.Estate, New Delhi-110002.
14. Narcotics Control Bureau, Ministry of Home Affairs, North Block, New Delhi.
15. Director General, Border Security Force, Block No.1, CGO Complex, Lodhi Road, New Delhi.
16. Director General, Central Reserve Police Force, Block No.1, CGO Complex, Lodhi Road, New Delhi.
17. Director General, Central Industrial Security Force, CGO Complex, Lodhi Road, New Delhi.
18. Director General, Assam Rifles, Shillong-10, through LOAR, Room No.171, North Block, New Delhi.
19. Director General, Indo Tibetan Border Police, Block No.2, CGO Complex, New Delhi.
20. Director General, Sashastra Seema Bal, East Block V, R.K.Puram, New Delhi.
21. Director General, National Security Guard, Mehram Nagar, near Domestic Airport, New Delhi-110037.

(S.Bhowmick)
Under Secretary to the Govt. of India
Tel. No. 23095359.

13. Name of the Applicant:
14. Date of Birth:
15. Date of Retirement:
16. Educational Qualification:
17. Present post held, and date from which held:
18. Scale of pay, Basic Pay and Grade Pay:
19. Experience in the subject field:
20. Brief service particulars:
21. Nature of duties performed, in brief:
22. History of Posting:
23. Whether belongs to SC/ST:
24. Remarks/Any other information:

Signature of the applicant, with date
Tel/Fax No. (Office) (Mobile)

Certificate by Parent Office: The information furnished by the candidate has been verified from the records and is found correct.

Signature
With rubber stamp

[Signature]

To
The Chief Commissioner of Central Excise & Service Tax (All),
The Chief Commissioner of Customs (All),
The Director General (All),

Sir/Madam,

Sub: Preparation of panel for the post of Sr. Intelligence Officer on deputation basis in the Directorate General of Export Promotion, Delhi - reg.

It is proposed to draw a panel of suitable and eligible officers to fill up a vacancy of Sr. Intelligence Officer (Group 'B') in the office of the Directorate General of Export Promotion, New Delhi.

2. This post will be filled up on deputation basis from amongst officers of similar rank (i.e. Superintendent/Appraiser) working in the formations of Customs and Central Excise, or officers holding analogous posts in the Directorates General/Directorates under CBEC in the same pay scale. The deputation allowance will be paid as per the instructions of the Govt. from time to time.

3. The posting of an officer selected for the Directorate General of Export Promotion would normally be for a period of three years, extendable by another two years by the Director General of Export Promotion.

4. It is requested that this vacancy and communication may kindly be circulated among officers in your jurisdiction, and the applications of interested and eligible officers below the age of 56 years may be forwarded to this office on or before 10th March, 2015, with their full bio-data, including their history of postings, alongwith ACRs for the last 5 years, Vigilance Clearance and No Objection Certificate.

5. In case, while working in the Directorate General of Export Promotion, the work and conduct of the officer is not found to be satisfactory, the officer can be repatriated to the parent Commissionerate before completion of the aforesaid deputation period.

Yours faithfully,
(Deeptpal Singh)
Addl. Director General

F.NO. 1080/78/DLA/Admn./12 /160

To,
1. All Chief Commissioners of Customs/Central Excise
2. All Directors General under CBEC
3. All Commissioners of Central Excise/Customs/Service Tax

Sir/Madam,

Subject: Preparation of panel for selection for the posts of Superintendent (Group B) on deputation in the Directorate of Legal Affairs, New Delhi.

It is proposed to draw a panel of suitable and eligible officers for the posts of Superintendent, in the scale of pay of Rs.9300-34800 with GP Rs.4800/- or above in PB-2, to be filled up on deputation basis from amongst officers working in the grade of Superintendent, for posting in the Directorate of Legal Affairs, New Delhi.

2. The period of deputation for the selected person would ordinarily be three years. The officers on deputation will be entitled to a Special Pay (Allowance) @ 5%/10% of their basic pay during the tenure of their posting as Superintendent in this Directorate. Cadre control for the purpose of promotion etc. of such staff would continue with the parent cadre controlling Commissionerate to which the deputed person belongs. Requests for further deputation/repatriation will normally not be entertained before completion of two years in this Directorate. The Directorate of Legal Affairs, however, reserves the right to revert the officers at any time without assigning any reason.

3. It is requested that this letter may kindly be widely circulated, and applications from interested Superintendents under your jurisdiction forwarded to this Directorate. The applications may be made as per the enclosed format 'A', along with ACR gradings of the last five years, vigilance certificate and a no objection to release the officer in the event of his appointment on deputation basis. Selection may be made on a first come, first served basis from amongst suitable candidates.

4. The circular is valid for only ten days, subject to availability of suitable candidates, and the decision of this Directorate in this regard shall be final & binding, and no correspondence in this regard shall be entertained. This issues with the approval of the Commissioner, Directorate of Legal Affairs, New Delhi.

Yours faithfully,
(Ashok Kumar Sagar)
Assistant Commissioner (Admn.)
Encl: As above

Application form for deputation in the Directorate of Legal Affairs:
1. Name:
2. Designation:
3. Date of Birth:
4. Name of the parent Commissionerate:
5. Present place of posting:
6. Educational qualification:
7. History of Postings: (please attach a separate sheet of paper if the space provided is not adequate)

| S.No | Commissionerate/Directorate | Station | Section/Charges held | Date: From | Date: To | Remarks, if any |
|------|-----------------------------|---------|----------------------|------------|----------|-----------------|
| 1 | 2 | 3 | 4 | 5(A) | 5(B) | 6 |

8. Self-appraisal of suitability for this post:

Verified from the Service Book of the officer concerned

Signature of the officer
Signature & Stamp of the verifying officer
1 Introduction

At a big conference in Wisconsin in 1948, attended by many famous economists and mathematicians, George Dantzig gave a talk related to linear programming. When the talk was over and it was time for questions, Harold Hotelling raised his hand in objection and said “But we all know the world is nonlinear,” and then he sat down. John von Neumann replied on Dantzig’s behalf: “The speaker titled his talk ‘linear programming’ and carefully stated his axioms. If you have an application that satisfies the axioms, well, use it. If it does not, then don’t.” [Dan02]

In this lecture, we will see some examples of problems we can solve using linear programming. Some of these problems are expressible as linear programs, and therefore we can use a polynomial-time algorithm to solve them. However, there are also some problems that cannot be captured by linear programming in a straightforward way; as we will see, linear programming is still useful for solving them or “approximating” a solution to them.

2 Maximum Flow Problem

The Max ($s$-$t$) Flow Problem is an example of a problem that can be expressed exactly as a linear program. The input is a directed graph $G = (V, E)$ with two special vertices, a “source” $s \in V$ and a “sink” $t \in V$. Moreover, each edge $(u, v) \in E$ has a “capacity” $c_{uv} \in \mathbb{Q}_{\geq 0}$. In Figure 2, we can see an example of such a graph. Capacities represent the maximum amount of flow that can pass through an edge. The objective of the problem is to route as much flow as possible from $s$ to $t$ [Wik13a]. The maximum flow problem was first studied in 1930 by A.N. Tolstoý as a model of Soviet railway traffic flow [Sch02] (e.g. nodes are cities, edges are railway lines, there is a cement factory at $s$, there is a plant that needs cement at $t$, and capacities represent how much cement a railway can ship in one day, in some units).

For every node $v \not\in \{s, t\}$, the incoming flow to $v$ must be equal to the outgoing flow from $v$. This is called the *flow conservation* constraint. Considering this, we can express the problem as a linear program as follows:

\[
\max_f \quad \sum_{v : s \rightarrow v} f_{sv} - \sum_{u : u \rightarrow s} f_{us}
\]

\[
\text{s.t.} \quad \sum_{u : u \rightarrow v} f_{uv} = \sum_{w : v \rightarrow w} f_{vw}, \quad \forall v \neq s, t \quad \text{(flow conservation)}
\]

\[
0 \leq f_{uv} \leq c_{uv}, \quad \forall (u, v) \in E \quad \text{(capacity constraints)}
\]

Now it’s easy to see that any feasible solution to this program is a feasible flow for the graph. In Figure 3, we can see an optimal solution to the example of Figure 2, where the labels on the edges are of the form $f_{uv}/c_{uv}$. The fact that Max Flow can be expressed as a linear program tells us that it can be solved in polynomial time; and since there are efficient algorithms for linear programming, it can also be solved efficiently in practice. We also mention that there are many other efficient algorithms for Max Flow, such as the Ford–Fulkerson algorithm, which is more combinatorial in nature, but the approach above better shows the power of linear programming. (A small solver sketch of this LP is given below.)

3 Maximum Perfect Matching in Bipartite Graphs

This problem can be solved via a reduction to Max Flow, but for illustrative purposes we will take a different approach here. In this problem we have a bipartite graph $G = (U, V, E)$ with $|U| = |V| = n$, and each edge $\{u, v\} \in E$ has a weight $w_{uv} \in \mathbb{Q}$. The objective is to find a perfect matching of maximum weight.
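Before developing the matching problem further, here is the promised sketch for Max Flow (an illustration added to these notes, not part of the original lecture), which hands the LP of Section 2 to an off-the-shelf solver, `scipy.optimize.linprog`. The five-edge graph is a hypothetical stand-in for the example of Figure 2, which is not reproduced here.

```python
# Max Flow as an LP, solved with scipy's linprog (linprog minimizes,
# so we negate the objective). Hypothetical example graph: s, a, b, t.
from scipy.optimize import linprog

edges = [("s", "a", 3), ("s", "b", 2), ("a", "b", 1), ("a", "t", 2), ("b", "t", 3)]
internal = ["a", "b"]  # every node except s and t

# One variable f_uv per edge, with capacity bounds 0 <= f_uv <= c_uv.
bounds = [(0, cap) for _, _, cap in edges]

# Objective: maximize net flow out of s, i.e. minimize its negative.
c = [-1 if u == "s" else (1 if v == "s" else 0) for u, v, _ in edges]

# Flow conservation at every internal node: inflow - outflow = 0.
A_eq = [[(v == n) - (u == n) for u, v, _ in edges] for n in internal]
b_eq = [0] * len(internal)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("max flow value:", -res.fun)  # 5.0 on this instance
print("edge flows:", dict(zip([(u, v) for u, v, _ in edges], res.x)))
```

On this instance the optimum is 5, achieved by saturating the two edges entering $t$.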
Without loss of generality, we can assume that a perfect matching exists in the graph: if one doesn’t exist, we can add some edges with weights $-\infty$ (or 0 if all the weights are non-negative) in order to create one. We can imagine the vertices in $U$ as $n$ people, the vertices in $V$ as $n$ jobs, and the weight of an edge $\{u, v\}$ as the ability of person $u$ to do job $v$. Our goal is then to assign one job to each person in the best possible way.

This problem cannot be directly transformed into a linear program. Instead, we can write it as an integer linear program (ILP). An integer linear program is like a linear program (LP), but with additional constraints requiring some of the variables to be integers. Suppose we have an indicator variable $x_{uv}$ for every edge $\{u, v\} \in E$, such that $x_{uv} = 1$ if $\{u, v\}$ belongs to the perfect matching, and $x_{uv} = 0$ otherwise. Then, we can write the problem as an integer linear program (ILP) as follows:

$$\max_x \quad \sum_{\{u,v\} \in E} w_{uv} x_{uv}$$

s.t.

$$\sum_{v : u \sim v} x_{uv} = 1, \quad \forall u \in U \quad \text{(every person has a job)}$$

$$\sum_{u : u \sim v} x_{uv} = 1, \quad \forall v \in V \quad \text{(every job is assigned to someone)}$$

$$x_{uv} \in \{0, 1\}, \quad \forall \{u, v\} \in E \quad \text{(integer constraints)}$$

Now suppose that in the above program we drop the integer constraints of the form $x_{uv} \in \{0, 1\}$ and replace them with the linear constraints $0 \leq x_{uv} \leq 1$, $\forall \{u, v\} \in E$. Then we get a linear program (LP), which we can solve in polynomial time. This procedure is called a relaxation of the integer program.

**Relaxation Facts:**

- If the LP is infeasible, then the ILP is infeasible.
- If the LP is feasible, then either the ILP is infeasible, or the ILP is feasible and $\text{LP}_{OPT} \geq \text{ILP}_{OPT}$ (in the case of a maximization problem).

The above two facts hold because, as we can see from the corresponding programs, every feasible solution to the ILP is a feasible solution to the LP. The second fact is useful because it gives an upper bound on the optimal solution we are searching for. Therefore, one approach is to use some heuristic to solve our problem, and if the objective is close to the upper bound, then we know that we are “near” the optimal solution. However, in this case even more is true, something that does not hold in general for every relaxation. More specifically, we have the following theorem.

**Theorem 3.1.** All extreme points of the LP are integral.

**Proof.** We will prove the contrapositive: if $\tilde{x}$ is a feasible, non-integral solution to the LP, then it is not an extreme point, i.e. we can write $\tilde{x} = \frac{1}{2}x^+ + \frac{1}{2}x^-$ for two distinct feasible solutions $x^+$ and $x^-$. Since $\tilde{x}$ is non-integral, there is some $\tilde{x}_{uv}$ that is non-integral. The sum of the values on the edges incident to $v$ is 1, therefore there is at least one other edge incident to $v$ that is non-integral, say $\{v, z\}$. Then look at vertex $z$: again, there is at least one other edge incident to $z$ that is non-integral. This procedure can’t go on forever, so at some point we close a cycle of non-integral edges. Therefore there exists $\epsilon > 0$ such that $\epsilon < \tilde{x}_{uv} < 1 - \epsilon$ for every $\{u, v\}$ on the cycle. The graph is bipartite, therefore the cycle is even. Add $\epsilon$ to all odd edges along the cycle, and $-\epsilon$ to all even edges.
The sums at the vertices remain the same along the cycle, therefore you get a feasible solution $x^+$. Similarly, if you add $\epsilon$ to all even edges along the cycle and $-\epsilon$ to all odd edges, you get a feasible solution $x^-$. The solutions $x^+$ and $x^-$ are distinct, since $\epsilon > 0$, and it holds that $\tilde{x} = \frac{1}{2} x^+ + \frac{1}{2} x^-$, because whenever we add something to an edge in $x^+$, we subtract the same thing in $x^-$.

**Observation 3.2.** At least one of $x^+, x^-$ in the proof has objective at least as good as $\tilde{x}$.

Suppose you solve the LP with an LP solver and it returns an optimal solution that is non-integral (i.e. not a vertex of the polytope). The proof above gives an algorithm to convert this solution into an optimal integral solution. Therefore, the theorem implies that the problem can be solved in polynomial time.

**Fact 3.3.** The “integrality” property of the theorem holds for any linear program of the form:

$$\max / \min \quad c \cdot x$$
$$\text{s.t.} \quad b' \leq Mx \leq b,$$
$$\ell \leq x \leq u,$$

where $b', b, \ell, u$ are integer vectors (possibly $\pm \infty$), and $M$ is totally unimodular. A matrix is called totally unimodular if all of its square submatrices have determinant $+1$, $-1$, or $0$.

4 Minimum Vertex Cover

In the Min Vertex Cover problem, the input is a graph $G = (V, E)$ with a cost $c_v \geq 0$ for every vertex $v \in V$. A *vertex cover* is a set of vertices $S \subseteq V$ such that all edges in $E$ have at least one endpoint in $S$. The goal is to find a vertex cover $S$ in the graph that minimizes the quantity $c(S) = \sum_{v \in S} c_v$.

The Min Vertex Cover problem is NP-hard. Therefore, we can’t expect to solve it to optimality using linear programming. However, as before, we can express it as an integer linear program. Let $x_v$ be an indicator variable that is 1 if $v$ belongs to the vertex cover, and 0 otherwise. Then, we can write the problem as follows:

\[
\begin{align*}
\min_x & \quad \sum_{v \in V} c_v x_v \\
\text{s.t.} & \quad x_u + x_v \geq 1, \quad \forall \{u, v\} \in E \quad (\text{every edge is covered}) \\
& \quad x_v \in \{0, 1\}, \quad \forall v \in V.
\end{align*}
\]

As before, we can relax this integer program by replacing the constraints $x_v \in \{0, 1\}$ with the constraints $0 \leq x_v \leq 1$, to get a linear program (LP). We know that $\text{LP}_{OPT} \leq \text{OPT}$. Suppose we solve the LP to optimality, and we get back an optimal LP-feasible solution $x^*$ (often called a “fractional solution”).

**Example 4.1.** Let $G$ be a $K_3$ with three vertices $u, v, w$, and costs $c_u = c_v = c_w = 1$. The optimal vertex cover consists of two vertices, therefore $\text{OPT} = 2$. The optimal fractional solution is to assign $\frac{1}{2}$ to every vertex, i.e. $x_u = x_v = x_w = \frac{1}{2}$, therefore $\text{LP}_{OPT} = \frac{3}{2}$. In general, if $G$ were a clique $K_n$, the optimal vertex cover would have cost $\text{OPT} = n - 1$, but $\text{LP}_{OPT} \leq \frac{n}{2}$, since we can always assign $\frac{1}{2}$ to every vertex. This means that there is a gap of a factor of about 2 between the optimal integral solution and the optimal fractional solution.

*LP rounding* is the name for any procedure that takes an LP-feasible solution and converts it into an actual ILP-feasible solution of almost as good quality. For the Vertex Cover problem, say $\widetilde{x}$ is a feasible LP solution. Define $S = S_{\widetilde{x}} = \{v : \widetilde{x}_v \geq \frac{1}{2}\}$.
**Fact 4.2.** $S$ is a valid vertex cover.

*Proof.* For every edge $\{u, v\} \in E$, it holds that $\widetilde{x}_u + \widetilde{x}_v \geq 1$. This means that at least one of $\widetilde{x}_u$ and $\widetilde{x}_v$ is at least $\frac{1}{2}$, therefore at least one of $u$ and $v$ belongs to $S$. \qed

**Fact 4.3.** $\text{cost}(S_{\widetilde{x}}) \leq 2 \cdot \text{cost}_{LP}(\widetilde{x})$

*Proof.* It holds that $\text{cost}_{LP}(\widetilde{x}) = \sum_{v \in V} c_v \widetilde{x}_v \geq \sum_{v \in S} c_v \widetilde{x}_v \geq \sum_{v \in S} \frac{1}{2} c_v = \frac{1}{2} \text{cost}(S)$ ($\geq \frac{1}{2} \text{OPT}$). \qed

Therefore, we have that

\[
\frac{1}{2} \text{OPT} \leq \text{LP}_{OPT} \leq \text{OPT},
\]

and the factor in the first inequality is the best possible, due to the clique we saw in Example 4.1. The above rounding procedure gives a poly-time algorithm that finds a solution with value at most $2\,\text{LP}_{OPT} \leq 2\,\text{OPT}$; therefore, this is a 2-approximation algorithm for Vertex Cover. It is also known that if the unique games conjecture is true, then Vertex Cover cannot be approximated within any constant factor better than 2 [Wik13b].

5 Duality

Suppose you have a linear program of the form:

\[
\begin{align*}
\max_x & \quad c \cdot x \\
\text{s.t.} & \quad a^{(1)} \cdot x \leq b_1, \\
& \quad a^{(2)} \cdot x \leq b_2, \\
& \quad \ldots \ldots \\
& \quad a^{(m)} \cdot x \leq b_m.
\end{align*}
\]

Let $\lambda_1, \lambda_2, \ldots, \lambda_m \geq 0$ be numbers such that if you multiply the $i$-th constraint by $\lambda_i$ and add all the constraints together, you get $c \cdot x \leq \beta$ for some number $\beta$. Then you know that $\beta$ is an upper bound on the optimal value of the linear program. Therefore, you want to find $\lambda_i$’s that achieve the minimum possible $\beta$. In other words, you want to solve the following linear program:

\[
\begin{align*}
\min_\lambda & \quad \sum_{i=1}^m b_i \lambda_i \\
\text{s.t.} & \quad \lambda_1 a_{11} + \lambda_2 a_{21} + \ldots + \lambda_m a_{m1} = c_1, \\
& \quad \lambda_1 a_{12} + \lambda_2 a_{22} + \ldots + \lambda_m a_{m2} = c_2, \\
& \quad \ldots \ldots \\
& \quad \lambda_1 a_{1n} + \lambda_2 a_{2n} + \ldots + \lambda_m a_{mn} = c_n, \\
& \quad \lambda_1, \lambda_2, \ldots, \lambda_m \geq 0.
\end{align*}
\]

This is called the dual linear program. Farkas’ Lemma, which we saw in the previous lecture, implies that if the original linear program (the primal) is feasible, then the primal and the dual have the same value (which may be $\pm \infty$).

**Rule of Life:** If you have an LP, you should take its dual and try to “interpret” it.

To see an example of what this means, let’s go back to the Max Flow problem. If we take its linear program, form the dual, and clean it up a little bit (e.g. if there is a constraint saying something $\leq b$ and another saying the same thing $\geq b$, we convert them into a single constraint saying something $= b$), we get the following dual linear program:

\[
\begin{align*}
\min_{\lambda, \mu} & \quad \sum_{(u,v) \in E} c_{uv} \lambda_{uv} \\
\text{s.t.} & \quad \mu_s = 1, \\
& \quad \mu_t = 0, \\
& \quad \lambda_{uv} \geq 0, \quad \forall (u,v) \in E, \\
& \quad \lambda_{uv} \geq \mu_u - \mu_v, \quad \forall (u,v) \in E,
\end{align*}
\]

where the variables $\lambda_{uv}$ correspond to the capacity constraints of the Max Flow LP, and the variables $\mu_v$ correspond to the flow conservation constraints.
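As a sanity check on this dual (again a sketch added to these notes, not part of the original lecture), the code below solves it on the same hypothetical five-edge graph used for the primal in Section 2; the constraints $\mu_s = 1$ and $\mu_t = 0$ are imposed through the variable bounds.

```python
# The dual of the Max Flow LP: minimize sum of c_uv * lambda_uv subject to
# lambda_uv >= mu_u - mu_v, lambda_uv >= 0, mu_s = 1, mu_t = 0.
from scipy.optimize import linprog

edges = [("s", "a", 3), ("s", "b", 2), ("a", "b", 1), ("a", "t", 2), ("b", "t", 3)]
nodes = ["s", "a", "b", "t"]
nE, idx = len(edges), {n: i for i, n in enumerate(nodes)}

# Variable vector: [lambda_uv for each edge] + [mu_v for each node].
cost = [cap for _, _, cap in edges] + [0.0] * len(nodes)

# Rewrite lambda_uv >= mu_u - mu_v as -lambda_uv + mu_u - mu_v <= 0.
A_ub, b_ub = [], []
for i, (u, v, _) in enumerate(edges):
    row = [0.0] * (nE + len(nodes))
    row[i] = -1.0
    row[nE + idx[u]] += 1.0
    row[nE + idx[v]] -= 1.0
    A_ub.append(row)
    b_ub.append(0.0)

# lambda_uv >= 0; mu is free, except mu_s = 1 and mu_t = 0.
bounds = [(0, None)] * nE + [(1, 1), (None, None), (None, None), (0, 0)]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("dual optimum:", res.fun)             # 5.0 = the max flow value
print("mu:", dict(zip(nodes, res.x[nE:])))  # comes out 0/1 on this instance
```

The optimal value equals the maximum flow of the primal, and the optimal $\mu$ comes out 0/1 here, which hints at the combinatorial interpretation that follows.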
This dual linear program is the natural LP relaxation of the ILP for the Min ($s$-$t$) Cut problem. In this problem, we want to find a set of vertices $S \subseteq V$ such that $s \in S$ and $t \not\in S$, which minimizes the quantity $\sum_{u \in S, v \not\in S} c_{uv}$. The variables $\mu_v$ are indicator variables that say whether $v$ belongs to $S$ or not, and the variables $\lambda_{uv}$ are indicator variables that say whether the edge $(u, v)$ goes from the set $S$ to the set $V \setminus S$ or not. From the observation above, we conclude that

$$\text{Opt Max } s\text{-}t \text{ Flow} = \text{Opt Min fractional } s\text{-}t \text{ Cut} \leq \text{Opt Min } s\text{-}t \text{ Cut}.$$

It turns out that the inequality is actually an equality, which means that the Min Cut problem is also solvable in polynomial time.

References

[Coo11] William Cook. *In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation*. Princeton University Press, 2011.

[Dan02] George B. Dantzig. Linear programming. *Operations Research*, pages 42–47, 2002.

[Sch02] Alexander Schrijver. On the history of the transportation and maximum flow problems. *Mathematical Programming*, 91(3):437–445, 2002.

[Wik13a] Wikipedia. Maximum flow problem. http://en.wikipedia.org/wiki/Maximum_flow_problem, 2013. [Online; accessed 27-October-2013].

[Wik13b] Wikipedia. Vertex cover. http://en.wikipedia.org/wiki/Vertex_cover, 2013. [Online; accessed 27-October-2013].
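As a closing illustration of the rounding scheme of Section 4 (a sketch appended to these notes, not part of the original lecture), the following code solves the Vertex Cover LP relaxation on the $K_3$ of Example 4.1 and rounds at $\frac{1}{2}$.

```python
# LP relaxation of Vertex Cover on a triangle with unit costs,
# followed by the threshold rounding S = {v : x_v >= 1/2}.
from scipy.optimize import linprog

V = ["u", "v", "w"]
E = [("u", "v"), ("v", "w"), ("u", "w")]
cost = {v: 1.0 for v in V}
idx = {v: i for i, v in enumerate(V)}

# min sum_v c_v x_v  s.t.  x_u + x_v >= 1 per edge, 0 <= x_v <= 1.
# linprog takes <= constraints, so each edge becomes -x_u - x_v <= -1.
c = [cost[v] for v in V]
A_ub = []
for u, v in E:
    row = [0.0] * len(V)
    row[idx[u]] = row[idx[v]] = -1.0
    A_ub.append(row)
b_ub = [-1.0] * len(E)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * len(V), method="highs")
S = [v for v in V if res.x[idx[v]] >= 0.5 - 1e-9]  # rounding, with float tolerance
print("LP optimum:", res.fun)  # 1.5: the all-halves fractional solution
print("rounded cover:", S, "cost:", sum(cost[v] for v in S))
```

The LP optimum is 1.5, the rounded cover takes all three vertices at cost 3, and indeed $3 \le 2 \cdot 1.5$: the factor-2 guarantee is tight on this instance (the true optimum is 2).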
There is a lot of really interesting stuff in this newsletter and things we should all get involved with. Congratulations to Simone for his TED talk this week. https://www.facebook.com/371705019875183/posts/1033591143686564/?d=n&substory_index=0

**Coffee Meets**

These are very popular and are a way to come together for a coffee and a chat! It’s the same venue on the last Saturday of the month! Come and join us!

Day: Saturday. Time: 10am - 12 noon. Venue: Riverside Terrace Cafe, Southbank Centre, Belvedere Rd, Lambeth, London SE1 8XX

Dates: 29th February - 28th March - 25th April - 30th May - 27th June - 25th July - No meeting in August - 26th September - 31st October - 28th November

**BOXING FOR – Go and show Parkinson’s who’s boss!!**

JOIN US ON MONDAYS at LONDON BOXING ASSOCIATION GYM, 1.30-2.30pm every Monday. £5.00 per 1-hour session. Boxing Gym, Units 3&4 Bellenden Road Business Centre, Bellenden Road, Peckham Rye, London, SE15 4RF. Easy parking outside. http://www.londoncommunityboxing.co.uk

**POPPING FOR PARKINSON’S** (Please always check the Popping for Parkinson’s Facebook page, as dates change)

Popping for Parkinson’s by Simone Sistarelli. Thursdays 7pm - 8pm. FOC. Garden Room, The Wimbledon Club, SW19 5AG. Email Simone Sistarelli - email@example.com

Dates: January 9th, 16th, 23rd, 30th; February 6th, 13th, 27th; March 5th, 12th, 19th, 26th; April 2nd, 16th, 23rd, 30th; May 7th, 14th, 18th; June 4th, 11th, 18th

**New to our repertoire - Indoor Bike session**

Sunday 16th is the start of the PD spin cycling class at Fitness First gym, opposite the Grand Theatre, Clapham Junction high street; 12pm start. Sounds great fun!

**FEBRUARY 2020 - Fundraising**

We are pleased to announce that we have donated another cheque for £10,000, raised from our fundraising activities, to Heather Mortiboys’ research project. https://www.parkinsons.org.uk/news/new-sheffield-based-virtual-biotech-programme-aims-save-brain-cells

If you are organising any fundraising, let us know, as we have a special arrangement with PUK where every penny raised via SLYPN goes to research. It’s well worthwhile! All you do is join our SLYPN fundraising team on JustGiving, and every penny you raise will go towards Heather Mortiboys’ research project.

Heather is going to be in London on 5th March giving a talk on her research. (See flyer below.) **We have 3 tickets available**. Email Sarah if interested in attending; it’ll be on a first come, first served basis.

COULD ENERGY BE THE KEY TO STOPPING PARKINSON’S? Parkinson’s UK invites you to join us for The Florence Pite Memorial Lecture 2020. Francis Crick Institute, 1 Midland Rd, London NW1 1AT. Thursday 5 March 2020 | 6.45pm to 9pm. Doors open at 6.15pm. RSVP by Friday 7 February 2020: firstname.lastname@example.org | 020 7932 1369. PARKINSON’S UK. CHANGE ATTITUDES. FIND A CURE. JOIN US.

APRIL 2020

1st - 3rd April 2020: get ready for The INSIGHT Into PD Summit. We’ve partnered with many organisations around the globe to bring you even more coverage of, and insight into, the condition.

- 1st - 3rd April 2020
- FREE for the 3-day live conference
- Anytime. Anywhere. Online.

Together, we can work towards finding a unified cure. Registration is open and you can sign up here: https://bit.ly/37LFqu6

April 11th - World Parkinson’s Day 2020 - share your story and be featured on our map

1 million people in the UK are affected by Parkinson’s, either by living with the condition themselves or through a loved one, friend or colleague.
That means that if you’re in the UK and you know 66 people, chances are you will know someone affected by Parkinson’s in some way. This World Parkinson’s Day, we’re inviting the public to ask themselves if they really know Parkinson’s. Through sharing our stories, we can show them the stark reality of Parkinson’s and see how much they really know. How can I be part of this? We want to hear the real stories of the impact of Parkinson’s on your life. To show the variety of challenges that people face. To lift the lid on preconceptions about symptoms and exactly who is affected by a Parkinson’s diagnosis. But we also want to show that you don’t have to take Parkinson’s lying down. Challenges can be overcome. Finding the funny in daily situations brings light relief. Ultimately, we want to inspire people and change how they think about Parkinson’s, to learn more and take action. **How can I share my story?** There are two ways to be part of this: 1. **Fill in this form** to share your story with us before **Friday 20 March** and your story could make it onto the UK map of 66 diverse and representative stories. We’ll let you know when we’ve received it and if you have any problems, email us at [email@example.com](mailto:firstname.lastname@example.org) If you give your permission, we might also tell your story through the media to drive awareness. We’ll be in touch to discuss this. If you would prefer to do a video then please see the guidance below. 2. If you’re not able to share before the deadline or would prefer not to be on the map, you can still share your story on social media channels from 11 April, using the hashtag #KnowParkinsons so that people can find out more from the community. Once we have received your story we will be in contact to get a photo or further information so please leave a contact number or email address on the form below. To get involved [https://www.parkinsons.org.uk/get-involved/world-parkinsons-day-2020](https://www.parkinsons.org.uk/get-involved/world-parkinsons-day-2020) | Name (or how you want it to appear on the wall) | | |-----------------------------------------------|---| | Email address | | | Phone number | | | Location (town and first part of postcode) | | | Age (If you are willing to share) | | **Connection to Parkinson’s (please tick)** - I have Parkinson’s - My husband, wife or partner has Parkinson’s - My friend or family member has Parkinson’s - I’m an employer or colleague of someone with Parkinson’s - I’m a health or social care professional (working with people with Parkinson’s) - I have another connection to Parkinson’s Write here what you would like to share about your experience to help increase understanding of Parkinson’s (50 - 250 words) *Entries may be edited for clarity or length. You will be contacted to approve any changes made before your story is uploaded.* Would you be happy to speak to a journalist (print/radio/television (circle all you are willing to appear in/on)) about your story in order to raise awareness? Y/N - If yes, please fill in your contact details so Parkinson’s UK’s Media and PR Team can get in touch with you. - Please note that if you do speak to a journalist, you will have to use your full name and be willing to have your photograph published. I consent to you using this information as part of World Parkinson’s Day (Yes/No) At Parkinson’s UK, we want to be very clear about how we use, store and protect your personal data. You can read about this at [parkinsons.org.uk/privacy](http://parkinsons.org.uk/privacy). 
I’ve also attached a copy of the template for gathering stories.

YOPD seminar - 18 April in Leicester. https://www.pdvision2020.com/

This seminar is specifically focusing on people with YOPD. It’s on Saturday 18th April in Leicester. There is a good line-up of guest speakers and an opportunity to meet fellow YOPDs.

We wish Gary Giles and David Fewings the best of luck as they are playing football on 19th April in Worcester for the South London 6-a-side team that is taking part in the Cure Parkinson’s Cup tourney. They have to raise £1000 to enter. See the fundraising page: https://uk.virginmoneygiving.com/fundraiser-display/showROFundraiserPage?userUrl=ParkysaurusFC&isTeam=true

Click on this to join the Michael J Fox research opportunity. As we know, Parkinson’s is very complex and we all need to get involved with research - here’s your opportunity.

Dear Friend, In 2018, Eric Aquino, an emergency medical technician (EMT), was diagnosed with Parkinson’s disease at age 40. Eric is proactive about his health and while searching for resources, he discovered Fox Insight, an online clinical study sponsored by The Michael J. Fox Foundation. Fox Insight gives a voice to thousands of people in the Parkinson’s community by gathering information from online surveys about health and symptoms over time. This data is then de-identified and made available to qualified researchers, who are hard at work searching for the answers that can lead to new therapies. “Our health can change in a matter of days or weeks. It's important for scientists to know how things are changing over time so they can develop better treatments,” Eric says. While Eric is an active Fox Insight participant, he also launched his own nonprofit, Gray Strong Foundation, after struggling to find local support in his community. And his weekly podcast, called Trembling EMT, reminds people with Parkinson’s they aren’t alone. You can join Eric and make a difference in Parkinson’s research. Fox Insight makes it easy to participate from anywhere. Register for Fox Insight today.

PD Warrior is hosting an Instructor Course in February for health professionals, and we are seeking volunteers to assist us for demonstration purposes.

**February 8th 2020, LONDON** Therapy Outpatient Department, 1st Floor, Albany Wing, The National Hospital for Neurology and Neurosurgery, Queen Square, WC1N 3BG

| Time | Activity |
|--------|--------------------------------------------------------------------------|
| 1.30pm | Assessment with your Instructor in Training |
| 2.30pm | Treatment and circuit with your Instructor in Training |
| 3.30pm | Afternoon tea and wrap up |

In return for your time, we will be offering you a free assessment and treatment with a PD Warrior Instructor in training, as well as The New Parkinson’s Treatment book, an exercise circuit and a taste of PD Warrior! If you would like to volunteer, please contact me directly via email at email@example.com. Thank you. www.pdwarrior.com
CONNECTICUT DEPARTMENT OF CONSUMER PROTECTION

BROKER REAL ESTATE LICENSING CANDIDATE INFORMATION BULLETIN

Please refer to our website to check for the most updated information at https://test-takers.psiexams.com/ctre

# TABLE OF CONTENTS

- Introduction
- Educational Requirements
- Scheduling Procedures
- Examination Site Locations
- Reporting to the Examination Site
- Taking the Examination by Computer
- Score Reporting
- Description of Examinations & Examination Content Outlines
- Examination Study Materials
- Sample Questions
- Registration Form (end of bulletin)

Please direct all questions and requests for information about application processing and examinations to:

**PSI Services LLC**
3210 E Tropicana
Las Vegas, NV 89121
https://test-takers.psiexams.com/ctre
(855) 746-8171 ● FAX (702) 932-2666 ● TDD (800) 735-2929

After you have completed your application and examination process, further questions may be directed to the:

**Connecticut Department of Consumer Protection**
License Services
450 Columbus Boulevard, Suite 801
Hartford, Connecticut 06103
Phone: (860) 713-6000
E-Mail: firstname.lastname@example.org
Agency Web site: www.ct.gov/dcp

INTRODUCTION

This candidate licensing information bulletin provides information about the license examination and the application process for becoming licensed as a real estate broker in the State of Connecticut. To be licensed, you must:

1. Submit an application, an application fee, and the required documents to PSI licensure:certification (PSI). You can now fill out the application online at State of Connecticut Applications (psiexams.com). Once you have met the requirements, you will be issued an Examination Eligibility postcard. Note: The application can also be found at the end of this Candidate Information Bulletin.
2. Pass an examination to confirm that you have attained at least a minimum level of knowledge regarding the laws and regulations concerning the real estate profession.
3. Submit a license fee and the required documents to License Services. Payment of the license fee MUST be made within two (2) years of passing the last portion of the examination; otherwise a new application, along with the appropriate fee, must be submitted to PSI licensure:certification (PSI) in order to be eligible to retest.

Once the Department has verified that you have met all of the requirements for licensure, it will issue the appropriate license. The Connecticut Department of Consumer Protection has contracted with PSI to conduct its examination program. PSI works closely with the Department to be certain that examinations meet local requirements and test development standards.
EDUCATIONAL REQUIREMENTS

The educational requirements for licensure as a Broker include:

- At least 3 years of licensure as a Real Estate Salesperson;
- Original certificates for the following courses: 60-hour Principles & Practices, 15-hour Legal Compliance, 15-hour Broker Principles & Practices, AND two 15-hour pre-license real estate electives. In lieu of the two 15-hour electives: a 30-hour Real Estate Appraisal course.

Nonresident License Requirements

A non-resident licensed Broker who has a valid license in his/her home state is eligible to become a real estate broker in Connecticut if the following rules are met:

- The home state requires written competency examinations.
- The home state allows licenses to be issued to residents of Connecticut without examination.
- The licensed individual does not have any disciplinary proceedings or complaints.

If these terms are not met, the applicant will be required to pass the Connecticut portion of the real estate examination.

A current list of Real Estate license holders and approved schools is located on the State of Connecticut, Department of Consumer Protection public web site. This site reflects the internal system as DCP Real Estate issues further approvals to licensees and providers of education. Please see the License Verification web site for a current list at www.ct.gov/dcp, then click on “License Verification”. Alternatively, contact:

LICENSE SERVICES
Connecticut Department of Consumer Protection
450 Columbus Boulevard, Suite 801
Hartford, CT 06103
Phone: 860-713-6000
Fax#: 860-713-7229
E-Mail: email@example.com
Agency Web site: www.ct.gov/dcp

SCHEDULING PROCEDURES

All candidates for the Broker examinations must be pre-approved by PSI BEFORE registering for or scheduling the Broker examination. There is no pre-approval needed for the Continuing Education examination. Upon approval by PSI, you will be sent an Examination Eligibility Postcard, including instructions for scheduling the examination.

- You may take the examination on an unlimited basis for up to one year from the date of eligibility.
- You must pass both portions of the examination within one (1) year of eligibility.
- If you do not pass both portions within one year, you must reapply with PSI.

The following fee table lists the applicable fee for each examination, whether you are taking the examination for the first time or repeating it.

| Examination | Fee |
|-------------|-----|
| First-time testing (one or both portions) | $59 |
| Retake (both portions) | $51 |
| Retake (one portion) | $51 |

NOTE: REGISTRATION FEES ARE NOT REFUNDABLE OR TRANSFERABLE.

INTERNET SCHEDULING

For the fastest and most convenient examination scheduling process, PSI recommends that you register for your examinations using the Internet. You register online by accessing PSI’s registration website https://test-takers.psiexams.com/ctre. Internet registration is available 24 hours a day. Log onto PSI’s website and select Sign in / Create Account. Select Create Account. You are now ready to pay and schedule for the exam. Enter your zip code and a list of the testing sites closest to you will appear. Once you select the desired test site, available dates will appear.

TELEPHONE SCHEDULING

For telephone registration, you will need a valid credit card (VISA, MasterCard, American Express or Discover).
PSI registrars are available at (855) 746-8171 Monday through Friday between 7:30 am and 10:00 pm, and Saturday-Sunday between 9:00 am and 5:30 pm, Eastern Time.

CANCELING AN EXAMINATION APPOINTMENT

You may cancel and reschedule an examination appointment without forfeiting your fee if your cancellation notice is received 2 days before the scheduled examination date. For example, for a Monday appointment, the cancellation notice would need to be received on the previous Saturday. You may call PSI at (855) 746-8171, or use the PSI website. Note: A voice mail message is NOT an acceptable form of cancellation.

SCHEDULING A RE-EXAMINATION

It is not possible to make a new examination appointment on the same day you have taken an examination; this is due to the processing and reporting of scores. A candidate who tests unsuccessfully on a Wednesday can call the next day, Thursday, and retest as soon as Friday, depending upon space availability. You may access a registration form at www.psiexams.com. You may also call PSI at (855) 746-8171.

MISSED APPOINTMENT OR LATE CANCELLATION

Your registration will be invalid, you will not be able to take the examination as scheduled, and you will forfeit your examination fee, if you:

- Do not cancel your appointment 2 days before the scheduled examination date;
- Do not appear for your examination appointment;
- Arrive after the examination start time;
- Do not present proper identification when you arrive for the examination.

EXAM ACCOMMODATIONS

All PSI examination centers are equipped to provide access in accordance with the Americans with Disabilities Act (ADA) of 1990, and exam accommodations will be made to meet a candidate’s needs. A candidate with a disability, or a candidate who would otherwise have difficulty taking the examination, should submit a request for alternative arrangements to PSI. Candidates granted accommodation in accordance with the ADA MUST schedule their examination by telephone and speak directly with a PSI registrar.

EXAMINATION SITE CLOSING FOR AN EMERGENCY

In the event that severe weather or another emergency forces the closure of an examination site on a scheduled examination date, your examination will be rescheduled. PSI personnel will attempt to contact you in this situation. However, you may check the status of your examination schedule by calling (855) 746-8171. Every effort will be made to reschedule your examination at a convenient time as soon as possible. You may also check our website at www.psiexams.com.

EXAMINATION SITE LOCATIONS

The PSI Real Estate Licensing examinations are administered at the examination centers listed below:

West Hartford
1245 Farmington Ave, Suite 203, West Hartford, CT 06107
From I-84 West, take exit 40 toward CT-71/New Britain Ave/Corbins Corner. Turn right onto Ridgewood Rd. Turn left onto Wood Pond Rd. Turn left onto Tunxis Rd. Turn right onto Brookmoor Rd. Turn right onto Buena Vista Rd. Turn left onto Everett Ave. Turn right onto Farmington Ave. Destination is on the right.

Milford
500 BIC Drive, Suite 101, Milford, CT 06461
From Highway I-95, take exit 35. Go toward BIC Drive. Go 0.5 miles to 500 BIC Drive, which is at Gate 1 of the former BIC complex. Go to the rear of the lot and park. Walk down the hill in front of the building and enter the front door. Signs will direct you to Suite 101 (PSI).

Auburn
48 Sword St., Unit 204, Auburn, MA 01501
From Southbridge St/MA-12, turn left onto Sword St.

Boston
56 Roland St., Suite 305, Washington Crossing, Charlestown, MA 02129
From the North: Take I-93 South.
Take Exit 28 - Boston/Sullivan Sq./Charlestown. Merge into Mystic Ave. Take the I-93S ramp to Boston/Sullivan Sq./Charlestown (take the ramp; do not get on the highway). Make a slight left turn onto Maffa Way. Make a slight right turn onto Cambridge Street. At the first traffic light, make a left onto Carter Street. Turn right onto Roland Street. End at 56 Roland Street. Enter through the North lobby. Do NOT park in the building's parking lot.

From the South: Take I-93 North. Take Exit 28 - Rt 99/Sullivan Sq./Somerville. Make a left onto Cambridge St. At the first traffic light, make a left onto Carter Street. Turn right onto Roland Street. End at 56 Roland Street (building on left, parking lot on right). Enter through the North lobby. Do NOT park in the building’s parking lot.

Fall River
218 South Main St., Suite 105, Fall River, MA 02721
From the North, take Rte. 24S to 79S. Take the Route 138S exit. Bear right off the exit. Go left at the first traffic light. Take a left at the second traffic light (top of hill) onto So Main St. 218 is 2 blocks down on the right. Parking: Go past 218 So Main to the 2nd light. Take a right. Take another right at the next traffic light. The Third St parking garage is on your right.

Springfield
1111 Elm Street, Suite 32A, West Springfield, MA 01089
Going East on Mass Pike (Rt. 90): Take Exit 4 - West Springfield/Holyoke. Turn right on West Springfield/Rt. 5 South. Continue on Rt. 5 approximately two miles. Turn right on Elm St., immediately after Showcase Cinemas. The office is approximately 1/4 mile on the right. Going West on Mass Pike (Rt. 90): Take Exit 4 - West Springfield/Holyoke. Follow as above.

REPORTING TO THE EXAMINATION SITE

On the day of the examination, you should arrive at least 30 minutes before your appointment. This extra time is for sign-in, identification, and familiarizing you with the examination process. If you arrive late, you may not be admitted to the examination site and you will forfeit your examination registration fee.

REQUIRED IDENTIFICATION AT EXAMINATION SITE

Candidates need to provide one (1) form of identification. Candidates must register for the exam with their LEGAL first and last name as it appears on their government issued identification. The required identification below must match the first and last name under which the candidate is registered. Candidates are required to bring one (1) form of valid (non-expired), signature-bearing identification to the test site.

REQUIRED IDENTIFICATION (with photo) - Choose One:

- State issued driver’s license
- State issued identification card
- US Government Issued Passport
- US Government Issued Military Identification Card
- US Government Issued Alien Registration Card
- Canadian Government Issued ID

NOTE: ID must contain the candidate's photo, and be valid and unexpired.

SECURITY PROCEDURES

The following security procedures will apply during the examination:

- Only non-programmable calculators that are silent, battery-operated, do not have paper tape printing capabilities, and do not have a keyboard containing the alphabet will be allowed in the examination site.
- Candidates may take only approved items into the examination room.
- All personal belongings of candidates should be placed in the secure storage provided at each site prior to entering the examination room.
Personal belongings include, but are not limited to, the following items:

- Electronic devices of any type, including cellular/mobile phones, recording devices, electronic watches, cameras, pagers, laptop computers, tablet computers (e.g., iPads), music players (e.g., iPods), smart watches, radios, or electronic games.
- Bulky or loose clothing or coats that could be used to conceal recording devices or notes. For security purposes, outerwear such as, but not limited to, open sweaters, cardigans, shawls, scarves, vests, jackets and coats is not permitted in the testing room. In the event you are asked to remove the outerwear, appropriate attire, such as a shirt or blouse, should be worn underneath.
- Hats or headgear not worn for religious reasons or as religious apparel, including hats, baseball caps, or visors.
- Other personal items, including purses, notebooks, reference or reading material, briefcases, backpacks, wallets, pens, pencils, other writing devices, food, drinks, and good luck items.

Additional security rules:

- Although secure storage for personal items is provided at the examination site for your convenience, PSI is not responsible for any damage, loss, or theft of any personal belongings or prohibited items brought to, stored at, or left behind at the examination site. PSI assumes no duty of care with respect to such items and makes no representation that the secure storage provided will be effective in protecting such items. If you leave any items at the examination site after your examination and do not claim them within 30 days, they will be disposed of or donated, at PSI’s sole discretion.
- Person(s) accompanying an examination candidate may not wait in the examination center, inside the building or on the building’s property. This applies to guests of any nature, including drivers, children, friends, family, colleagues or instructors.
- No smoking, eating, or drinking is allowed in the examination center.
- During the check-in process, all candidates will be asked if they possess any prohibited items. Candidates may also be asked to empty their pockets and turn them out for the proctor to ensure they are empty. The proctor may also ask candidates to lift up the ends of their sleeves and the bottoms of their pant legs to ensure that notes or recording devices are not being hidden there.
  - Proctors will also carefully inspect eyeglass frames, tie tacks, or any other apparel that could be used to harbor a recording device. Proctors will ask to inspect any such items in candidates' pockets.
  - If prohibited items are found during check-in, candidates shall put them in the provided secure storage or return these items to their vehicle. PSI will not be responsible for the security of any personal belongings or prohibited items.
  - Any candidate possessing prohibited items in the examination room shall immediately have his or her test results invalidated, and PSI shall notify the examination sponsor of the occurrence.
  - Any candidate seen giving or receiving assistance on an examination, found with unauthorized materials, or who violates any security regulations will be asked to surrender all examination materials and to leave the examination center. All such instances will be reported to the examination sponsor.
  - Copying or communicating examination content is a violation of a candidate’s contract with PSI, and of federal and state law. Either may result in the disqualification of examination results and may lead to legal action.
- Once candidates have been seated and the examination begins, they may leave the examination room only to use the restroom, and only after obtaining permission from the proctor. Candidates will not receive extra time to complete the examination.

**TEST QUESTION SCREEN**

One question appears on the screen at a time. During the examination, the minutes remaining will be displayed at the top of the screen and updated as you record your answers. IMPORTANT: After you have entered your responses, you will later be able to return to any question(s) and change your response, provided the examination time has not run out.

**EXAMINATION REVIEW**

PSI, in cooperation with the Connecticut Department of Consumer Protection, will be consistently evaluating the examinations being administered to ensure that the examinations accurately measure competency in the required knowledge areas. While taking the examination, examinees will have the opportunity to provide comments on any questions by clicking the Comments link on the function bar of the test question screen. These comments will be analyzed by PSI examination development staff. PSI does not respond to individuals regarding these comments, but all substantive comments are reviewed. This is the only review of examination materials available to candidates.

**SCORE REPORTING**

In order to pass the Broker examinations, you must receive a score of at least 75%. Your score will be given to you immediately following completion of the examination. The following summary describes the score reporting process:

- **On screen** - your score will appear immediately on the computer screen. This will happen automatically at the end of the time allowed for the examination.
  - If you **pass**, you will immediately receive a successful notification.
  - If you **do not pass**, you will receive a diagnostic report indicating your strengths and weaknesses by examination type with the score report.
- **On paper** - an unofficial score report will be printed at the examination site.

**EXPERIMENTAL QUESTIONS**

A small number of “experimental” questions (approximately 5 to 10) may be administered to candidates during the examinations. These questions will not be scored, and the time taken to answer them will not count against testing time. The administration of such unscored, experimental questions is an essential step in developing future licensing exams.

**LICENSE EXAMINATION PREPARATION**

The following suggestions will help you prepare for your examination.

- Planned preparation increases your likelihood of passing.
- Start with a current copy of this Candidate Information Bulletin and use the examination content outline as the basis of your study.
- Read study materials that cover all the topics in the content outline.
- Take notes on what you study. Putting information in writing helps you commit it to memory, and it is also an excellent business practice. Discuss new terms or concepts as frequently as you can with colleagues. This will test your understanding and reinforce ideas.
- Your studies will be most effective if you study frequently, for periods of about 45 to 60 minutes. Concentration tends to wander when you study for longer periods of time.

Now you can take the practice exam online at National Real Estate Broker Practice Exam to prepare for your Connecticut Real Estate Examination. Please note that practice exams are intended only to help testing candidates become familiar with the general types of questions that will appear on a licensing examination.
They ARE NOT a substitute for proper education and study. Furthermore, scoring well on the practice exam does not guarantee a positive outcome on an actual licensing examination. Note: You may take the practice exams an unlimited number of times; you will need to pay each time.

DESCRIPTION OF EXAMINATIONS & EXAMINATION CONTENT OUTLINES

The Examination Content Outlines have been approved by the Occupational and Professional Licensing. These outlines reflect the minimum knowledge required of real estate professionals to perform their duties to the public in a competent and responsible manner. Changes in the examination content will be preceded by changes in these published examination content outlines. Use the outlines as the basis of your study. The outlines list all of the topics that are on the test and the number of items for each topic. Do not schedule your examination until you are familiar with all topics in the outlines. The Examination Summary Table below shows the number of questions and the time allowed for each exam portion. The examinations are closed book.

| Portion | No. of Questions | Time Allowed |
|---------|------------------|--------------|
| General | 75 (80 points) | 120 Minutes |
| State | 40 (40 points) | 60 Minutes |
| Both | 115 (120 points) | 180 Minutes |

Note: National broker exams include questions that are scored up to two points.

GENERAL PORTION CONTENT OUTLINE

I. Property Ownership (Broker 10%)
A. Real and personal property; conveyances
B. Land characteristics and legal descriptions 1. Metes and bounds method of legal property description 2. Lot and block (recorded plat) method of legal property description 3. Government survey (rectangular survey) method of legal property description 4. Measuring structures (linear and square footage) 5. Land measurement
C. Encumbrances and effects on property ownership 1. Types of liens and their effect on the title and value of real property 2. Easements, rights of way and licenses, including their effect on the title, value and use of real property 3. Encroachments and their effect on the title, value and use of real property 4. Potential encumbrances on title, such as probate, leases, or adverse possession 5. Property rights that may be conveyed separately from use of the land surface, such as mineral and other subsurface rights, air rights, or water rights
D. Types of ownership 1. Ownership in severalty/sole ownership 2. Implications of ownership as tenants in common 3. Implications of ownership in joint tenancy 4. Forms of common-interest ownership, such as Timeshares, Condominiums and Co-ops 5. Property ownership held in a trust or by an estate 6. Ownership by business entities 7. Life Estate ownership

II. Land use Controls (Broker 5%)
A. Government rights in land 1. Government rights to impose property taxes and special assessments 2. Government rights to acquire land through eminent domain, condemnation and escheat
B. Government controls on land use
C. Private controls 1. Deed conditions or restrictions on property use 2. Subdivision covenants, conditions and restrictions (CC&Rs) on property use 3. Condominium and owners' associations regulations or bylaws on property use

III. Valuation (Broker 8%)
A. Appraisals 1. Appraisals for valuation of real property 2. Situations which require appraisal by a licensed or certified appraiser and brokerage-related actions that constitute unauthorized appraisal practice 3. General steps in appraisal process
B. Estimating Value
1. Economic principles and property characteristics that affect value of real property 2. Sales or market comparison approach to property valuation and appropriate uses 3. Cost approach to property valuation and appropriate uses 4. Income analysis approach to property valuation and appropriate uses
C. Comparative Market Analysis (CMA) 1. Competitive/Comparative Market Analysis (CMA), BPO or equivalent 2. Automated Valuation Method (AVM), appraisal valuation and Comparative Market Analysis (CMA)

IV. Financing (Broker 9%)
A. Basic Concepts and Terminology 1. Loan financing (for example, points, LTV, PMI, interest, PITI) 2. General underwriting process (e.g., debt ratios, credit scoring and history) 3. Standard mortgage/deed of trust clauses and conditions 4. Essential elements of a promissory note
B. Types of Loans 1. Conventional loans 2. Amortized loans, partially amortized (balloon) loans, interest-only loans 3. Adjustable-rate mortgage (ARM) loans 4. Government Loans a. FHA insured loans b. VA guaranteed loans c. USDA/Rural Development loan programs 5. Owner financing (for example, installment or land contract/contract for deed) 6. Reverse-mortgage loans 7. Home equity loans and lines of credit 8. Construction loans 9. Rehab loans 10. Bridge loans
C. Financing and Lending 1. Real Estate Settlement Procedures Act (RESPA), including kickbacks 2. Truth-in-Lending Act (Regulation Z), including advertising 3. Requirements and time frames of TRID (TILA-RESPA Integrated Disclosures) 4. Equal Credit Opportunity Act 5. Lending Process (application through loan closing) 6. Risky loan features, such as prepayment penalties and balloon payments

V. Contracts (Broker 19%)
A. General Contract Law 1. General principles of contract law 2. Elements necessary for a contract to be valid 3. Effect of the Statute of Frauds 4. Offer and acceptance 5. Enforceability of contracts 6. Void, voidable and unenforceable contracts 7. Bilateral and unilateral contracts 8. Nature and use of option agreements 9. Notice, delivery, acceptance and execution of contracts 10. Appropriate use, risks, and advantages of electronic signatures and paperless transactions 11. Rights and obligations of the parties to a contract 12. Possible remedies for breach or non-performance of contract 13. Termination, rescission and cancellation of contracts
B. Purchase and Lease Contracts 1. Addenda and amendments to contracts 2. Purchase agreements 3. Contract contingencies and methods for satisfying them 4. Leases and rental agreements 5. Lease-purchase agreements 6. Types of leases
C. Proper handling of multiple offers and counteroffers

VI. Agency (Broker 13%)
A. Agency and non-agency relationships 1. Agency relationships and how they are established 2. Types of listing contracts 3. Buyer brokerage/tenant representation contracts 4. Other brokerage relationships, including transaction brokers and facilitators 5. Powers of attorney and other assignments of authority 6. Conditions for termination of agency or brokerage service agreements
B. Agent Duties 1. Fiduciary duties of agents 2. Agent's duties to customers/non-clients, including honesty and good faith
C. Agency Disclosures 1. Disclosure of agency/representation 2. Disclosure of possible conflict of interest or self-interest

VII. Property Disclosures (Broker 7%)
A. Property Condition 1. Seller's property condition disclosure requirements 2. Property conditions that may warrant inspections or a survey 3. Red flags that warrant investigation of public or private land use controls
B. Environmental and Government Disclosures 1. Environmental issues requiring disclosure 2. Federal, state, or local disclosure requirements regarding the property
C. Disclosure of material facts and material defects

VIII. Property Management (Broker 5%)
A. Duties and Responsibilities 1. Procurement and qualification of prospective tenants 2. Fair housing and ADA compliance specific to property management 3. How to complete a market analysis to identify factors in setting rents or lease rates 4. Property manager responsibility for maintenance, improvements, reporting and risk management (BROKER ONLY) 5. Handling landlord and tenant funds; trust accounts, reports and disbursements (BROKER ONLY) 6. Provisions of property management contracts (BROKER ONLY)
B. Landlord and tenant rights and obligations

IX. Transfer of Title (Broker 6%)
A. Types of deeds
B. Title Insurance and Searches 1. Title insurance policies and title searches 2. Potential title problems and resolutions 3. Marketable and insurable title
C. Closing Process 1. When transfer of ownership becomes effective 2. Process and importance of recordation 3. Settlement procedures (closing) and parties involved 4. Home and new construction warranties
D. Special Processes 1. Special issues in transferring foreclosed properties 2. Special issues in short sale transactions 3. Special issues in probate transactions

X. Practice of Real Estate (Broker 12%)
A. Antidiscrimination 1. Federal Fair Housing Act general principles and exemptions 2. Protected classes under Federal Fair Housing Act 3. Protections against discrimination based on gender identity and sexual orientation 4. Prohibited conduct under Federal Fair Housing Act (Redlining, Blockbusting, Steering, Disparate Treatment) 5. Fair housing advertising rules 6. Americans with Disabilities Act (ADA) obligations pertaining to accessibility and reasonable accommodations
B. Legislation and Regulations 1. Licensees' status as employees or independent contractors 2. Antitrust laws and types of violations, fines and penalties 3. Do-Not-Call List rule compliance 4. Proper use of Social Media and Internet communication and advertising
C. Duties and Responsibilities 1. Protection of confidential personal information (written, verbal or electronic) 2. Duties when handling funds of others in transactions 3. Licensee responsibility for due diligence in real estate transactions
D. Supervisory Responsibilities (BROKER ONLY) 1. Broker's supervisory responsibilities (licensees, teams and unlicensed assistants and employees) (BROKER ONLY) 2. Broker relationship with licensees (employees or independent contractors and governing rules) (BROKER ONLY)

XI. Real Estate Calculations (Broker 6%)
A. Calculations for Transactions 1. Seller's net proceeds 2. Buyer funds needed at closing 3. Real property tax and other prorations 4. Real property transfer fees 5. PITI (Principal, Interest, Taxes and Insurance) payments estimate given loan rate and term
B. General Concepts 1. Equity 2. Rate of return/Capitalization rate 3. Loan-to-Value ratio 4. Discount points and loan origination fees

STATE PORTION CONTENT OUTLINE

Connecticut Real Estate Commission and Licensing Requirements (Broker 5 items) a. Real Estate Commission purpose, powers and duties b. Activities requiring a license c. Exemptions from licensure d. License types and qualifications e. License renewal, continuing education, and transfer f. Real Estate Guaranty Fund
g. License suspension and revocation

Connecticut Laws Governing the Activities of Licensees (Broker 11 items) a. Broker/salesperson relationship b. Duties to parties c. Handling of deposits and other monies d. Misrepresentation e. Disclosure of nonmaterial facts f. Advertising g. Commissions and compensation h. Unlicensed personal assistants

Connecticut Real Estate Agency (Broker 9 items) a. Agency: the representing of a client vs. working with a customer b. Agency agreements c. Agency disclosure d. Subagency limitations e. Dual agency f. Designated agency g. Confidential information h. Interference with agency relationship

Connecticut-Specific Real Estate Principles and Practices (Broker 7 items) a. Connecticut-specific property ownership and transfer issues i. Co-ownership forms and shares ii. Adverse possession/prescriptive easement time iii. Land records and recording iv. Real property taxes and assessments v. Conveyance tax vi. Residential property condition disclosure b. Connecticut Landlord-Tenant Act c. Connecticut Common Interest Ownership Act d. Connecticut fair housing law e. Connecticut lead paint laws f. Connecticut disclosure of off-site conditions law g. Connecticut Uniform Electronic Transactions Act

For Brokers Exam Only (Broker 8 items) a. Record keeping b. Escrow accounts c. Brokers lien d. Notice of commission rights in commercial transactions e. Cooperation with out-of-state brokers f. Interstate land sales g. Mortgage brokerage fees charged by brokers h. Real properties securities/syndication

EXAMINATION STUDY MATERIALS

GENERAL PORTION FOR BROKER

The following is a list of possible study materials for the real estate examinations. The list is given to identify resources and does not constitute an endorsement by PSI or by the Occupational and Professional Licensing. Use the latest edition available.
- Hart, Dearborn Real Estate Education, (800) 972-2220, www.dearborn.com
- Modern Real Estate Practice, 19th Edition, Galaty, Allaway, and Kyle, Dearborn Real Estate Education, (800) 972-2220, www.dearborn.com
- Real Estate Law, 9th Edition, 2016, Elliot Klayman, Dearborn Real Estate Education, (800) 972-2220, www.dearborn.com
- The Language of Real Estate, 7th Edition, 2013, John Reilly, Dearborn Real Estate Education, (800) 972-2220, www.dearborn.com
- Real Estate Principles & Practices, 9th Edition, 2014, Arlyne Geschwender, OnCourse Publishing, N19W24075 Riverwood Drive, Suite 200, Waukesha, WI 53188, 855-733-7239, www.oncoursepublishing.com ISBN 0324784554
- Real Estate Principles, 12th Edition, Charles Jacobus, OnCourse Publishing, N19W24075 Riverwood Drive, Suite 200, Waukesha, WI 53188, 855-733-7239, www.oncoursepublishing.com ISBN 1285420985
- Real Estate Math, 7th Edition, 2014, Linda L. Crawford, Dearborn Real Estate Education, (800) 972-2220, www.dearborn.com
- Property Management, 10th edition, 2016, Kyle, Robert C., Baird, Floyd M. and Kyle, C. Donald, Chicago: Dearborn Real Estate Education
- Principles of Real Estate Practice, 5th edition, 2017, Mettling, Stephen and Cusic, David, Performance Programs Company, www.performanceprogramscompany.com

STATE PORTION FOR BROKER
- State of Connecticut, Real Estate Statutes and Regulations Concerning the Conduct of Real Estate Brokers and Salespersons, www.ct.gov/dcp.
- Pancak, Katherine A., Connecticut Real Estate: Practice & Law, Dearborn Real Estate Education, (800) 972-2220, www.dearborn.com SAMPLE QUESTIONS The following questions are offered as examples of the types of questions you will be asked during the course of the National real estate salesperson and broker examinations. They are intended primarily to familiarize you with the style and format of questions you can expect to find in the examinations. The examples do NOT represent the full range of content or difficulty levels found in the actual examinations. SAMPLE SALESPERSON QUESTIONS A. Which of the following interests in property is held by a person who is granted a lifetime use of a property that will be transferred to a third party upon the death of the lifetime user? 1. A life estate. 2. A remainder estate. 3. An estate for years. 4. A reversionary estate. B. Which of the following statements BEST identifies the meaning of the term, “rescission of a contract”? 1. A ratification of a contract by all parties. 2. A return of all parties to their condition before the contract was executed. 3. A transfer or assignment of a particular responsibility from one of the parties to another. 4. A review of the contract by the legal counsel of either party that may result in a cancellation without penalty or further obligation. C. Which of the following clauses in a mortgage allows the lender to demand loan repayment if a borrower sells the property? 1. Defeasance 2. Prepayment 3. Acceleration 4. Alienation D. How much cash MUST a buyer furnish in addition to a $2,500 deposit if the lending institution grants a 90% loan on an $80,000 property? 1. $5,500. 2. $6,975. 3. $7,450. 4. None of the above. E. Which of the following single-family residences would get the MOST accurate appraisal by applying the reproduction cost approach to value? 1. A rental property. 2. A vacant property. 3. A new property. 4. An historic property. Answers to Sample Salesperson Questions: A: 1; B: 2; C: 4; D: 1; E: 4 SAMPLE BROKER QUESTIONS (SCENARIO-BASED) Scenario: You are hosting an open house. Mr. and Mrs. Charles Martin come into the house. You greet them and show them the house. The Martins tell you the house is exactly what they are looking for and they are very interested in purchasing it. You then give them information showing the various types of financing available with down payment options and projected payments. Mr. Martin tells you they have been working with Mary Hempstead of XX Realty, a competing real estate company. Before leaving, you thank them for coming and give them your business card. A. The first thing on Monday morning, Mrs. Martin calls and indicates they have tried to reach Mary and cannot. They indicate they have a written buyer’s agent agreement with Mary’s broker. They are afraid someone else is going to buy the house. Which of the following should you do? Select the best answer. 1. Seek advice from your supervising broker. 2. Tell them to come to your office. 3. Ask them to bring the buyer’s agency agreement to you for your interpretation. 4. Tell them to be patient and continue trying to reach Mary. 5. Tell them to call Mary’s supervising broker or branch manager. 6. Tell them you are really sorry, but there is nothing you can do. B. The Martins come to your office and explain that neither Mary nor her supervising broker are available. They insist you immediately write an offer for the house. How should you proceed? Select the best answer. 1. 
Write the offer after entering into a buyer's broker agreement with them. 2. Write the offer after explaining they may owe Mary's broker a commission. 3. Write the offer after trying to contact Mary's broker yourself. 4. Refuse to write an offer and explain that doing so would be unethical. 5. Refuse to write an offer since it would be illegal. 6. Refuse to write the offer and tell the Martins to contact another Salesperson in Mary's office.

Answers (Points) to Sample Broker Questions: A. 1 (2 points), 2 (1 point), 3 (0 points), 4 (0 points), 5 (1 point), 6 (0 points); B. 1 (1 point), 2 (2 points), 3 (1 point), 4 (0 points), 5 (0 points), 6 (0 points)

Real Estate Broker Application Instructions

1. This application must be completed and signed. The Federal Privacy Act of 1974 requires that you be notified that disclosure of your Social Security Number is required pursuant to CGS 17b-137a. If you do not disclose your Social Security Number, your application may not be processed.
2. Effective January 1, 2014, the only acceptable Principles & Practices of Real Estate course completion certificate will be that of an approved 60-hour course. (Two 30-hour courses are no longer accepted.)
3. Provide original certificates for the following courses: 60-hour Real Estate Principles & Practices, 15-hour Legal Compliance and 15-hour Broker Principles & Practices.
4. Provide original certificates for the following: two 15-hour or one 30-hour pre-license real estate elective(s). In lieu of the pre-license elective course(s) only, 20 real estate transactions (legal transfer of property or lease agreement executed between a landlord and tenant) in the previous 5 years (use form attached).
5. Provide proof of no less than 1,500 hours of active salesperson experience and at least 4 real estate transactions closed in the three (3) previous years (use attached form).
6. A check and/or money order in the amount of $120.00 made payable to PSI Examination Services must accompany this application. Application fees are non-refundable.
7. After this application is reviewed and approved, you will receive an Examination Eligibility Postcard from PSI with instructions to register and schedule the examination. The examination fee will be due at the time you schedule the examination with PSI.

MAIL this application, course certificates and fee to: PSI Examination Services, 3210 East Tropicana Ave, Las Vegas, NV 89121. For information and/or questions, contact PSI licensure:certification, www.psiexams.com or (855) 746-8171.

Applicant Information

| First Name | Middle Initial | Last Name |
|------------|----------------|-----------|

| Residence Street Address | City or Town | State | Zip Code |
|--------------------------|--------------|-------|----------|

| Telephone Number | Email Address | Social Security Number | Date of Birth |
|------------------|---------------|------------------------|---------------|

| Mailing Address (if different from above) | City or Town | State | Zip Code |
|-------------------------------------------|--------------|-------|----------|

1. I acknowledge that I have completed the required coursework listed above and have been actively engaged for at least three (3) years as a licensed real estate salesperson under the supervision of a licensed real estate broker in this state. □ YES □ NO Please provide your real estate salesperson license number: RES #
2. Have you ever been convicted of a felony?
□ YES □ NO If yes, provide the date(s) and nature of conviction, where the cases were decided, and a description of the circumstances relating to each conviction.
3. Have you ever been convicted of a crime including, but not limited to, forgery, embezzlement, obtaining money under false pretenses, extortion, criminal conspiracy to defraud, or any like offenses? □ YES □ NO If yes, provide the date(s), nature of conviction(s), where the cases were decided, and a description of the circumstances relating to each conviction.
4. Have you ever had a real estate license refused, suspended, or revoked in any State? □ YES □ NO If yes, please list details.

Affirmation

I, being duly sworn according to law, hereby affirm that the answers given in this application are true to the best of my knowledge and belief and that this application is made for the sole purpose of obtaining a real estate broker license.

_________________________________________ _________________________
Signature of Applicant Date

1,500 HOURS OF ACTIVE EXPERIENCE AS AN ACTIVE REAL ESTATE SALESPERSON & 4 REAL ESTATE TRANSACTIONS COMPLETED IN THE PREVIOUS 3 YEARS

Broker Applicant's Full Name: ____________________________________________________________

List 4 real estate transactions where the applicant represented at least one party in the legal transfer of property or lease agreement executed between a landlord and tenant

| TYPE OF TRANSACTION (RENT/SALE) (RESIDENTIAL/COMMERCIAL) | PROPERTY ADDRESS | CLOSING DATE | SPONSORING BROKER WHO PAID YOUR COMMISSION TO YOU |
|----------------------------------------------------------|-----------------|--------------|--------------------------------------------------|
| | | | |
| | | | |
| | | | |
| | | | |

By signing below, you and the sponsoring broker affirm you completed no less than 1,500 hours of active real estate salesperson experience, including at least 4 real estate transactions closed in the previous 3 years.

Applicant's Signature _______________________________ Date _______________

Printed Name of Sponsoring Broker: _______________________________ CT Broker License #: _______________

Signature of Sponsoring Broker _______________________________ Date _______________

Before you begin... Do NOT register for the examination if you have NOT received an Eligibility postcard from PSI. Be sure to read the section titled "Examination Registration and Scheduling Procedures" before filling out this form. You must provide all information requested and submit the appropriate fees. PLEASE TYPE OR PRINT LEGIBLY. Registration forms that are incomplete, illegible, or not accompanied by the proper fee will be returned unprocessed. Registration fees are not refundable or transferable.

1. Name Last Name: ____________________________ Generation: ______ First Name: ____________________________ M.I: ______
2. Social Security ______ - ______ - ______ (For Identification Purposes Only)
3. Mailing Address Number, Street: ____________________________ Apt. No: ______ City: ____________________________ State: ______ Zip Code: ______
4. Email Address ______________________________________ @_____________________________________________________
5. Telephone Cell: ______ - ______ - ______ Office: ______ - ______ - ______
6. Birth Date ______ / ______ / ______
7.
Exam (Check One) □ Broker – General and State □ Continuing Education □ Broker – General Only □ Broker – State Only □ First Time ($59 for both examination portions/$59 for one examination portion) □ Retake ($51 for both examination portions/$51 for one examination portion) You are also responsible for paying the application fee of $120.00. Application fees are non-refundable. 8. Fee Enclosed: □ $59 + $120 □ $51 Payment of fees may be made by credit card, company check, personal check, money order or cashier’s check, made payable to PSI. Cash is NOT accepted. Check one: □ VISA □ MasterCard □ American Express □ Discover Card No: ___________________________________________ Exp. Date: ________________ The card verification number may be located on the back of the card (the last three digits on the signature strip) or on the front of the card (the four digits to the right and above the card account number). Card Verification No: ________________ Billing Street Address: ____________________________________________ Billing Zip Code: ________________ Cardholder Name (Print): __________________________________________ Signature: _________________________ 9. School Code ______ ______ ______ ______ PSI Services LLC 3210 E Tropicana Las Vegas, NV 89121
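Note: For reference, the arithmetic behind sample salesperson question D (earlier in this bulletin) works out as follows, ignoring closing costs, which the question does not mention: a 90% loan on an $80,000 property means the lender finances 0.90 × $80,000 = $72,000, leaving a down payment of $80,000 − $72,000 = $8,000. Since a $2,500 deposit has already been paid, the additional cash the buyer must furnish is $8,000 − $2,500 = $5,500, i.e., answer 1.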
Website Advertisement

SUB: SCHEME FOR EMPANELMENT OF RETIRED OFFICIALS (EROS) - OFFICERS OF SCALE I TO IV OF MAHARASHTRA GRAMIN BANK FOR INSPECTION OF BRANCHES.

1. Format of application for empanelment.
2. Eligibility Criteria & Method of applying.
3. Short listing of applicants.
4. Selection Committee and interview.
5. Period of Engagement / Review of performance / Termination of engagement.
6. Remuneration / TA / HA.
7. Reporting structure / role and responsibilities of EROs.
8. Accountability / Terms & conditions.
9. Letter of acceptance of terms and conditions.
10. Methodology for conducting concurrent audit.
11. Undertaking by the applicant.

Format of application for Empanelment of retired officers of Maharashtra Gramin Bank for Inspection of Branches.

| No. | Particulars |
|-----|-------------|
| 1 | Name of Applicant |
| 2 | Staff No. |
| 3 | Complete postal/ Communication address with city/ pin code. |
| 4 | Landline/ Mobile No. |
| 5 | E-mail address. |
| 6 | Date of Birth |
| 7 | Age as on 01.06.2018 |
| 8 | Date of appointment in the Bank |
| 9 | Date of Promotion to officer cadre |
| 10 | Date of Superannuation/ Resignation |
| 11 | Total Service in years |
| 12 | Designation at the time of Retirement |
| 13 | PAN No. (Mandatory) |
| 14 | Branch Experience (in years) |
| 14a | Experience as BM (in years) |
| 15 | Experience as in charge of Credit Department in RO/HO, if any (in years) |
| 16 | Experience in Inspection Department in Bank, if any (in years) |
| 17 | Want to work in Region (3 Preferences): 1. 2. 3. |

I undertake to work anywhere in the area of operation of the bank, though opted to work in a particular region. I undertake to deposit Rs.50,000/- as security deposit and assign it in favour of the Bank. I confirm that I have read the terms and conditions of the appointment and will abide by the same as published on the Bank's website.

Date:
Place:
Signature

**ELIGIBILITY CRITERIA & METHOD OF APPLYING:**

a) Ex-officers of our bank retired on superannuation in Scale I to IV are eligible to apply for empanelment for conducting/assisting RBIA/other inspections.
b) The age of the applicant shall not be more than 63 years as on 01/06/2018.
c) Shall have a minimum of 20 years of service in our bank, good knowledge of the bank's systems and procedures, and the aptitude, analytical ability and flair to take up inspection assignments.
d) Shall have fair knowledge of CBS and other software packages used by the bank and adequate computer knowledge including MS Office.
e) The applicant shall have a good track record and should NOT have had a major penalty imposed during the last 3 years prior to retirement.
f) The candidate should NOT have had any punishment imposed during their entire service for any misconduct which was treated as one attracting a vigilance angle.
g) Candidates should have physical fitness and should be able to travel to distant branches / places for Inspection and Security Verification. A physical fitness certificate from a Qualified Medical Practitioner / Panel Doctor of the Bank / Government Doctor mentioned by the Bank should be submitted at the time of empanelment.
h) The eligible candidates shall download the application from the Bank's website www.mahagramin.in and submit the application through hard copy so as to reach The Chief Manager, Maharashtra Gramin Bank, Inspection Department, Head Office, Jeevanshree Plot no. 35, Sector G, Town center, CIDCO, Aurangabad - 431003 before the stipulated date, i.e.
01/07/2018.
i) The application should be for a particular Regional Office and the applicant should be ready to do the audit work in any of the branches attached to that Regional Office.
j) The candidate can apply for more than one Regional Office, if they so desire, duly indicating the order of preference.

**SHORTLISTING OF APPLICANTS**

a) Depending on the number of applications, the Chief Manager, Inspection Department shall decide about the short listing of applications for interview.
b) Short listing shall be done by a Committee which shall consist of the CGM, GM, Chief Manager: Inspection Department & Chief Manager: Staff Department.
c) The committee shall short list the candidates for interview based on the following criteria:
1. Experience as Branch-in-charge or II line Manager for at least one term of 3 years.
2. Worked in Inspection; special achievements during such assignments, such as discovery of major income leakage, unearthing of frauds, whistle blowing of malpractices, etc.
3. Worked in the Inspection follow-up Section of Head Office.
4. Worked in Advances/ Credit Department or related Departments in HO.
5. Exposure to Credit Appraisal / Risk Management.
6. Additional academic qualifications such as JAIIB / CAIIB, Certificate courses from IIBF on various topics, etc.
7. Geographical area & requirement of the Regional Office.
8. The CM, Inspection Department will decide the number of shortlisted candidates required for each empanelment and his decision shall be final.

**SELECTION COMMITTEE AND INTERVIEW:**

a) The selection committee shall interview the shortlisted applicants personally.
b) The committee for selection of EROs shall consist of the CGM, GM, Chief Manager Inspection Department and Chief Manager Staff Department.
c) The decision of the selection Committee shall be final.
d) Total marks for the interview shall be 100 on the basis of the following traits:

| Traits | Marks |
|---------------------------------------------|-------|
| Knowledge in the area of empanelment | 40 |
| Knowledge of CBS / Computer systems and other packages | 20 |
| Initiative / Analytical ability and innovation | 15 |
| Communication and Team Spirit | 15 |
| Leadership Quality | 10 |
| **Total** | **100** |

e) The minimum marks for selection shall be 50.

**PERIOD OF ENGAGEMENT:**

a) The services of EROs shall be availed initially for a period of one year, which may be renewed for a further period of one year at a time, twice, at the sole discretion of the bank, subject to suitability / satisfactory services / annual assessment and overall performance of the ERO. The Chairman shall authorize such renewals/extensions.
b) The period of engagement of the services of the ERO shall normally be for 3 years, or until the ERO attains 66 years of age, whichever is earlier.

**REVIEW OF PERFORMANCE:**

a) The performance of all the EROs shall be evaluated / reviewed by the CM, Inspection Department.
b) Criteria for performance evaluation shall be (a) quality of reporting, (b) mobility, (c) promptness to accept assignments, (d) timely completion of audit assignments, (e) period of absence, (f) satisfactory conduct, and (g) other grounds such as medical, etc.

**TERMINATION OF ENGAGEMENT:**

a) The engagement / assignment shall be terminated automatically when the ERO attains 66 years of age or on completion of 3 years of tenure, whichever is earlier.
b) The Chief Manager, Inspection Department shall be the authority for recommending the discontinuation or termination of the engagement of the services of EROs if the performance is not satisfactory, and the Chairman shall be the final authority to decide on the same.
c) No further engagement/assignment of EROs shall be made if any misconduct comes to the notice of the Bank, if work is not being done with the due diligence expected, or if performance is not found to be satisfactory, on the recommendations of the Chief Manager, Inspection Department.
d) The Bank reserves the right to de-panel any ERO at any time without notice and without assigning any reasons (a) in the event of getting any adverse reports / confidential opinion or (b) at any time the bank feels that its interest may be jeopardized, besides initiating such appropriate action as the bank deems fit.
e) An ERO may relinquish the assignment/empanelment (a) by giving 30 days' notice or (b) by paying 50% of the monthly remuneration to the bank.

**REMUNERATION:**

a) The EROs shall be eligible for a consolidated monthly remuneration depending on the scale as given below:

| Scale from which the ERO has retired | Consolidated Monthly Remuneration |
|--------------------------------------|----------------------------------|
| Scale I to III | Rs. 20,000/- |
| Scale IV | Rs. 25,000/- |

b) EROs shall not be eligible for any leave, other benefits, allowances or perquisites.
c) EROs shall be eligible for remuneration for the intervening holidays provided they have worked on the preceding and succeeding working days.
d) EROs shall be eligible for only pro-rata payment of the monthly remuneration under the following circumstances: (a) when EROs are not able to take up the assignments due to health grounds, etc.; (b) when the bank is not able to utilize the services of the ERO for a full calendar month due to administrative exigency (an illustrative calculation appears at the end of this notice).
e) The remuneration shall be paid on a monthly basis and shall be payable on the first working day of the succeeding month.

**TA/ HA**

a) EROs shall not be eligible for any conveyance allowance/reimbursements if they are taking up an assignment in the headquarters/units for which they are selected.
b) EROs will be paid TA/HA applicable to serving officials of the same grade in which the EROs attained superannuation.
c) No advance shall be permitted.
d) Claims are to be made on a monthly basis to the Inspection Department.
e) The sanctioning authority for TA/HA claims shall be the Chief Manager, Inspection Department.

**REPORTING STRUCTURE / ROLE AND RESPONSIBILITIES:**

a) The EROs shall not be utilized for administrative work such as processing, assessing, gradation etc.
b) EROs shall work under the close supervision of Management (Team leader/ Higher authorities) and the final sign-off of the RBIA reports shall be the responsibility of a serving bank official.

**ROLE AND RESPONSIBILITY OF EROs:**

a) EROs assigned to RBIA/ any other audits shall assist the regular inspecting officials during RBIA of branches.
b) Shall assist in verification/ inspection of Godowns/ securities/ other Assets during RBIA.
c) Shall assist in inspection of the security of loans (from pre-sanction stage to monitoring and follow-up) during RBIA of Branches.
d) Shall assist in informing the Inspection Department immediately in the event of any serious irregularities / frauds observed during inspection.
e) Shall assist in any other inspection assignments entrusted by the Inspection Department.
f) Shall assist in the conduct of KYC/AML snap audits etc. in branches.
g) Shall assist in the conduct of Income Audit.
h) Shall assist in off-site audit in RO/ HO.

**ACCOUNTABILITY**

a) The EROs shall be accountable for any acts of omission and commission in their work during the course of any type of Inspection.
b) The ERO's empanelment contract may be terminated in the event of such omissions and commissions, apart from the lodging of complaints with appropriate law enforcement agencies depending on the element of criminality / fraud / breach of trust, etc.

**TERMS AND CONDITIONS:**

a) The applicants shall appear for a personal interview at Head Office at their own cost.
b) Selection of candidates for empanelment will be at the sole discretion of the management.
c) The engagement of retired officials in the Bank shall be on a contract basis.
d) All the selected candidates shall sign a contract containing the terms and conditions of empanelment and make a security deposit of Rs.50,000/- (Rupees Fifty Thousand Only) in the form of a term deposit assigned in favour of the Bank. The amount of the deposit is refundable at the time of their leaving/discharge from their services. The Bank shall have the right to forfeit the deposit in case of any laxities/irregularities found during the discharge of duties which are likely to cause loss to the Bank or are considered to have been committed with mala fide intention.
e) The engaged retired officials shall not be eligible for reimbursement of medical or any other benefits/perquisites, festival advance, etc. during the engagement period.
f) The EROs are required to update their knowledge by going through the Circulars/Communications and instructions of the Bank.
g) They shall not exercise any administrative/financial powers during the period of engagement.
h) The engaged officials shall not accept any assignment with any other organization during the period of their contractual service in the Bank.
i) The contractual period shall not be reckoned as service for the purpose of superannuation benefits/PF/Bonus etc.
j) Income Tax or any other tax liabilities on remuneration shall be deducted as per the prevailing rate(s) mentioned in the Income Tax Rules.
k) The engaged officials shall follow the normal working hours as applicable to serving officials.
l) In order to avoid conflict of interest, the retired personnel so engaged shall not be assigned branches/Offices where they had worked while in active service with the bank.
m) The candidate should be prepared to undertake inspection work of any branch coming under the jurisdiction of the Regional Office to which they have applied, or any other branch, considering the administrative needs.
n) The allotted job should be completed within the allotted man-days and no remuneration/allowance shall be paid for the exceeded man-days unless permitted by the Chief Manager, Inspection Department.
o) Empanelled officers shall not be eligible for any leave facility as available to the serving officers.

Yours faithfully,
[Signature]
Chief General Manager.
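Illustrative pro-rata calculation (for guidance only; the scheme does not state whether proration is by calendar days or by working days, so calendar-day proration is assumed here): if a Scale IV ERO, whose consolidated remuneration is Rs. 25,000/- per month, can be utilized for only 18 days of a 30-day month owing to administrative exigency, the pro-rata payment would be Rs. 25,000 × 18/30 = Rs. 15,000/-.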
Adsorption of oxygen molecules on individual single-wall carbon nanotubes

A. Tchernatinsky, S. Desai, G. U. Sumanasekera, C. S. Jayanthi, and S. Y. Wu\textsuperscript{a)}
Department of Physics, University of Louisville, Louisville, Kentucky 40292

B. Nagabhirava and B. Alphenaar
Department of Electrical and Computer Engineering, University of Louisville, Louisville, Kentucky 40292

(Received 27 January 2005; accepted 5 December 2005; published online 6 February 2006)

Our study of the adsorption of oxygen molecules on individual semiconducting single-walled carbon nanotubes at ambient conditions reveals that the adsorption is physisorption, the resistance without O$_2$ increases by approximately two orders of magnitude as compared to that with O$_2$, and the sensitive response is due to the pinning of the Fermi level near the top of the valence band of the tube, resulting from impurity states of O$_2$ appearing above the valence band. © 2006 American Institute of Physics. [DOI: 10.1063/1.2163008]

I. INTRODUCTION

Interest in gas adsorption by carbon nanotubes at ambient conditions has been spurred by the demonstrations of the potential of single-walled carbon nanotube (SWCNT)-based gas sensors,\textsuperscript{1–3} specifically, the establishment of the interdependence between gas adsorption and transport properties of carbon nanotubes (CNTs). In recent years, experimental studies on the adsorption of oxygen molecules by SWNT bundles or mats included the measurements of electrical resistance and thermoelectric power,\textsuperscript{2,3} the effect of adsorption of O$_2$ on the barrier of the metal-semiconductor contact,\textsuperscript{4,5} and the kinetics of O$_2$ adsorption and desorption.\textsuperscript{6} The picture that emerged from these studies relevant to gas sensing indicates that the electrical resistance changes by about 15% between gassing and degassing,\textsuperscript{2} that the hole doping of semiconducting single-walled nanotubes (\textit{s}-SWNTs) in air is by the adsorption of O$_2$ in the bulk of \textit{s}-SWNTs (Ref. 5) rather than at the contact,\textsuperscript{4} and that the adsorption of O$_2$ has the characteristics of physisorption.\textsuperscript{6} Theoretical investigations of the adsorption of O$_2$ on SWNTs have also been carried out, using spin-unpolarized as well as spin-polarized density-functional theory (DFT) methods.\textsuperscript{7–12} Studies of the adsorption of O$_2$ on the small-diameter (8,0) SWNT based on the spin-unpolarized DFT within the local-density approximation (LDA) predicted a weak hybridization between states of O$_2$ and those of the \textit{s}-SWNT with an estimated charge transfer of $\sim 0.1e$,\textsuperscript{7,9} leading to a binding of O$_2$ at a distance less than 3 Å from the \textit{s}-SWNT. The hole doping of the \textit{s}-SWNT was attributed to the pinning of the Fermi level at the top of the valence band due to the adsorption of O$_2$. With the O$_2$ molecule having a triplet ground state, the more realistic calculations based on the spin-polarized gradient-corrected DFT,\textsuperscript{8,10,12} on the other hand, yielded a very weak bonding at $\sim 4$ Å with no significant charge transfer, indicating that an O$_2$ molecule in the more stable triplet state is only physisorbed on a \textit{s}-SWNT.
For the triplet state of O$_2$ adsorbed on the (8,0) SWNT, two degenerate $pp\pi^*$ bands were found to split into four bands, with the two unoccupied $pp\pi^*(\downarrow)$ bands rising $\sim 0.35$ eV above the top of the valence band at the $\Gamma$ point,\textsuperscript{12} casting some doubt on the hole-doping picture deduced from the unpolarized calculation. In order to obtain a coherent and consistent picture of the adsorption of O$_2$ by individual \textit{s}-SWNTs, we have conducted a careful experimental and theoretical investigation of the adsorption of O$_2$ molecules by individual SWNTs to shed light on the nature of adsorption and its effect on the transport properties of SWNTs. Experimentally, contacts were made to a few very dispersed SWNTs using \textit{e}-beam lithography. The experiment was first conducted under ambient conditions in air (room temperature and atmospheric pressure). The resistance was monitored during each exposure to air and subsequent pumping ($10^{-6}$ Torr). A resistance change of more than one order of magnitude was observed as a result of the adsorption of O$_2$ by \textit{individual} \textit{s}-SWNTs, in dramatic contrast to a mere 15% change observed for SWNT bundles or mats. Furthermore, the onset of the change in resistance occurred within minutes. These observations clearly demonstrated the feasibility of constructing \textit{s}-SWNT-based chemical sensors. To be more consistent with the experimental result, we have carried out a study on the adsorption of an O$_2$ molecule by a larger SWNT than the one considered in previous studies, the (14,0) SWNT, which is closer to the range of diameters in the experiment, using the spin-polarized DFT method. Our calculation using the spin-polarized generalized gradient approximation (GGA) yielded a shallow potential well with a depth of the order of $\sim 0.05$ eV at $\sim 3.6$ Å from the surface of the SWNT, consistent with the picture of physisorption. We have determined the pinning of the Fermi energy due to the impurity level associated with O$_2$. Our estimate of the resistance based on the result of the (14,0) tube with the adsorption of the O$_2$ molecule is in excellent agreement with the observed initial resistance in air, indicating the metallization of the \textit{s}-SWNT by hole doping associated with the physisorbed O$_2$. We have also predicted a change in the resistance of about two orders of magnitude between gassing and degassing.

II. EXPERIMENTAL RESULTS

Individual SWNTs were synthesized using chemical vapor deposition (CVD) with an Fe catalyst and CH$_4$ on a SiO$_2$/Si substrate with prepatterned grid marks. Silicon (100) with a thin oxide layer (0.4 $\mu$m) was selected for the growth process. The grid pattern (Au alignment marks) was fabricated on the SiO$_2$/Si substrate using $e$-beam lithography and etched using a basic oxide etch (BOE). The alignment marks were etched so that they can be seen in atomic force microscopy (AFM) imaging. The preparation of the catalyst solution follows the procedure given in Ref. 13. Fe nanoparticles were dispersed on the substrate from Fe(NO$_3$)$_3$ propanol solution. After washing with hexane, the substrate was loaded into the CVD reactor and heated to 900 °C in flowing Ar/H$_2$ [100 SCCM (standard cubic centimeter per minute) of 10% H$_2$ in Ar]. After reduction at 900 °C for 10 min, methane, the carbon feed, was introduced at a rate of 400 SCCM for 2 min. The sample was cooled in argon.
The SWNT samples were imaged using the AFM with reference to the alignment marks in the grid pattern and the Au/Ti contacts were made on the SWNTs using $e$-beam lithography and evaporation. Larger contact pads were deposited on the $e$-beam-defined contacts using optical lithography (see Fig. 1). The device was loaded into a quartz reactor equipped with a turbo-molecular pump capable of evacuating to $10^{-7}$ Torr for \textit{in situ} studies. The reactor has provisions for gases and chemical vapors. The experiment was first conducted under ambient conditions (room temperature and atmospheric pressure). The two-probe resistance of the device was measured during the exposure to air and subsequent pumping at room temperature. The resistance was continuously monitored during each exposure and subsequent pumping. Figure 2 shows the time evolution of the two-probe resistance of the device during pumping and subsequent exposure. The data for two cycles are shown. The two-terminal resistance of the as-prepared device was $\sim 300$ k$\Omega$. During pumping ($\leq 10^{-6}$ Torr), the resistance started to increase and eventually saturated at a value of $\sim 16$ M$\Omega$ within a period of $\sim 1$ h. This represents a change of the resistance of close to two orders of magnitude for \textit{individual} SWNTs, a dramatic change in comparison with the $\sim 15\%$ change observed for SWNT bundles or mats.\textsuperscript{2} When exposed to air at this point, the resistance started to decrease, initially with an abrupt drop to $\sim 2.5$ M$\Omega$ within $\sim 15$ min. This substantial drop in the resistance within such a short time interval after the exposure to air indicates the sensitivity of the response of individual SWNTs to the adsorption of gases in air. The initial drop in resistance was followed by a much slower decrease, saturating at the initial value of $\sim 300$ k$\Omega$ in $\sim 10$ h. A similar behavior was observed in another cycle as shown in Fig. 2. The experimental findings suggest that the fabricated device is most likely composed of $s$-SWNTs and that the findings reflect the response of the transport properties of $s$-SWNTs during the exposure to O$_2$ in air and subsequent pumping. We established the semiconducting nature of our device by measuring the gate voltage dependence of the conductance of the device at room temperature, using a Si substrate as the back gate. We found that when the positive gate voltage is increased, the conductance decreases, while the conductance increases when the negative gate voltage is increased. As the conductance of metallic tubes should have little or no gate voltage dependence, while an increasing negative gate voltage adds more holes to $p$-type $s$-SWNTs, thereby increasing the conductance, we conclude that our device consists of only $s$-SWNTs with $p$-type behavior when exposed to air. This conclusion is consistent with the measurement of a positive thermoelectric power in the case of the adsorption of O$_2$ molecules by SWNT bundles reported in Ref. 3, which indicates a $p$-type behavior for $s$-SWNTs with the adsorption of O$_2$ molecules. In Fig. 3 we show the gate voltage ($V_g$) dependence of $I_{ds}$ (for $V_{ds}=300$ mV) before and after the removal of air. The degenerately doped Si substrate was used as the back gate. The data were collected after the resistance reached the saturation values for both increasing and decreasing gate voltages.
The $I_{ds}$ vs $V_g$ characteristics clearly show that the air-doped SWNTs (corresponding to the lowest resistance value) behave as a $p$-type semiconductor, i.e., they are on for a negative gate bias. The device shows some ambipolar properties, as it is not completely off for a positive gate bias, typical of most SWNT-based field-effect transistors (FETs). After pumping, its electronic character changes to $n$ type (corresponding to the highest resistance value), as it exhibits a complete off state for a positive gate bias. The hysteresis observed in $I_{ds}$ for decreasing and increasing gate voltages has been observed in most SWNT-based FETs and interpreted as due to trapped electrons. Most interestingly, the threshold voltage (marked in Fig. 3 by a downward arrow) for both $p$- and $n$-type devices is essentially the same. This is a conclusive indication that there is no charge transfer between the adsorbed O$_2$ molecules and the $s$-SWNT, which is confirmed by our theoretical calculations, to be presented in Sec. III.

III. THEORETICAL ANALYSIS

To shed light on the physics underlying the change in the transport properties during the adsorption and desorption of O$_2$ molecules by $s$-SWNTs, we carried out a detailed study of the adsorption of an O$_2$ molecule on a (14,0) $s$-SWNT, using spin-polarized LDA as well as spin-polarized GGA DFT methods in the Vienna $ab$ initio simulation package (VASP).\textsuperscript{14–16} We chose to use the (14,0) SWNT as the benchmark because its diameter ($d=1.10$ nm) is close to the range of diameters of typical SWNTs and a recent DFT calculation has established the $1/d$ dependence of the energy gap of $s$-SWNTs to be valid only for $d \geqslant 1.0$ nm.\textsuperscript{17} In our calculation, we used a supercell of size $26 \times 26 \times 8.54$ Å to minimize the image interactions between SWNTs. Along the axial direction of the SWNT, this supercell consists of two SWNT unit cells, so that the calculation reflects well the situation of the physisorption of individual O$_2$ molecules. Vanderbilt's ultrasoft pseudopotential\textsuperscript{18,19} and the Perdew-Zunger functional,\textsuperscript{20} with the GGA correction of Perdew et al.,\textsuperscript{21} were used for the self-consistent spin-polarized solution. The energy cutoff was set at 700 eV. The Monkhorst-Pack scheme with a $1 \times 1 \times 11$ $k$-point mesh was used for sampling the Brillouin zone. Full optimization of the structural configuration of SWNT+O$_2$ and the lattice constants was carried out using the conjugate gradient method with an energy convergence of $10^{-5}$ eV and forces $\leqslant 10^{-2}$ eV/Å. Our calculations, using spin-unrestricted LDA as well as GGA, confirmed that the triplet O$_2$ state has a lower energy as compared with the singlet state. Before using the VASP code to investigate the benchmark case of the (14,0) $s$-SWNT+O$_2$, we applied it to the case of the (8,0) $s$-SWNT+O$_2$ with O$_2$ near the $T$ site.\textsuperscript{12} The optimization yields a result in excellent agreement with the corresponding result in Ref. 12 [see Fig. 5(g) in Ref. 12]. Having established the validity of the VASP code, we carried out optimizations of the adsorption of O$_2$ on the (14,0) $s$-SWNT with spin-polarized methods (LDA and GGA). We found the binding to be the strongest for the triplet O$_2$ molecule near the top of two adjacent zigzag bonds ($T$ site), with the molecular axis perpendicular to the axial direction of the SWNT (see Fig. 4, left panel).
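As a rough illustration of the geometry just described, the sketch below assembles the (14,0) tube + O$_2$ supercell. This is our own sketch, not the authors' workflow: it assumes the Atomic Simulation Environment (ASE) is available, the 3.5 Å placement simply reuses the GGA equilibrium distance quoted in the text, and the variable names are ours. The VASP settings given above (700 eV cutoff, spin polarization, $1 \times 1 \times 11$ $k$ mesh) would be supplied to whatever calculator is attached to the resulting Atoms object.

```python
# Minimal geometry sketch (assumes ASE and NumPy are installed).
# Builds the (14,0) SWNT + O2 supercell described in the text; an
# illustration only, not the authors' script.
import numpy as np
from ase.build import nanotube, molecule

# Two axial unit cells of a (14,0) zigzag tube (total period ~8.54 A).
cnt = nanotube(14, 0, length=2)

# 26 x 26 A lateral box to suppress interactions between periodic images.
axial = cnt.cell[2, 2]
cnt.set_cell([26.0, 26.0, axial])
cnt.center()

# Mean tube radius, used below to place O2 just outside the wall.
xy = cnt.get_positions()[:, :2]
center = xy.mean(axis=0)
radius = np.linalg.norm(xy - center, axis=1).mean()

# Triplet O2 (two unpaired electrons); rotate the molecular axis from z to y
# so that it is perpendicular to the tube axis, as in the relaxed T-site
# geometry reported in the text.
o2 = molecule('O2')
o2.set_initial_magnetic_moments([1.0, 1.0])
o2.rotate(90, 'x')

# Put the O2 centre of mass ~3.5 A outside the tube wall (the GGA
# equilibrium distance reported in the text).
target = np.array([center[0] + radius + 3.5, center[1], axial / 2.0])
o2.translate(target - o2.get_center_of_mass())

system = cnt + o2
system.pbc = True  # periodic supercell, ready for a spin-polarized DFT run
print(system)
```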
The relaxed bonding geometries (bond length and equilibrium orientation of O$_2$) from both methods are almost the same, except for the equilibrium distance from O$_2$ to the surface of the (14,0) SWNT. Figure 4 (right panel) shows a weak potential well of depth $\sim 0.1$ eV at a distance of $\sim 3.0$ Å for the LDA result and a very shallow well of $\sim 0.03$ eV at a distance of $\sim 3.5$ Å for the GGA result. These results are consistent with the scenario of physisorption. For physisorption characterized by weak interactions, LDA tends to overestimate the binding and underestimate the equilibrium distance, while GGA tends to underestimate the binding. From our result, one can conclude that the physisorption of O$_2$ on the (14,0) $s$-SWNT is characterized by a potential well of depth between 0.03 and 0.1 eV and an equilibrium distance between 3.0 and 3.5 Å. Figure 5 shows the band structures in the vicinity of the energy gap of relaxed configurations of the adsorption of triplet O$_2$ on the surface of the (14,0) $s$-SWNT obtained by LDA and GGA, respectively. The energy gap obtained by the GGA calculation is $\sim 0.69$ eV, while that by LDA is $\sim 0.60$ eV. We have also checked the band structure for the pristine (14,0) $s$-SWNT using the same methods. We found the same values for the gap and no difference in the band structures as compared with those for the case of (14,0)+O$_2$ by the respective methods. Furthermore, the unoccupied oxygen $pp\pi^*(\downarrow)$ bands were found to appear within the gap of the $s$-SWNT, almost dispersionless, in both calculations. These results present an unambiguous indication of a very weak interaction between the oxygen molecule and the (14,0) $s$-SWNT, reinforcing the scenario of physisorption. In this sense, our calculations have essentially established the placement of the empty impurity bands within the gap of the $s$-SWNT. Specifically, for the GGA calculation, the lower $pp\pi^*(\downarrow)$ band is $\sim 0.20$ eV above the top of the valence band, while that for the LDA is $\sim 0.24$ eV above the top of the valence band. To summarize, our study indicates no charge transfer between O$_2$ and the (14,0) $s$-SWNT. The effect of the presence of the oxygen impurity bands is to pin the Fermi level to the vicinity of the top of the valence band. The conductance for the pristine $s$-SWNT and that for the $s$-SWNT with O$_2$ under ambient conditions can be estimated according to $$G = \frac{2e^2}{h} \int_{-\infty}^{\infty} T(E) \left( -\frac{\partial f}{\partial E} \right) dE \approx G_0 \left( \frac{2}{1 + e^{\Delta/2kT}} \right),$$ where $T(E)$ is the transmission coefficient as a function of $E$ and may be approximated by 2 in the vicinity of the Fermi energy for SWNTs, $f(E)$ is the Fermi distribution function, $G_0=4e^2/h$ is the quantum conductance, and $\Delta$ is the energy gap. Using Eq. (1) based on the energy gap with or without O$_2$ obtained by LDA as well as GGA, we have calculated the resistances of the (14,0) $s$-SWNT with or without O$_2$ at room temperature. The results are shown in Table I. It can be seen that the GGA method yields a value of $\sim 400$ k$\Omega$ for the resistance with O$_2$, in very good agreement with the experimental result, while the LDA method gives rise to a value of $\sim 680$ k$\Omega$.
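To make Eq. (1) concrete, the short script below evaluates it numerically. This is our own illustration, not part of the paper: the choice $T = 300$ K and the function names are assumptions, and since the text does not spell out the effective gap used for the pinned (with-O$_2$) case, the script simply reports the resistance for a few representative values of $\Delta$; the magnitudes and ratios come out in the same range as those quoted in Table I.

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
PLANCK = 6.62607015e-34      # Planck constant, J s
KB_EV = 8.617333262e-5       # Boltzmann constant, eV/K

def resistance_kohm(gap_ev, temperature=300.0):
    """Two-terminal resistance from Eq. (1):
    G ~= G0 * 2 / (1 + exp(gap / 2kT)), with G0 = 4 e^2 / h and the
    transmission T(E) ~= 2 near the Fermi level."""
    g0 = 4.0 * E_CHARGE**2 / PLANCK                       # quantum conductance, S
    g = g0 * 2.0 / (1.0 + math.exp(gap_ev / (2.0 * KB_EV * temperature)))
    return 1.0 / g / 1.0e3                                # ohm -> kOhm

# Pristine (14,0) tube, GGA gap of ~0.69 eV (no O2): giga-ohm range.
print(resistance_kohm(0.69))

# Fermi level pinned near the valence band by the empty O2 band, which lies
# ~0.20 eV above the valence-band top in GGA: of order 10^2 kOhm.
print(resistance_kohm(0.20))

# Diameter scaling used in the text for a typical d = 1.4 nm tube:
# Delta_adj = Delta * 1.09 / 1.4.
print(resistance_kohm(0.69 * 1.09 / 1.4))
```

Because Eq. (1) depends exponentially on $\Delta$, shifting the effective gap by a few tenths of an eV is enough to move the resistance by the roughly two orders of magnitude seen between the degassed and gassed states.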
For the (14,0) $s$-SWNT, the GGA method leads to a resistance increase by a factor of $\sim 1.33 \times 10^4$ between the resistance of the $s$-SWNT without O$_2$ and that with O$_2$, while the LDA leads to an increase in resistance by a factor of $1.08 \times 10^3$. This resistance change can be attributed to the pinning of the Fermi level to the vicinity of the valence band due to the presence of the empty oxygen bands. Since the diameter of a typical SWNT is $\sim 1.40$ nm and the gap follows a $1/d$ dependence on the diameter for $d \geq 1$ nm, we estimated the energy gap of the typical $s$-SWNT using the calculated gap of the (14,0) SWNT ($d=1.09$ nm) according to $\Delta_{\text{adj}} = \Delta \times 1.09/1.4$. Using $\Delta_{\text{adj}}$, we obtained a resistance increase by a factor of 82 for the LDA result and 652 for the GGA result, consistent with the experimental result.

### IV. DISCUSSION

Our experimental study and theoretical analysis clearly lead to the following conclusions concerning the adsorption and desorption of O$_2$ molecules on $s$-SWNTs: (i) The resistance change between the desorption and adsorption of O$_2$ molecules by an individual $s$-SWNT is approximately two orders of magnitude. The response of individual $s$-SWNTs to the exposure to O$_2$ molecules is therefore far more sensitive as compared with the response of SWNT bundles or mats studied previously. (ii) The adsorption of O$_2$ molecules on $s$-SWNTs is unequivocally physisorption. There is no charge transfer between the O$_2$ molecules and the $s$-SWNT. (iii) The sensitive response of $s$-SWNTs to the adsorption of O$_2$ molecules is due to the pinning of the Fermi level near the top of the valence band.

| Method | $R$ w/o O$_2$ (k$\Omega$) | $R$ with O$_2$ (k$\Omega$) | Ratio | $R$ w/o O$_2$ (k$\Omega$), adjusted | Ratio, adjusted |
|--------|---------------------------|----------------------------|-------|-------------------------------------|-----------------|
| LDA | $7.4 \times 10^5$ | $6.8 \times 10^2$ | $1.08 \times 10^3$ | $5.6 \times 10^4$ | $8.20 \times 10^1$ |
| GGA | $5.2 \times 10^6$ | $4.0 \times 10^2$ | $1.33 \times 10^4$ | $2.6 \times 10^5$ | $6.52 \times 10^2$ |

ACKNOWLEDGMENTS

We would like to acknowledge the support by the NSF (DMR-0112824 and ECS-0224114) and the DOE (DE-FG02-00ER4582).

1. J. Kong, N. R. Franklin, C. Zhou, M. G. Chapline, S. Peng, K. Cho, and H. Dai, Science 287, 622 (2000).
2. P. G. Collins, K. Bradley, M. Ishigami, and A. Zettl, Science 287, 1801 (2000).
3. C. K. W. Adu, G. U. Sumanasekera, B. K. Pradhan, H. E. Romero, and P. C. Eklund, Chem. Phys. Lett. 337, 31 (2001).
4. V. Derycke, R. Martel, J. Appenzeller, and Ph. Avouris, Appl. Phys. Lett. 80, 2773 (2002).
5. M. Shim and G. P. Siddons, Appl. Phys. Lett. 83, 3564 (2003).
6. H. Ulbricht, G. Moos, and T. Hertel, Phys. Rev. B 66, 075404 (2002).
7. S.-H. Jhi, S. G. Louie, and M. L. Cohen, Phys. Rev. Lett. 85, 1710 (2000).
8. D. C. Sorescu, K. D. Jordan, and Ph. Avouris, J. Phys. Chem. B 105, 11227 (2001).
9. J. Zhao, A. Buldum, J. Han, and J. P. Lu, Nanotechnology 13, 195 (2002).
10. P. Giannozzi, R. Car, and G. Scoles, J. Chem. Phys. 118, 1003 (2003).
11. M. Grujicic, G. Cao, and R. Singh, Appl. Surf. Sci. 211, 166 (2003).
12. S. Dag, O. Gülseren, T. Yildirim, and S. Ciraci, Phys. Rev. B 67, 165424 (2003).
13. J. H. Hafner, C. L. Cheung, Th. Oosterkamp, and C. M. Lieber, J. Phys. Chem. B 105, 743 (2001).
14. G. Kresse and J. Hafner, Phys. Rev. B 48, 13115 (1993).
15. G. Kresse and J. Furthmüller, Phys. Rev. B 54, 11169 (1996).
16. G. Kresse and J. Furthmüller, Comput. Mater. Sci. 6, 15 (1996).
Furthmüller, Comput. Mater. Sci. 6, 15 (1996). 17V. Zólyomi and J. Kürti, Phys. Rev. B 70, 085403 (2004). 18D. Vanderbilt, Phys. Rev. B 41, 7892 (1990). 19G. Kresse and J. Hafner, J. Phys.: Condens. Matter 6, 8245 (1994). 20J. P. Perdew and A. Zunger, Phys. Rev. B 23, 5048 (1981). 21J. P. Perdew, J. A. Chevary, S. H. Vosko, K. A. Jackson, R. D. Rederson, D. J. Singh, and C. Fiolhais, Phys. Rev. B 46, 6671 (1992).
The Commissioner
Productivity Commission
GPO Box 1428, Canberra City ACT 2601
Australia

1 December 2009

Contribution of the Not for Profit Sector

Dear Commissioner,

HSC & Company is a leading philanthropy and community investment strategy consulting firm. Much of our work involves operating as an intermediary at the intersection between the public, private and non profit sectors. As such we have an active interest in optimizing mechanisms that enhance philanthropy as well as individual and corporate community investment. Our firm also focuses on applying collaborative innovation to help address and solve systems-level issues that hinder the progress of organisations wishing to contribute toward or participate in the delivery of social impact.

We would like to congratulate the Productivity Commission on a well-articulated, insight-rich draft research report. Our response to the draft findings focuses on specific topics and is based on our internal global research, insight from commercial engagements and discussions with members of the philanthropic, private and non profit sector leadership communities. In the interest of pragmatism and a desire to see practical implementation of key recommendations, our response also offers cautions and guidance on how the design of an outcome-focused, non profit sector 'blueprint' may be approached.

In summary our response covers:

1. Building stronger, more effective relationships for the future
Effective representation and leadership of the NFP sector to inform and collaborate with the Australian Government, intermediaries and the private sector

2. Sector Development
Increased contribution by the NFP sector based on consistent input and outcome evaluation

3. Stimulating social investment
Tangible benefits linked to removing roadblocks that prevent the establishment of sustainable social enterprises and fostering social innovation.

HSC & Company welcomes the opportunity to work closely with the Productivity Commission on these critical, sector-shaping issues, and would be pleased to contribute further specific research input prior to the finalisation of your report.

Regards,

Phil Hayes-St Clair
Chief Executive Officer

RESPONSE TO THE PRODUCTIVITY COMMISSION DRAFT RESEARCH REPORT ON THE CONTRIBUTION OF THE NOT FOR PROFIT SECTOR
1 DECEMBER 2009

CONTENTS

EXECUTIVE SUMMARY
BUILDING STRONGER, MORE EFFECTIVE RELATIONSHIPS FOR THE FUTURE
Effective representation and leadership of the NFP sector to inform and collaborate with the Australian Government, intermediaries and the private sector.
SECTOR DEVELOPMENT
Increased contribution by the NFP sector based on consistent input and outcome evaluation.
STIMULATING SOCIAL INVESTMENT
Tangible benefits linked to removing roadblocks that prevent the establishment of sustainable social enterprises and fostering social innovation.
APPENDIX A
L3C Hybrid corporate structure
APPENDIX B
Aggregation Incentive Program

EXECUTIVE SUMMARY

HSC & Company is a leading philanthropy and community investment strategy consulting firm. Much of our work involves operating as an intermediary at the intersection between the public, private and not-for-profit (NFP) sectors. As such we have an active interest in optimizing mechanisms that enhance philanthropy as well as individual and corporate community investment.
Our firm also focuses on applying collaborative innovation to help address and solve systems-level issues that hinder the progress of organisations wishing to contribute toward or participate in the delivery of social impact.

We would like to compliment the Productivity Commission on its efforts in researching and presenting the Contribution of the Not for Profit Sector draft research report. When considering the proposed recommendations in the context of the advancements other developed nations have made in understanding and improving the contribution of their NFP sectors, we believe Australia is placed in a unique and advantageous position. Although the Australian NFP sector is subject to various historic systems and processes, access to learnings and perspectives from international peers can expedite the design and implementation of best practice in the NFP sector, its intermediaries and the way in which other sectors engage with the NFP sector.

Our comments are made in our specialised capacity as an advisor on corporate and public sector community investment and in our role as philanthropy advisors to families and individuals. We recognise that whilst these two types of guidance are very different, they remain interconnected. That said, we reiterate Philanthropy Australia's position\(^1\) that "the Productivity Commission recognise the special role of philanthropy as a separate, specific and key segment of the NFP sector".

In acknowledging and lending our support to the general direction of the recommendations of this report, there is a need to prioritise and apply focus to selected items that will underpin tangible and positive change at the core of the NFP sector and at the intersection points where it engages other sectors. It is our view that implementation of three critical recommendations will lay the essential foundations that will result in:

1. **Building stronger, more effective relationships for the future**
Effective representation and leadership of the NFP sector to inform and collaborate with the Australian Government, intermediaries and the private sector.

2. **Sector Development**
Increased contribution by the NFP sector based on consistent input and outcome evaluation.

3. **Stimulating social investment**
Tangible benefits linked to removing roadblocks that prevent the establishment of sustainable social enterprises and fostering social innovation.

We have also included some independent strategic thinking relating to models that we believe will promote social innovation (Appendix A) and enhance NFP sector effectiveness as it relates to duplication of effort by NFP organisations (Appendix B). In addition, HSC & Company is prepared to share privately with the Productivity Commission proprietary tools and platforms designed to address and solve some of the challenges more broadly outlined in the draft research report.

We look forward to working with the Productivity Commission and the Australian Government to advance and support the implementation of these recommendations. Our experience and learnings as an intermediary facilitating greater capital flows to the NFP sector position us well to assist the Productivity Commission in further researching and validating key items prior to the issue of the final report.

---

\(^1\) Philanthropy Australia 2009, Draft Submission to PC on Draft Report, Melbourne.
1. BUILDING STRONGER, MORE EFFECTIVE RELATIONSHIPS FOR THE FUTURE

Effective representation and leadership of the NFP sector to inform and collaborate with the Australian Government, intermediaries and the private sector

| A. CONTEXT |
|-------------|
| In our experience we see a gradual blurring of the traditional sector divisions that historically meant the NFP sector operated in isolation from the public and private sectors. As collaborative efforts increase to improve the effectiveness of the NFP sector and enhance the unique impact the NFP sector can deliver, there is a need to ensure that cross-sector, well-supported leadership - one that respects historic perspectives and caters for the views and motivations of emerging leaders - is established to provide focus to the design and implementation of strategic blueprints and initiatives that will deliver positive, multi-sector results. |

| B. RELEVANT DRAFT RECOMMENDATIONS |
|----------------------------------|
| 13.2 The Australian Government should establish an Office for NFP Sector Engagement within the Prime Minister's portfolio, for an initial term of five years. The Office would support the Australian Government in its efforts to: ▶ implement sector regulatory and other reform and the implementation of the Government's proposed compact with the not-for-profit sector ▶ promote the development and implementation of the proposed Information Development Plan ▶ initially fund and oversee the establishment of the proposed Centre for Community Service Effectiveness ▶ implement the proposed contracting reforms in government-funded services ▶ act as a catalyst for the promotion and funding by government agencies of social innovation programs ▶ facilitate stronger community and business collaboration. The Office should, through the relevant Minister, report publicly on an annual basis on its achievements. |

| C. RATIONALE FOR IMPLEMENTING RECOMMENDATION |
|---------------------------------------------|
| Implementing this recommendation will provide the NFP sector, the Australian Government and the private sector with a leadership point that can be accountable for driving progress on key items and issues. To date the lack of leadership: ▶ has been a fundamental point of frustration for stakeholders within the NFP sector and adjoining sectors ▶ has likely contributed significantly to a lack of progress in implementing recommendations from previous Australian Government reports, including the 1995 Industry Commission Inquiry, the 2001 Charities Definition Inquiry, and the 2008 Senate Economics Committee Inquiry into Disclosure Regimes (Baldwin C, 2009, Social Sector Reform: An Overview of Current Australian Government Initiatives, Centre for Social Impact - UNSW, Sydney) |

| D. TIMELINE |
|-------------|
| 1. Detailed design by a cross-sector leadership team - 12 months. Detailed design should include (but not be limited to) the development of: ▶ leadership protocols ▶ strategic blueprints and accompanying implementation schedules ▶ public reporting standards 2. Program of work to commence immediately after the detailed design phase. |

| E. SHORT TERM BENEFITS |
|------------------------|
| Appointing a well-supported leadership group - comprising cross-sector, multi-generational leaders with access to the Prime Minister - will signal a commitment by the Australian Government to NFP sector participants seeking the necessary reform leadership. |
| F. MEDIUM TO LONG TERM BENEFITS |
|---------------------------------|
| The Office for NFP Sector Engagement is a key stakeholder in implementing other key recommendations (refer to Section H below). This office will also provide a central point of contact, information and collaboration for NFP peak bodies and other government agencies that to date has been unavailable. |

| G. PRACTICAL ISSUES TO CONSIDER |
|---------------------------------|
| We understand the Australian Government is unlikely to support an imposed NFP sector leadership structure. As such we encourage the Productivity Commission to consider options to include active participation of NFP sector and private sector professionals in the design and operation of an Office for NFP Sector Engagement. Furthermore, many of the changes outlined in the Productivity Commission's report, if implemented, are likely to span a long-term time frame. As such, the perspectives of existing and emerging leaders within, and engaged with, the NFP sector should be considered in developing the way forward. |

| H. OTHER RECOMMENDATIONS THAT CAN NOW BE IMPLEMENTED |
|------------------------------------------------------|
| DRAFT RECOMMENDATION 5.2 & 5.3 Improving comparability and usefulness of information collected DRAFT RECOMMENDATION 5.4 Improving evidence-based practice through better evaluation DRAFT RECOMMENDATION 6.2 & 6.3 Reducing unnecessary compliance burdens DRAFT RECOMMENDATION 6.4 Consolidating Commonwealth regulation and improving transparency DRAFT RECOMMENDATION 7.3 & 7.4 Improving the environment to support sector access to funding |

2. SECTOR DEVELOPMENT

Increased contribution by the NFP sector based on consistent input and outcome evaluation

| A. CONTEXT |
|-------------|
| The decade to 2009 has witnessed an increased global focus on the effectiveness, sustainability and efficiency of NFP organisations and non-government organisations (NGOs). This has largely resulted from humanitarian crisis events (e.g. the 2004 tsunami) and publications (e.g. Charities - How much of your donation is gobbled up by fundraising fees and expenses, Choice Magazine, 2008) which have highlighted the need for a more consistent understanding of how these organisations function and deliver outcomes consistent with their mission. In the same period an array of 'self-regulator' style platforms has been developed, including Charity Navigator, Guidestar and GiveWell. Though these platforms have enjoyed mixed success, their evolution signals a desire to promote greater effectiveness and transparency of investment in, and delivery of, social impact. |

| B. RELEVANT DRAFT RECOMMENDATIONS |
|----------------------------------|
| 5.3 To minimise compliance costs and maximise the value of data collected, Australian governments should agree to implement a reform agenda for reporting and evaluation requirements for not-for-profit organisations involved in the delivery of government funded services. This should: - commit to basing reporting and evaluation requirements in service delivery contracts on a common measurement framework (appropriately adapted to the specific circumstances of service delivery) - require expenditure (input) measures to be based on the Standard Chart of Accounts - ensure that information generated through performance evaluations is returned to service providers to enable appropriate learning to take place and for organisations to benchmark their performance - embody, where practicable, the principle of 'report once, use often' |
| C. RATIONALE FOR IMPLEMENTING RECOMMENDATION |
|---------------------------------------------|
| The inputs and outputs of evaluation are of equal relevance and importance. INPUTS Notwithstanding other inputs, the need for a consistent basis for accounting and financial management is critical to achieving useful and insightful 'input' evaluation. The Centre for Philanthropy and Non Profit Studies (CPNS) at the Queensland University of Technology has made material advancements in developing a fit-for-purpose (and well-received) Standard Chart of Accounts for NFP organisations. A coordinated, incentive-based, national effort to implement this Standard Chart of Accounts will underpin a variety of future initiatives and reduce the costs and effort associated with the diversity of current practices. OUTPUTS A common measurement framework is appropriate for Australian Government funded service delivery contracts. Incentivising NFP organisations and NGOs that operate independently of these contracts (or are developing capability to participate in such contracts) to adopt other appropriate measurement frameworks, including Social Return on Investment (SROI), Results-Based Accountability (RBA) or the Logical Framework ("log frame"), will help create the paradigm shift to an outcome-focused NFP sector. NOTE: Implementing consistent evaluation should not be limited to organisations involved in the delivery of government services but extended to organisations granted DGR or TCC status. |

| D. TIMELINE |
|--------------|
| INPUTS Leveraging existing experience: 1. Assemble a task group (overseen by the Office for NFP Sector Engagement) whose mission is to identify, outline a plan for and begin addressing implementation challenges relating to the Standard Chart of Accounts - 12 months 2. Implement and transition to a nationwide Standard Chart of Accounts* - 24 months OUTPUTS 1. Detailed design of a common evaluation methodology (including other specific evaluation mechanisms) and a related implementation schedule - 12 months 2. Implement and transition to a common evaluation methodology* - 24 to 36 months * This includes ongoing training and support to maximise the medium- and long-term benefits relating to high standards of information quality. |

| E. SHORT TERM BENEFITS |
|------------------------|
| A foreshadowed move to a sensible approach to evaluation may encourage philanthropists to streamline and increase contributions to NFP organisations. |

| F. MEDIUM TO LONG TERM BENEFITS |
|---------------------------------|
| Tangible understanding of the contribution of the NFP sector and decreased costs to NFP organisations and NGOs. |

| G. PRACTICAL ISSUES TO CONSIDER |
|---------------------------------|
| Evaluation providers (Section A above) have noted the challenges and significant investments linked to creating useful and usable technology solutions to meet large-scale data capture and analysis needs. The Productivity Commission should note that whilst capture of this data may have been achieved, it is often difficult to mobilise this information for other uses (e.g. reporting and large-scale grants management). |
| H. OTHER RECOMMENDATIONS THAT CAN NOW BE IMPLEMENTED |
|------------------------------------------------------|
| DRAFT RECOMMENDATION 5.1 Measuring the contribution of the sector in the future DRAFT RECOMMENDATION 5.2 Improving comparability and usefulness of information collected DRAFT RECOMMENDATION 5.4 Improving evidence-based practice through better evaluation DRAFT RECOMMENDATION 6.2 Reducing unnecessary compliance burdens |

3. STIMULATING SOCIAL INVESTMENT

Tangible benefits linked to removing roadblocks that prevent the establishment of sustainable social enterprises and fostering social innovation.

| A. CONTEXT |
|-------------|
| Tax concessions (e.g. Deductible Gift Recipient - DGR, and Tax Concession Charity - TCC) and Private Ancillary Funds (formerly Prescribed Private Funds - PPFs) have been fundamental in promoting public and private philanthropy. Social enterprises (for-profit and non-profit) and the innovations they promote have difficulty accessing funding, particularly from philanthropic funds. These enterprises therefore try to secure traditional commercial or government funding, often with limited success. A recent innovation from the USA (also being investigated by the Singapore Government) is a hybrid corporate structure called an L3C, which allows philanthropic foundations to invest in limited liability companies that have a social mission whilst generating profits. We believe implementing an L3C or similar structure in Australia will unlock resources to fund sustainable social enterprises. Refer to Appendix A for more detailed information. |

| B. RELEVANT DRAFT RECOMMENDATIONS |
|----------------------------------|
| 7.4 The Australian Government should establish a joint working party made up of representatives of the not-for-profit, business, philanthropic and government sectors to explore obstacles to not-for-profits raising capital and evaluate appropriate options to enhance access to capital by the sector. |

| C. RATIONALE FOR IMPLEMENTING RECOMMENDATION |
|---------------------------------------------|
| Although there is a desire in Australia to embrace social innovation and social enterprise, there is no vehicle to facilitate investment in social enterprise. Much as in the USA, charitable foundations in Australia are required to disburse 5% of their assets per annum; why not allow them to contribute to (and invest in) organisations that have a clear social mission and choose to operate profitably in order to be sustainable? A joint working party can further investigate the L3C as an option for implementation in Australia, given the research that has already occurred. |

| D. TIMELINE |
|--------------|
| 1. Assemble a joint working party (overseen by the Office for NFP Sector Engagement) to explore obstacles to NFP organisations and social entrepreneurs raising capital and evaluate options to enhance access to capital by the sector - 12 months |

| E. SHORT TERM BENEFITS |
|------------------------|
| Leveraging experience from the USA, existing legal structures can be used to trial and, if successful, implement the L3C structure. Access to capital to support social innovation and social enterprise (which NFP organisations can adopt as a means to diversify income sources). |

| F. MEDIUM TO LONG TERM BENEFITS |
|---------------------------------|
| Australia will realise the employment and social benefits delivered by an industry of self-sustaining, social-outcome-focused organisations. |
| G. PRACTICAL ISSUES TO CONSIDER |
|---------------------------------|
| Charitable foundations will require support in identifying suitable social enterprises and in doing so must be prepared to accept the related risks of investing in a business. |

| H. OTHER RECOMMENDATIONS THAT CAN NOW BE IMPLEMENTED |
|------------------------------------------------------|
| DRAFT RECOMMENDATION 9.1 Promoting social innovation |

APPENDIX A - L3C Hybrid Corporate Structure

In early 2009 HSC & Company identified the work of Robert Lang in founding and helping introduce into US law the L3C hybrid corporate structure. The principal features of the Low-profit Limited Liability Company (L3C) are that it:

- Must have a socially beneficial purpose.
- Is designed to let foundations make program-related investments (PRIs) more easily.
- Enables some investors to earn a market return and hold equity in the company.
- Is not tax-exempt, and contributions to L3Cs are not tax-deductible.

Lang, a US-based philanthropic foundation chairman, recognised how existing corporate structures could be adjusted to create the L3C and used as a vehicle that philanthropic foundations could invest in - to help create sustainable, social-impact-focused organisations - as part of their annual grant-making commitment. We have been collaborating with Robert Lang to better understand the L3C structure and how the L3C might be introduced into Australia. The next major milestone in this investigation involves participating in a law symposium in Vermont, USA in February 2010. We would welcome appropriate representation by officers of the Productivity Commission at this symposium. NOTE: Also refer to 'Examples of L3Cs' at the end of our response.

ENGAGEMENT SUMMARY

A conversation with Robert Lang (CEO, L3C Advisors L3C; CEO of the Mary Elizabeth & Gordon B. Mannweiler Foundation; and creator of the L3C) and Phil Hayes-St Clair (CEO, HSC & Company)

Date: 11 September 2009
Topic: The L3C hybrid business structure for operating social enterprises

SUMMARY

A new business organizational structure will give social enterprise a new operating and funding vehicle. A law creating L3Cs (Low-profit Limited Liability Companies) as a variant form of LLC has been passed in the states of Illinois, Michigan, Vermont, Utah and Wyoming and by the Oglala Sioux and Crow Indian Nations, allowing the organization of a new hybrid corporate structure (called the L3C) for for-profit ventures that have a primary goal of achieving a socially beneficial purpose.

L3C facts:
- An L3C must have a socially beneficial purpose.
- It is designed to let foundations make program-related investments (PRIs) more easily.
- Some investors can earn a market return and hold equity in the company.
- L3Cs are not tax-exempt, and contributions to L3Cs are not tax-deductible.
- 20 other states are considering the law.

BACKGROUND

Robert Lang created the concept based on the following logic:
- Charitable foundations in the USA were disbursing funds anyway; why not allow them to contribute to (and invest in) organisations that have a clear social mission and choose to operate profitably in order to be sustainable? Under US law a PRI replaces a grant. Foundations are required to make grants equal to 5% of their assets every year.
- Although there was an active desire to embrace such social enterprises, there was no vehicle to facilitate such a relationship.
- An easy-to-use and easy-to-understand vehicle like the L3C was something that charitable foundations could embrace.
- There was no magic momentum to this; the development of the L3C concept was four years in the making.

APPROACH

1. A principle-based journey
- The less (new) law, the better - the Limited Liability Company (LLC) laws on the books in all 50 states already contained the necessary framework(s) and provide the equivalent of corporate protection to an organization that is otherwise structured in many ways like a partnership, in which the governing document is a contract known as the operating agreement.
- Think of the interaction between the charitable foundation(s) and the other investors and members as a partnership organized to operate a social enterprise which is self-sustaining.
- Don't establish a new structure; leverage the existing and most common structure (the Limited Liability Company - LLC).
- Consider the legislative path of least resistance and therefore make the L3C concept 'non-partisan', positioning it accordingly:
  - If conservative = an increase in free enterprise and a decrease in government size
  - If liberal = more funding for social problems

---

2 For the purpose of this conversation a social enterprise is defined as a for-profit entity that operates with a clear social mission and purpose. It is recognised that a social enterprise may also be a non-profit organisation.

- Tranche the investment structure and leverage the program-related investments to take the highest risk at the lowest return in order to provide opportunities for market-rate investors to participate. The result expands the pool of potential dollars available for social investment and reduces demand on charitable dollars.

2. An innovative team with a vested interest was assembled to drive the L3C concept
- Robert Lang
- Mark Owen, an attorney who was previously Director of the Exempt Organizations Division of the Internal Revenue Service (IRS) and is currently a partner in Caplin & Drysdale in Washington, DC.
- Arthur Wood, Social Finance Director for Ashoka.
- Other major supporters include John Tyler, Secretary and General Counsel of the Ewing Marion Kauffman Foundation in Kansas City (the largest US charitable foundation actively supporting entrepreneurs), and Steve Gunderson, CEO, The Council on Foundations. The community development staff of the Federal Reserve Bank have also become active supporters.

Key foreign supporters or those interested in the L3C work include:
- Penny Low - Member of the Singapore Parliament
- Joseph Anderson - Partner in Morrison & Foerster, Singapore
- Paul Martin - former PM of Canada
- Stephen Lloyd - UK - author of the UK CIC law

Many others were also involved.

3. Choke points were identified early on:
- Identification of suitable L3Cs by charitable foundations could be an issue.
- Entrepreneurs (social and otherwise) seeking to establish an L3C have to find a balance between making profit and doing good - this can be a challenge and requires an appropriate integration of business planning with social ambition.

4. Creating momentum is key
- The first state to write the L3C into law was Vermont. (Under US law, since LLCs are legal in all states and the L3C is a type of LLC, passage in Vermont made L3Cs legal in all states.)
- There are approximately 80-90 L3Cs already established.

NEXT STEPS
- Other countries, including Singapore, have started investigating how an L3C can be introduced. HSC & Company will begin conversations with key stakeholders in Singapore in November 2009.
- On February 18 & 19, 2010, the Vermont Law School Symposium will host an international panel on the subject, covering where hybrid or similar L3C-type organizations exist or are hoped to exist in other countries. Tentative attendees include Stephen Lloyd (author of the British CIC law) and Paul Martin (former PM of Canada). There is the opportunity for an appropriate party from Australia to participate.
- A short outline of several L3Cs currently being created is being compiled and will be distributed shortly.

APPENDIX B - Aggregation Incentive Program

INFORMATION BRIEF

Topic: Preliminary thinking on non-profit-sector-focused Aggregation Incentive Programs (AIP)
Date: 7 October 2009

BACKGROUND

HSC & Company has been exploring concepts relating to how similar Australian non profit sector organisations (NPOs) may be encouraged to better coordinate and leverage more efficient operating models to become more sustainable, increase delivery of social impact and reduce duplication of effort. This information brief is designed to present our initial thinking.

CONTEXT

- There is a growing appetite by:
  - NPOs to become sustainable - a key part of that equation is reducing cost whilst increasing revenue
  - Funders to support aggregation to reduce duplication, improve efficiencies and increase the (eventual social) impact of their funding
- Government does not have an approach to aggregating NPOs, largely due to the lack of a data profile on duplication
- Mergers have been achieved (and documented) in the non profit sector - largely driven by boards
- Key challenges include vested interests, a lack of strategy and deficient infrastructure to support aggregation
- Corporate organisations (e.g. Westpac) are beginning to explore 'shared services' platforms for NPO core capabilities like fundraising
- Few intermediary service firms have the required expertise or orientation to support aggregation

AGGREGATION MODELS

An aggregation model is a concept whose objective is to streamline operations and consolidate similar functions for strategic benefit. Given the diversity of organisations operating in the Australian non profit sector, it is unlikely that a 'one size fits all' approach exists. Due to an absence of contemporary market forces, the take-up of these models will require incentives (noted below). Aggregation models can take structural and/or virtual forms:

1. Shared Services
Areas along the value chain where scale benefits can be gained without losing non profit identity and founder control, e.g. accounting, HR, receiving, governance, advertising, investment vehicles, fundraising infrastructure

2. Joint Ventures (JVs)
JVs for the delivery of services, where NPOs come together to fulfil the needs of a community or cause and to access funding

3. Mergers
Create scale efficiencies where interests and intent complement each other

4. Consolidation
Coordinated rationalisation of federated NPOs
5. Umbrella structures for disaster response
Establish scale response partnerships to meet demands beyond the capability of any one entity - a forerunner to mergers and JVs

INCENTIVES

| CARROT | STICK |
|----------------------------------------------------------------------|----------------------------------------------------------------------|
| 1. Remove barriers (e.g. taxes) and streamline administration to support set-up; incentives for NPO participation - financial set-up assistance, positive discrimination regarding milestones and accreditations | 1. Set governance and sustainable-entity hurdles for future compliance |
| 2. Publish principles and guidelines for good practice for NPOs using an 'if not, why not' approach | 2. Lock non-participants out of some markets |
| 3. Put in place protections for participating entities, e.g. circumstances that trigger a right to withdraw, customer data rights | 3. Skew funding towards the desired portfolio reshape |
| 4. Incentives for corporate organisations that have developed applicable technologies and capabilities to participate, e.g. wealth platform managers, mortgage aggregators, outsourcing companies | 4. Increase transparency of metrics and benchmarks |
| 5. Support intermediaries with tailored development programs | |

OUTSTANDING CONSIDERATIONS
- Governance model and ownership structure of any shared services entities
- Key factors for success
- Key metrics and how to measure them
- Transaction structures
- Role of parties - Gov't/NPO associations/NPO leadership
- What could derail progress

END

Background

The L³C is now legal in all 50 states as a result of legislation signed into law in Vermont in April 2008, Michigan in Jan. 2009, the Crow Indian Nation in Jan. 2009, Wyoming in Feb. 2009, Utah in March 2009, the Oglala Sioux in July 2009 and Illinois in August 2009. A Vermont, Wyoming, Utah, Illinois or Michigan L³C, like a Delaware corporation, can be used anywhere. The L³C is pending in some form in Missouri, Arkansas, Montana, Oregon, Washington State, North Carolina, Maine, Massachusetts, Ohio, Tennessee, California, Colorado, Kentucky, Virginia, North Dakota, Florida, and Georgia.

The following L³Cs have either been formed or are in process. We know very little about many of the ones already formed because they have not been in touch with us. The ones listed that are still being formed are ones we are working with.

L³Cs Already Formed

Allegheny Greenworks (Pittsburgh, PA)
Consults with nonprofit organizations and companies "on green enterprises and program development."

Green Omega (Vergennes, VT)
Works with other organizations on justice issues relevant to bringing victims and offenders together to try, as much as possible, to correct wrongs caused by the crimes.

Farm Fresh for ME (Bangor, ME)
An intermediary to connect small family farms with consumer buying clubs through an online ordering system.

Hemp Amalgamated (Montpelier, VT)
Promotes a better understanding of hemp and its potential uses for medicine, food, fabric, etc.

Maine's Own Organic Milk Company (Augusta, ME)
Created to organize family-run organic dairy farms and provide for the processing, marketing, and distribution of their milk and eventually to create other organic milk products.

ParentRise (Austin, TX)
Works with single parents and their children to provide educational and other support, primarily through a Web site.
Radiant Hen Publishing (Orleans, VT)
Publishes books for children and adults that "encourage kindness to all living things" while helping promising authors and artists.

Sporting Philanthropy (Denver, CO)
Created to help professional athletes plan and carry out their charitable giving.

Zirgoflex (Norwich, VT)
Software developer that operates the Web site of OpenMuseum.org, a program of a charity called Heritance that "allows people who like museums, art, and culture to visit exhibits online and get to know other people who also like and visit museums."

L³Cs Being Formed

**Endless Sky (Deer Lodge, MT)**
The Montana Food Bank Network is creating a new company, Endless Sky L³C, to operate the Endless Sky food processing facility in Deer Lodge, Montana. The company will produce and market a retail and commercial line of fresh iconic food products. The revenue from this operation will finance the operations of the entire company, which will also process food for food banks all over Montana. Organized as an L³C, it will place mission above profit to ensure that the shelves of the food banks are well stocked.

**Endless Opportunities (Deer Lodge, MT)**
The Endless Sky facility will be located in a new industrial park. It became apparent in early planning that the entire park lacked tenants and had environmental challenges. We are now working with the state and town to organize the park as an L³C to be constructed and managed by Endless Opportunities L³C. The goal will be to expand upon the concept of the Endless Sky L³C food packing facility. We tentatively plan to have an animal processing facility which will handle all types of animals but specialize in hogs. That facility will also handle the processing of both domestic and game meat available to the Montana Food Bank Network. Although Endless Sky will help the small farmers of Montana, those farmers are restricted by both the kinds of crops they can grow and the length of the growing season. Endless Sky, of course, intends to be a year-round operation with a wide variety of products. We are going to have a state-of-the-art greenhouse from Home Town Farms, L3C. This facility will permit the sale by Endless Sky of a variety of products grown and packed fresh year round. It will also improve the ability of the Food Bank to deliver fresh vegetables and fruits to its clients year round. The fourth component of the park will be a biomass waste processing and energy generating facility that can utilize the waste and garbage from the park, the prison and the city. The plant will produce gas that will be used to power generators for electricity, heat that will be used in the greenhouses and buildings of the park, and the resulting compost will be sold to farmers. Waste water may be used in additional greenhouses to grow a special oil-producing algae for biodiesel and flowers for the cut flower market.

**Blue Earth Bistros (Atlanta, GA)**
Blue Earth Bistros will be a new concept in cafes for college campuses. Created as pop-up stores located in common areas and other convenient locations on college campuses, they will be served from a central kitchen in each region. The central kitchen will be staffed with individuals who are disadvantaged or challenged in some way, and the bistros themselves will be connected to each other worldwide via special internet connections designed to facilitate better understanding among different cultures.

**YouPharma (San Diego, CA)**
YouPharma will engage in the discovery and development of novel therapeutics for global unmet medical needs, using the power of social responsibility.
As such, YouPharma will not be competing with existing players, but rather addressing those healthcare needs not well served by the current marketplace and its participants. It will do so by creating a PRI fund for foundations and will invest in the high-risk proof-of-concept stage of product development.

**Home Town Farms (San Diego, CA)**
This is a new state-of-the-art concept in indoor, organic, urban farming, especially designed to operate facilities of all sizes to create high-end food opportunities for disadvantaged areas and populations. It targets a zero-waste, zero-carbon environment.

**Summary**
These reflect only a small number of the existing and proposed L³Cs as of October 2009. We are in the process of creating an organization for L³Cs which we hope will not only assist them as they pioneer in this new area but will permit us to provide potential L³C users with more information about the path already taken by the first few hundred.

*L³C Advisors L³C is the first L³C created in the world and was organized to help others organize and finance L³Cs.*

Social Impact Financial Network
Social Impact Development Group
Education And Marketing Group
Social Enterprise Management Group
PO Box 236, Granite Springs, New York 10527
1-914-248-8443
firstname.lastname@example.org
www.americansforcommunitydevelopment.org
A METHOD FOR CONTENT-BASED SEARCHING OF 3D MODEL DATABASES

Jiale Wang\textsuperscript{1*}, Hongming Cai\textsuperscript{2} and Yuanjun He\textsuperscript{1}

\textsuperscript{1} Department of Computer Science & Technology, Shanghai Jiaotong University, China
Email: firstname.lastname@example.org
\textsuperscript{2} School of Software, Shanghai Jiaotong University, China

ABSTRACT

With the development of computer graphics and digitalizing technologies, 3D model databases are becoming ubiquitous. This paper presents a method for content-based searching for similar 3D models in databases. To assess the similarity between 3D models, shape feature information of the models must be extracted and compared. We propose a new 3D shape feature extraction algorithm. Experimental results show that the proposed method achieves good retrieval performance with short computation time.

Keywords: Content-based retrieval, Information retrieval, 3D shape feature extraction

1 INTRODUCTION

Nowadays, 3D model databases have emerged in many applications, such as mechanical engineering, medical visualization, virtual reality, and computer animation. To facilitate the management of these databases, we need a way to search and retrieve 3D models. Traditional methods for searching multimedia data use attached information, such as textual annotation. Finding a 3D model by textual keywords suffers from the following problems:

- Text descriptions may be inaccurate, incorrect, ambiguous, or in a different language.
- Many 3D models may not have attached text annotation. Manually annotating them is tedious and error-prone.
- It is hard for users to describe a complex 3D shape using only text.

For these reasons, it is necessary to develop a method for content-based 3D model searching. Content-based searching means that a search system can automatically find 3D models similar to a query model. The search system provides a "query-by-example" interface: users submit an example model as a query, and the search system returns models ranked by their shape similarity. The key problem in content-based 3D model searching is how to extract shape feature information from 3D models effectively. Shape feature information is usually expressed as vectors (shape feature vectors). The similarity among 3D models can then be measured by computing the distance between shape feature vectors under a predefined metric. Figure 1 illustrates the workflow of a search system for 3D model databases.

Figure 1. Workflow of a search system for 3D model databases

This paper proposes a new shape feature extraction algorithm, which is an enhanced version of the traditional D2 shape distribution. We implemented a demonstration 3D model search system based on the proposed method. Experiments show that the performance of the system is satisfactory. The rest of the paper is organized as follows: related work is summarized in Section 2. In Section 3, we discuss the problem with the traditional D2 approach and describe our method in detail. Section 4 presents the demonstration 3D model search system based on our method, with experimental results. Finally, conclusions are given in Section 5.

2 RELATED WORK

In recent years, researchers have proposed many shape feature extraction algorithms. These algorithms can be broadly classified into two groups: variant and invariant (Tangelder & Veltkamp, 2004).
Because a 3D model may have an arbitrary position, size, or pose in 3D space, when using a variant shape feature, models need to be normalized for translation, scale, and rotation before assessing shape similarity. Studies have found that traditional methods for translation and scale normalization provide good search results, but methods for rotation normalization are not robust. PCA (Principal Component Analysis) is a commonly used method for rotation normalization; however, Funkhouser et al. (2003) found that some similar 3D models have different principal axes as defined by PCA. An invariant shape feature describes any transformation of a shape in the same way and does not require normalization. Because of the lack of an effective method for rotation normalization, the extraction of rotation-invariant shape features has attracted great interest.

Osada et al. (2002) proposed a rotation-invariant shape feature extraction algorithm called the D2 shape distribution. D2 describes the shape feature of a 3D model by calculating the distribution of distances between random points on its surface. D2 has many good properties, such as fast computation and compact storage. Experiments by Osada et al. demonstrate that D2 can effectively discriminate grossly dissimilar models. However, it is possible for dissimilar models to have very similar D2 histograms. To reduce this problem, this paper proposes a new D2 shape distribution that works on 3D models represented as voxel models.

3 SHAPE FEATURE EXTRACTION

Osada's D2 algorithm works on 3D mesh models. A mesh model defines a 3D object by describing its surface. There are two main steps in Osada's approach: 1) generating random points on the surface of a 3D mesh, and 2) calculating the distances between every pair of points and forming a histogram of these distances (see Figure 2). In a D2 histogram, the horizontal coordinate denotes the distance between points, and the vertical coordinate denotes the frequency given by the D2 distance distribution function. For a given distance $d$ and a point set $P$, the D2 distance distribution function at value $d$ can be expressed by the following equation:

$$D2(d) = \frac{\left| \{ (p, q) \in P \times P : \|p - q\| = d \} \right|}{|P|^2} \hspace{1cm} (1)$$

where $\|p - q\|$ denotes the Euclidean distance between $p$ and $q$, and $|\bullet|$ is the cardinality of a set.

Because Osada's D2 approach uses only a simple distance distribution to describe 3D shape, it is possible for dissimilar models to have similar D2 histograms. We can see in Figure 3 that a table and a car have very similar D2 histograms.
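A minimal sketch of the histogram-forming step (Python/NumPy; it assumes the surface points have already been sampled, and the bin count and normalization are our own choices, not prescribed by the paper):

```python
import numpy as np

def d2_histogram(points, n_bins=64, max_dist=None):
    """D2 descriptor: normalized histogram of pairwise Euclidean
    distances between sampled surface points (step 2 of Osada's method)."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(points), k=1)   # each unordered pair once
    d = dists[iu]
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, max_dist or d.max()))
    return hist / hist.sum()

# Stand-in point cloud; in practice these are points sampled from the mesh.
pts = np.random.rand(1024, 3)
print(d2_histogram(pts)[:8])
```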
To improve the discriminability of D2, we can enrich the distance information by distinguishing the inner and outer parts of the line segment between a pair of sampling points. This idea is illustrated in Figure 4, where $P_iP_j$ is the line segment between two sampling points $P_i$ and $P_j$, and $P_iG$ is the part of $P_iP_j$ that lies inside the model. The distribution of the ratio $\frac{|P_iG|}{|P_iP_j|}$ can be used as a supplement that helps the D2 algorithm filter out dissimilar models. We denote the distribution of this ratio as DIR. Figure 5 illustrates the DIR of the table and car from Figure 3. We can see that the two DIRs are obviously different. The DIR of the table contains many low ratios because the table is generally a concave model and many line segments lie outside it. In contrast, the car is rather convex, so a large number of line segments lie inside it, and its DIR is dominated by high ratios. In general, DIR provides shape information beyond that provided by D2, and we can use it to filter out dissimilar models that D2 is unable to distinguish. The dissimilarity between 3D models $A$ and $B$ can be measured by a weighted sum of D2 and DIR:

$$Dist(A, B) = \frac{w_1 D2(A, B) + w_2 DIR(A, B)}{w_1 + w_2} \hspace{1cm} (2)$$

where $w_1$ and $w_2$ are the weights for D2 and DIR. Later in this section, we discuss how to determine these weights through user feedback. $D2(A, B)$ and $DIR(A, B)$ are the $L_1$ norms of the differences between the D2 and DIR histograms, respectively:

$$D2(A, B) = |HistogramD2_A - HistogramD2_B| \hspace{1cm} (3)$$

$$DIR(A, B) = |HistogramDIR_A - HistogramDIR_B| \hspace{1cm} (4)$$

Suppose that a histogram contains $n$ bins: $h = \{v_1, v_2, ..., v_n\}$. The $L_1$ norm is then computed as follows:

$$|h_a - h_b| = \sum_{i=1}^{n} |v_{ai} - v_{bi}| \hspace{1cm} (5)$$
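Under the same assumptions as the sketch above, the combined dissimilarity of Eqs. (2)-(5) reduces to a weighted $L_1$ comparison of the two normalized histograms; a sketch, using the weights reported at the end of this section as defaults:

```python
import numpy as np

def l1(h_a, h_b):
    """Eq. (5): L1 norm between two equal-length histograms."""
    return np.abs(h_a - h_b).sum()

def dissimilarity(d2_a, dir_a, d2_b, dir_b, w1=0.63, w2=0.37):
    """Eq. (2): weighted combination of the D2 (Eq. 3) and DIR (Eq. 4) distances."""
    return (w1 * l1(d2_a, d2_b) + w2 * l1(dir_a, dir_b)) / (w1 + w2)
```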
To facilitate computing the lengths of the inner and outer line segments, we convert 3D models into voxel models. The voxel model is a type of 3D data that uses volume elements (voxels) to represent an object in discrete 3D space (Kaufman et al., 1993). A voxel is a cubic unit of volume that can be seen as the 3D counterpart of the 2D pixel, which represents a unit of area. Mesh models define only the surface of objects, so it is difficult to determine which part of a line lies inside or outside them. In contrast, a voxel model is a 3D solid whose interior and exterior are explicitly defined, making it easy to determine which part of a line is inside or outside. For surface-based models in continuous 3D space, such as mesh or B-rep models, the computer graphics community has proposed algorithms that convert them to voxel models very quickly by exploiting the hardware acceleration of display adapters. Figure 6 illustrates a 3D mesh model and its corresponding voxel model.

The voxel model also provides a way to accelerate the calculation of DIR. When calculating the length of an inner line segment, one needs to repeatedly test whether the line intersects a voxel. To accelerate this line-voxel intersection calculation, we employ octree-based spatial subdivision. An octree (Samet, 1990) is a hierarchical representation that subdivides the full voxel space into octants. When testing the intersection between a line and the voxels, the octree structure reduces the computational complexity significantly by searching through the voxel space hierarchically.

In Equation (2), we use a weighted sum to combine the two metrics D2 and DIR into a unified metric. To determine the weights $w_1$ and $w_2$, we estimate them using information from Relevance Feedback (RF). RF makes the search process an interaction between the computer and the user. In an RF-based search process, the system first retrieves similar models and returns them to the user. The user then provides feedback on the relevance of some of the retrieval results (the user marks the relevant models in the results and submits them back to the search system). Finally, the system uses the feedback information to improve its performance in the next iteration (Baeza-Yates & Ribeiro, 1999; Rui et al., 1998). $w_1$ and $w_2$ should reflect the effectiveness of D2 and DIR in retrieval, respectively. The more tightly a metric makes the known relevant objects cluster in its feature space, the more effective it is.

Suppose that $q$ denotes a query and $R$ is the set of relevant models marked by the user in the initial retrieval results. $w_1$ and $w_2$ are estimated by the following equation:

$$w_t = \frac{1}{1 + \sum_{i=1}^{n+1} \mathrm{maxdis}(r'_i)}, \qquad \mathrm{maxdis}(r'_i) = \max_{r'_j \in R', \, j \neq i} D(r'_i, r'_j) \hspace{1cm} (6)$$

where $t = 1, 2$, $R = \{r_1, r_2, \ldots, r_n\}$, and $R' = \{q\} \cup R = \{r'_1, r'_2, \ldots, r'_{n+1}\}$. For a given element $r'_i$, $\mathrm{maxdis}$ is the maximum of the distances between $r'_i$ and all other elements of $R'$ under the metric $D$, where $D$ denotes D2 or DIR (when $t = 1$, $D$ is D2; when $t = 2$, $D$ is DIR). The sum of $\mathrm{maxdis}$ over all elements of $R'$ reflects the tightness of the known relevant objects: a smaller sum indicates a tighter set, and vice versa. In our tests, the initial values of $w_1$ and $w_2$ were both set to 0.5. We then used over 200 queries with corresponding feedback sets to estimate the weights heuristically, obtaining $w_1 = 0.63$ and $w_2 = 0.37$.
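A sketch of the weight estimation in Eq. (6) (Python; `metric` is whichever histogram distance, D2 or DIR, is being scored, and, following the equation as written, the two weights are not normalized to sum to one):

```python
def estimate_weight(feedback_hists, metric):
    """Eq. (6): w_t = 1 / (1 + sum_i maxdis(r'_i)), where maxdis(r'_i) is
    the largest distance from r'_i to any other element of R' (the query
    plus the user-marked relevant models) under the given metric."""
    total = sum(
        max(metric(h_i, h_j)
            for j, h_j in enumerate(feedback_hists) if j != i)
        for i, h_i in enumerate(feedback_hists)
    )
    return 1.0 / (1.0 + total)

# w1 = estimate_weight(R_prime_d2_hists, l1)   # D2 histograms of {q} ∪ R
# w2 = estimate_weight(R_prime_dir_hists, l1)  # DIR histograms of {q} ∪ R
```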
4 EXPERIMENTAL RESULTS

We use the PSB (Princeton Shape Benchmark) (Shilane et al., 2004) to evaluate the performance of the proposed method. We compare the proposed method with three other rotation-invariant shape feature extraction algorithms: Osada's D2, the shell histogram (Ankerst et al., 1999), and sphere harmonics (Kazhdan et al., 2003). We employ the "precision-recall" curve method (Raghavan et al., 1989) to measure retrieval performance. "Precision" measures the ability of the system to retrieve only models that are relevant, and "recall" measures the ability of the system to retrieve all models that are relevant. Let $C$ be the number of relevant models in the database (namely, the number of models in the class to which the query belongs). Let $N$ be the number of relevant models actually retrieved among the top $A$ retrievals. Then, precision and recall are defined as follows:

$$\text{precision} = \frac{N}{A}, \quad \text{recall} = \frac{N}{C} \quad (7)$$

There is a trade-off between precision and recall. In recall-precision diagrams, a perfect retrieval result would produce a horizontal line at the top of the plot; otherwise, a curve closer to the upper-right corner represents better performance.

Figure 7. Precision-recall plots of the four tested methods

Figure 7 shows the precision-recall plots of the four tested methods. We can see that the precision of the proposed method is close to that of sphere harmonics and clearly better than that of the other two. Figure 8 illustrates some search results returned by our method. The model in the green box (upper left corner) is the query model; models in blue boxes are correctly retrieved models, and models in red boxes are mismatches.

Figure 8. Search results of the proposed system: (a) results for a query model of a human body; (b) results for a query model of a sword

Table 1 lists the average computation time of the four methods. The computation time of the proposed enhanced D2 method is much less than that of sphere harmonics and only a little more than that of Osada's D2 approach.

Table 1. Average computation time

| | Shell histogram | Osada's D2 | Our method | Sphere harmonics |
|----------------|-----------------|------------|------------|------------------|
| **Average time** | 2.02 | 4.11 | 4.83 | 7.02 |

5 CONCLUSION

In this paper, we present a method for searching 3D model databases. A new 3D shape feature extraction algorithm is proposed, based on the voxel representation of 3D models. We implemented a demonstration search system based on the proposed shape feature extraction algorithm. Experiments show that the system achieves good performance.

6 ACKNOWLEDGEMENTS

This work is supported by the National Natural Science Foundation of China (No. 60603080) and the Aeronautical Science Foundation of China (No. 2007ZG57012).

7 REFERENCES

Ankerst, M., Kastenmuller, G., Kriegel, H.P., & Seidl, T. (1999) 3D shape histograms for similarity search and classification in spatial databases. In *Proceedings of the International Symposium on Spatial Databases (SSD)*, China, pp. 207-226.

Baeza-Yates, R., & Ribeiro, B. (1999) *Modern information retrieval*. Addison Wesley.

Funkhouser, T., Min, P., Kazhdan, M., Chen, J., Halderman, A., Dobkin, D., & Jacobs, D. (2003) A search engine for 3D models. *ACM Trans. Graph.*, 22(1), pp. 83-105.

Kaufman, A., Cohen, D., & Yagel, R. (1993) Volume graphics. *IEEE Computer*, 26(7), pp. 51-64.

Kazhdan, M., Funkhouser, T., & Rusinkiewicz, S. (2003) Rotation invariant spherical harmonic representation of 3D shape descriptors. In *Proceedings of the Symposium on Geometry Processing*, June 2003.

Osada, R., Funkhouser, T., Chazelle, B., & Dobkin, D. (2002) Shape distributions. *ACM Trans. Graph.*, 21(4), pp. 807-832.

Raghavan, V., Bollmann, P., & Jung, G.S. (1989) A critical investigation of recall and precision as measures of retrieval system performance. *ACM Trans. Inf. Syst.*, pp. 205-229.

Rui, Y., Huang, T.S., Ortega, M., & Mehrotra, S. (1998) Relevance feedback: A power tool in interactive content-based image retrieval. *IEEE Transactions on Circuits and Systems for Video Technology*, 8(5), pp. 644-655.

Samet, H. (1990) *The design and analysis of spatial data structures*. Addison-Wesley Publishing Company.

Shilane, P., Min, P., Kazhdan, M., & Funkhouser, T. (2004) The Princeton Shape Benchmark. In *Proceedings of Shape Modeling International*, Italy, pp. 167-179.

Tangelder, J. & Veltkamp, R.C. (2004) A survey of content based 3D shape retrieval methods. In *Proceedings of Shape Modeling International*, Italy, pp. 145-156.
A transformation-based combination framework for approximate reasoning

WPI-CS-TR-98-22

Sergio A. Alvarez
Department of Computer Science
Worcester Polytechnic Institute\(^1\)
Worcester, MA 01609
firstname.lastname@example.org

Abstract

There are many contexts in which several quantitative measures that provide information about a given phenomenon are available and it is desired to combine these measures into a single measure that uses the information encoded in each of them. Examples include knowledge aggregation in knowledge-based systems [4], [16], lateralization measurement in neurobiology [2], [5], and relevance ranking in information retrieval. Mostly ad-hoc approaches are currently in use for this purpose in different domains. The objective of this paper is to introduce a rational framework that systematically provides families of combination operators for the integration of disparate measures in a variety of situations. Our approach uses a single canonical form to produce a multitude of different combination functions by choosing different geometric frames of reference in the space of measurement values. We show that previously used combination functions may be obtained through our approach in a natural way, that they may be easily modified and generalized for increased flexibility, and that new combination operators may be systematically generated. We provide a characterization of the differentiable combination functions that are expressible via conjugacy in terms of the canonical form and give an algorithm to construct an appropriate reference frame if one exists. We also address the asymptotic behavior of the combination functions produced by our framework when the number of source measures grows without bound.

\(^1\)Portions of this paper are based on research carried out by the author at the Center for Nonlinear Analysis at Carnegie Mellon University. The author acknowledges partial support provided by the U.S. Army Research Office and the National Science Foundation. The author wishes to thank Bruce Buchanan for helpful discussions regarding knowledge revision in the expert system MYCIN.

Introduction

The issue of combination or aggregation of knowledge sources is central to many areas of applied science and engineering. Consider, for example, the problem of knowledge revision in belief systems. Various approaches to this problem in the presence of uncertainty are elegantly subsumed by the Shenoy-Shafer valuation network theory [16], in which a network of valuations encodes approximate knowledge about the joint values of collections of system variables and knowledge revision is reduced to the two basic operations of marginalization and combination of valuations. Specific forms of the combination functions are provided within formalisms such as Bayesian probability and Dempster-Shafer belief theory.
In Bayesian probability the valuations are true probabilities and combination proceeds according to Bayes' rule. The simplest version of this combines probabilities $p$ and $q$ by measuring the probability of the union of the corresponding events, assuming independence between these events:

$$f(p, q) = p + q - pq \quad (1)$$

In the Dempster-Shafer theory the valuations are so-called basic probability assignments and combination follows Dempster's rule [15]. More ad-hoc approaches have also been proposed, as in the framework of certainty factors introduced into rule-based expert systems by the creators of the medical diagnosis system MYCIN [4]. In this method the valuations are numbers between $-1$ and $1$ called certainty factors, which represent confidence levels about both facts and inference rules. The MYCIN combination function takes two certainty factors $c_1$ and $c_2$ of different signs and combines them into a single certainty factor $c$ as follows:

$$c = \frac{c_1 + c_2}{1 + \min(c_1, c_2)} \quad (2)$$

Because of the constraint on the signs of the $c_i$, this measure may be described as a difference measure rather than a sum measure like that in Eq. 1. Such difference measures are also required in the area of anatomical and functional lateralization measurement in biology [12], [5]. For example, in studying a bihemispheric brain one is interested in assessing the degree of asymmetry of the patterns of organization and functionality of the system. If one has access to two individual measures representing the competence of each of the hemispheres on some task of interest, then one may seek to combine these measures into a single measure of the lateral dominance of one hemisphere over the other as regards the given task. Related examples include the measurement of directional asymmetry in experimental psychology [6], [10] and in high energy physics [7].

Prior work in the above-mentioned areas has tended to use simple, ad-hoc measures of directional asymmetry, such as the standard arithmetic difference of the given unilateral measures. In the present paper we develop a systematic approach for generating numerical combination functions and related difference measures. We will show that several previously used measures are subsumed by our framework and we will propose mechanisms that yield rational generalizations and modifications of these measures as well as completely new ones. Our framework is based on thinking of different measures as corresponding to the same canonical form viewed in different geometric frames on the space of measurement values. We consider measures of sum and difference type interchangeably by allowing sign changes in the arguments. For concreteness, we now phrase our fundamental postulate in terms of combination (sum) functions only.
**Postulate (existence of a canonical form).** *Combination functions should reduce to the standard arithmetic sum in a suitably constructed frame.*

In words, given an admissible combination function $f : V \times V \to V$, there should exist a suitable choice of *frame transformation* $\beta$ such that we have a commutative diagram as shown below, where $+$ denotes the usual arithmetic sum operator on the real line $\mathbb{R} = \beta(V)$:

\[
\begin{array}{ccc}
V \times V & \xrightarrow{\beta \times \beta} & \beta(V) \times \beta(V) \\
\downarrow f & & \downarrow + \\
V & \xrightarrow{\beta} & \beta(V)
\end{array}
\]

Equivalently, the combination function $f$ should satisfy

\[
f \left( \beta^{-1}(a), \beta^{-1}(b) \right) = \beta^{-1}(a + b) \tag{3}
\]

At first sight, the above may seem like an odd requirement. In the present paper we aim to show that the canonical form postulate is not only natural, being satisfied by commonly used combination operators already in existence and by slight variations of them, but also constitutes a powerful source of new combination operators. In particular, through this unification, our new framework based on the canonical form and frame transformations provides a much needed theoretical foundation for combination operators.

**Example 1.** Consider the following simple probabilistic combination function, which is often used for aggregation of measures of uncertainty in knowledge-based systems:

\[
f(p, q) = p + q - pq \tag{4}
\]

Analysis shows that one may rewrite the function of Eq. 1 in the form given in Eq. 3, where the transformation $\beta : [0, 1] \to [0, \infty]$ that defines the "normalizing frame" is given by

\[
\beta(x) = \log \left( \frac{1}{1 - x} \right) \tag{5}
\]

Indeed, the inverse of the frame transformation is

\[
\beta^{-1}(y) = 1 - e^{-y} \tag{6}
\]

and by direct computation using Eq. 1 and Eq. 6 we confirm that Eq. 3 holds:

\[
\begin{align*}
f(\beta^{-1}(p), \beta^{-1}(q)) &= \beta^{-1}(p) + \beta^{-1}(q) - \beta^{-1}(p)\beta^{-1}(q) \\
&= 1 - (1 - \beta^{-1}(p))(1 - \beta^{-1}(q)) \\
&= 1 - e^{-p}e^{-q} \\
&= \beta^{-1}(p + q) \tag{7}
\end{align*}
\]

The present paper provides, as part of a coherent theory, a method that allows one to construct an appropriate frame transformation $\beta$ as in Eq. 5 directly from the combination function $f$. Related results in the case of difference measures only were obtained in [1].
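As a quick numerical sanity check (our own sketch, not part of the original text; all names are illustrative), Eq. 3 can be verified directly for the combination function of Eq. 4 and the frame transformation of Eq. 5:

```python
import math
import random

def f(p: float, q: float) -> float:
    """Probabilistic combination function of Eq. 4."""
    return p + q - p * q

def beta(x: float) -> float:
    """Normalizing frame transformation of Eq. 5."""
    return math.log(1.0 / (1.0 - x))

def beta_inv(y: float) -> float:
    """Inverse frame transformation of Eq. 6."""
    return 1.0 - math.exp(-y)

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(0, 5), random.uniform(0, 5)
    # Eq. 3: f(beta^{-1}(a), beta^{-1}(b)) == beta^{-1}(a + b)
    assert math.isclose(f(beta_inv(a), beta_inv(b)), beta_inv(a + b))
```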
**Overview of the paper**

We begin the paper by presenting a set of axioms which state the properties required for a combination function to be admissible. No assumptions are made about the particular method used to construct the functions at this point. We then study the class of combination functions defined from the canonical form via frame transformations on the range of the valuations, as in the above commutative diagram and Eq. 3. We determine what properties a frame transformation must satisfy in order for the associated combination function to be admissible. We show that admissible frame transformations may be described "microscopically" in terms of a Riemannian metric associated with the *subjective difference measure* obtained from the frame transformation. We give examples of admissible frame transformations and the combination functions obtained from them. We then present a general method to extract a suitable normalizing frame transformation directly from a given combination function, as was done for the above example in Eqs. 5 and 7. We conclude by describing the asymptotic behavior of aggregate values obtained via our transformation-based framework in the presence of an unbounded number of sources of information.

## 1 Admissible combination and difference functions

In this brief section we give axioms for the binary operations that we are interested in studying. The basic notion is that of a *combination function*, which is a generally nonlinear function that aggregates two different measurements into a single one. The simplest possible combination function is the arithmetic operation of addition. Just as addition yields subtraction by changing the sign of one of the arguments, any combination function gives rise to a *difference measure* in the same way. We give equivalent axioms for both combination functions and difference measures. One version or the other will typically be more immediately useful in a given context. For example, in lateralization measurement in computational neurobiology [2] one uses difference measures, while in knowledge aggregation in knowledge-based systems it is more natural to use combination functions.

**Definition 1.1.** A function $\oplus : [-1,+1] \times [-1,+1] \rightarrow [-1,+1]$ is an *admissible combination function* if and only if it satisfies the following axioms:

**Commutativity** $p \oplus q = q \oplus p$

**Monotonicity** $(\cdot) \oplus q$ is an increasing function for each $q$

**Boundary values**
\[
0 \oplus q = q, \quad 1 \oplus q = 1
\]

**Definition 1.2.** We define the *subjective difference measure* \( \ominus \) associated with the combination operator \( \oplus \) to be the operator defined as follows:

\[
p \ominus q = p \oplus (-q) \tag{8}
\]

The operator \( \ominus \) is said to be *symmetric* if it satisfies

\[
q \ominus p = -(p \ominus q) \tag{9}
\]

It is clear that \( \ominus \) is symmetric if and only if the associated combination operator \( \oplus \) satisfies

**Belief / disbelief symmetry** \( (-p) \oplus (-q) = -(p \oplus q) \)

**Example 2.** The probabilistic combination operator given in the Example that appears in the Introduction is admissible in the sense of the above definition. The commutativity property clearly holds for this operator. Also, by rewriting the operator in the form

\[
p \oplus q = p(1 - q) + q,
\]

it becomes apparent that \( p \oplus q \) increases as \( p \) increases if \( q \) is held fixed. Finally, the boundary values for the probabilistic combination operator are given by:

\[
0 \oplus q = 0(1 - q) + q = q, \quad 1 \oplus q = 1(1 - q) + q = 1
\]

This proves admissibility as claimed.

**Example 3.** The MYCIN combination function is admissible. Recall that this combination function is defined by:

\[
p \oplus q = \frac{p + q}{1 + p \wedge q},
\]

where \( p \wedge q \) denotes the minimum of the two numbers \( p \) and \( q \). The properties of commutativity and boundary values are easy to see. Verification of the monotonicity property is conceptually simple but requires an analysis by cases. Assume that \( q \) is fixed and that \( 1 \geq p' \geq p \geq 0 \). We must show that \( p' \oplus q \geq p \oplus q \). The difference \( p' \oplus q - p \oplus q \) equals

\[
p' \oplus q - p \oplus q = \frac{p' + q}{1 + p' \wedge q} - \frac{p + q}{1 + p \wedge q} = \frac{p' - p + (p \wedge q)(p' + q) - (p' \wedge q)(p + q)}{(1 + p' \wedge q)(1 + p \wedge q)} \tag{10}
\]

• Case 1: \( q \leq p \leq p' \). Then \( p \land q = q = p' \land q \), and the right-hand side of Eq.
10 becomes \[ \frac{p' - p + (p \land q)(p' + q) - (p' \land q)(p + q)}{(1 + p' \land q)(1 + p \land q)} = \frac{p' - p + q(p' + q) - q(p + q)}{(1 + q)^2} \] \[ = \frac{(p' - p)(1 + q)}{(1 + q)^2} \geq 0 \] • Case 2: \( p \leq q \leq p' \). Then \( p \land q = p \) and \( p' \land q = q \), so in Eq. 10 we have \[ \frac{p' - p + (p \land q)(p' + q) - (p' \land q)(p + q)}{(1 + p' \land q)(1 + p \land q)} = \frac{p' - p + p(p' + q) - q(p + q)}{(1 + p' \land q)(1 + p \land q)} \] \[ = \frac{p' - p + pp' - q^2}{(1 + q)(1 + p)} \] \[ \geq \frac{p' - p + pp' - (p')^2}{(1 + q)(1 + p)} \] \[ = \frac{(p' - p)(1 - p')}{(1 + q)(1 + p)} \geq 0 \] • Case 3: \( p \leq p' \leq q \). Then \( p \land q = p \) and \( p' \land q = p' \), and Eq. 10 yields: \[ \frac{p' - p + p(p' + q) - p'(p + q)}{(1 + p' \land q)(1 + p \land q)} = \frac{p' - p + pq - p'q}{(1 + p')(1 + p)} \] \[ = \frac{(p' - p)(1 - q)}{(1 + p')(1 + p)} \geq 0 \] This concludes the verification of the monotonicity property and thus establishes that the MYCIN combination function is admissible in the sense of Definition 1.1. In the next section we develop a framework that incorporates combination functions similar to those considered in the preceding examples and that yields new combination functions systematically. ## 2 The transformation framework As explained in the Introduction, our viewpoint is that generation of combination functions is equivalent to the construction of suitable frame transformations mapping the range of the valuations into the extended real number line \([-∞, +∞]\). The intuition behind this viewpoint is that a combination operator is really just the standard arithmetic sum viewed through the warped glasses of the frame transformation. Mathematically, each admissible choice of a frame transformation induces a pullback to the valuation interval (which we will assume is \([-1, 1]\)) of the standard vector space structure of the real numbers. Addition pulls back to a combination function and scalar multiplication pulls back to an operation which controls what we call the degree of skepticism of the members of the resulting family of combination functions. We develop the above concepts in the next few sections. We assume for simplicity that all valuations take values in the interval \([-1, 1]\). More general ranges of values can be dealt with by performing a straightforward preliminary symmetrization step as in [1]. 2.1 Combination functions as nonlinear sums We propose to consider as a combination function on the normalized measurement interval \([-1, +1]\) the binary operation \(\oplus_\beta\) on \([-1, +1]\) that is conjugate to the standard addition operation \(f(y_1, y_2) = y_2 + y_1\) on the interval \([-\infty, +\infty]\) via an appropriate frame transformation \(\beta\) from \([-1, +1]\) to \([-\infty, +\infty]\); we assume that \(\beta\) is an invertible and increasing map from \([-1, +1]\) onto \([-\infty, +\infty]\). 
In other words, we require that the diagram shown below be commutative, where \(+\) denotes the usual arithmetic sum operator on \((-\infty, +\infty)\): \[ (-1, +1) \times (-1, +1) \xrightarrow{\beta} (-\infty, +\infty) \times (-\infty, +\infty) \] \[ \downarrow \oplus_\beta \quad \quad \quad \quad \quad \quad \downarrow + \] \[ (-1, +1) \xrightarrow{\beta} (-\infty, +\infty) \] Equivalently, the combination function \(\oplus_\beta\) on \([-1, +1]\) is defined by \[ a \oplus_\beta b = \beta^{-1}(\beta(a) + \beta(b)) \] (11) Visually, the frame transformation \(\beta\) deforms the standard valuation interval \([-1, +1]\) into the valuation range \([-\infty, +\infty]\). Each point \(x\) of \([-1, +1]\) is mapped to a corresponding point \(\beta(x)\) of the interval \([-\infty, +\infty]\). Pairs of points are combined in \([-1, +1]\) in such a way that the result is the point that is mapped by the frame transformation \(\beta\) into the arithmetic sum of the images of these points. Different frame transformations define different deformations and thus lead to different combination functions, with the exception that frame transformations that are constant multiples of one another lead to the same combination function (c.f. the proof of Theorem 3.1). 2.2 Admissible frame transformations We now consider the question of determining the properties that must be satisfied by a frame transformation \(\beta : [-1, +1] \to [-\infty, +\infty]\) so that the combination function associated to \(\beta\) via conjugation as in Eq. 11 is admissible in the sense of Definition 1.1. Such a mapping \(\beta\) is called an *admissible frame transformation*. **Proposition 2.1.** A mapping \(\beta : [-1, +1] \to [-\infty, +\infty]\) is admissible if and only if it is increasing and satisfies the boundary conditions \(\beta(0) = 0\), \(\beta(+1) = +\infty\). **Proof.** Recall the definition of \(\oplus\) in terms of \(\beta\) from Eq. 11: \[ p \oplus q = \beta^{-1}(\beta(p) + \beta(q)) \] This definition assumes that \(\beta\) is an invertible mapping from \([-1, +1]\) to \([-\infty, +\infty]\). Thus, \(\beta\) must be either strictly increasing or strictly decreasing. We will now prove the necessary boundary conditions \(\beta(0) = 0\), \(\beta(1) = \infty\), which imply that \(\beta\) is increasing. Letting \(q = 0\) above, we have: \[ \beta^{-1}(\beta(p)) = p = p \oplus 0 = \beta^{-1}(\beta(p) + \beta(0)) \] This equation holds for all values of $p$ if and only if $\beta(0) = 0$. Next, let $q = 1$ above. Then we have: $$\beta^{-1}(\beta(1)) = 1 = p \oplus 1 = \beta^{-1}(\beta(p) + \beta(1))$$ This equation is equivalent to $\beta(+1) = +\infty$. Thus, we have proved that admissibility is equivalent to the boundary conditions given in the statement of the Proposition. It is straightforward to interpret the commutativity and belief / disbelief symmetry axioms for the corresponding combination function (as given following Definition 1.1) in terms of the frame transformation $\beta$, as we now show. **Proposition 2.2.** An admissible frame transformation $\beta : [-1, +1] \to [-\infty, +\infty]$ yields an associated combination function that satisfies the belief/disbelief symmetry property if and only if $\beta$ has odd symmetry about 0, i.e. $\beta(-x) = -\beta(x)$. **Proof.** Let $q = -p$. 
Then assuming belief / disbelief symmetry and commutativity:
$$(p \oplus (-p)) = -((-p) \oplus p) = -(p \oplus (-p)),$$
so that
$$p \oplus (-p) = 0,$$
and therefore, using the definition of $\oplus$:
$$\beta^{-1}(\beta(p) + \beta(-p)) = 0 \quad (12)$$
Applying $\beta$ to both sides of this equation we see that
$$\beta(p) + \beta(-p) = \beta(0), \quad (13)$$
and letting $p = 0$ in particular it follows that
$$\beta(0) = 0$$
Eq. 13 now yields the desired conclusion that $\beta$ has odd symmetry about 0. Conversely, if we know that $\beta$ has odd symmetry about 0 then so does its inverse $\beta^{-1}$, and we see by Eq. 11 that the corresponding combination function $\oplus$ is commutative and exhibits belief / disbelief symmetry. This completes the proof of the Proposition.

### 2.3 The pulled-back metric

If one considers the subjective difference measure $\ominus$ as defined in Eq. 8, one may view the analog of Eq. 11 defining the combination function $\oplus$ via the frame transformation $\beta$ as involving two distinct steps. In the first step, the pair $(a, b)$ is mapped to the difference $\beta(a) - \beta(b)$, which is simply the *signed* version of the pullback to $[-1, 1]$ via $(\beta, I_{\mathbb{R}})$ of the Euclidean metric on the real line $\mathbb{R}$. In the second step, this signed distance function is pulled back to a metric on $[-1, +1]$ via $\beta$. Explicitly, the pulled-back metric referred to here is given by:
$$d(a, b) = |\beta(a) - \beta(b)| \quad (14)$$
Assuming that $\beta$ is differentiable, the pulled-back metric is a Riemannian metric (see, e.g., [11]) on $[-1,+1]$ with length element $ds$ given by:
$$ds = \beta'(x)dx \quad (15)$$
Observe that since $\beta(0) = 0$ by Proposition 2.1, the frame transformation $\beta$ may be expressed in terms of the blown-up metric $ds = \beta'(x)dx$ quite simply:
$$\beta(x) = \int_0^x \beta'(u)\,du \quad (16)$$
Thus, the frame transformation $\beta$ and the blown-up metric $\beta'(x)dx$ are completely equivalent: given either one of the two, the other can be constructed without difficulty. Since the frame transformation leads directly to the corresponding combination function, this implies that the combination function may also be constructed from the metric. In section 3 we will show that the metric may be constructed from the combination function (Theorem 3.1). Together with the above comments, this will show that the three basic objects of our theory, the combination function, the frame transformation, and the metric, are completely equivalent, so that if one of the three is specified then the other two may be constructed from it.

### 2.4 Nonlinear scaling and weighted combinations

Given a combination function $\oplus : [-1,+1] \times [-1,+1] \rightarrow [-1,+1]$ obtained via a blow-up transformation $\beta : [-1,+1] \rightarrow [-\infty,+\infty]$, a new combination function is obtained by letting the group of scalings $x \mapsto tx$ for $t \in \mathbb{R}^+$ act on $[-1,+1]$ via conjugation by the blow-up transformation $\beta$. Thus we have the commutative diagram shown below:

$$\begin{array}{ccc} [-1,+1] & \xrightarrow{\beta} & [-\infty,+\infty] \\ \downarrow{\beta^{-t}} & & \downarrow{x \mapsto tx} \\ [-1,+1] & \xrightarrow{\beta} & [-\infty,+\infty] \end{array}$$

The collection of pullbacks $\beta^{-t}$ forms a group of nonlinear scalings of the measurement interval $[-1,+1]$.
The pulled-back scaling by $t$ is given by:
$$(\beta^{-t})x = \beta^{-1}(t\beta(x)) \quad (17)$$
If we let the pulled-back scaling act on the combination function $\oplus_\beta$ on $[-1,+1]$ conjugate to the sum operator on $[-\infty,+\infty]$ via $\beta$, we obtain the following new combination function:
$$p \oplus_t q = \beta^{-1}\left(t\beta\left(\beta^{-1}(\beta(p) + \beta(q))\right)\right) = \beta^{-1}\left(t(\beta(p) + \beta(q))\right) \quad (18)$$
The corresponding blown-up metric on $[-1,+1]$ is:
$$ds = t\beta'(x)dx \quad (19)$$
We note that the scaled $t$-version of the combination function fails to be associative unless $t = 1$. Furthermore, it is not admissible in the sense of Definition 1.1, as it fails to satisfy the boundary conditions $p \oplus_t 0 = p$, $p \oplus_t 1 = 1$. However, nonlinear scaling is quite useful in providing a scale of functions parametrized according to their *degree of skepticism*. By the latter term we are referring to the weight accorded to new information. This can be measured by comparing the a priori value $p$ with the quantity
$$p \oplus_t 0 = \beta^{-1}(t\beta(p)) \tag{20}$$
which is the confidence assigned to the certainty level $p$ by the combination function $\oplus_t$. The right-hand side of Eq. 20 is simply the result of scaling $p$ by $t$ as viewed in the frame defined by the transformation $\beta$ (cf. Eq. 17). We now define the degree of skepticism as the fraction of the confidence level that is rejected by the combination function $\oplus$. Notice that any combination function that satisfies the boundary condition $p \oplus 0 = p$ required for admissibility will automatically have a marginal skepticism of 0.

**Definition 2.1.** The *marginal skepticism* of a combination function $\oplus$ is the quantity
$$\sigma(\oplus) = \lim_{p \to 0} \left(1 - \frac{p \oplus 0}{p}\right) \tag{21}$$

The value of $t$ determines the marginal skepticism of the scaled combination function $\oplus_t$ in the following very simple way.

**Proposition 2.3.**
$$\sigma(\oplus_t) = 1 - t \tag{22}$$

**Proof.** By definition of the marginal skepticism $\sigma$ we have
$$\sigma(\oplus_t) = 1 - \frac{d}{dp}\Big|_{p=0} (p \oplus_t 0)$$
It suffices to compute the derivative that appears on the right-hand side of this equation. Using the fact that $\beta(0) = 0$ (by Proposition 2.1) one finds:
$$\frac{d}{dp}\Big|_{p=0} \left(\beta^{-1}(t\beta(p))\right) = \left(t \frac{\beta'(p)}{\beta'(\beta^{-1}(t\beta(p)))}\right)\Big|_{p=0} = t$$
This concludes the proof of the Proposition.

Observe that the resulting skepticism in Proposition 2.3 is independent of the choice of frame transformation $\beta$. For $t = 1$ the skepticism is 0: confidence estimates are accepted at face value. Values of $t$ greater than 1 yield negative skepticism, i.e. the combination function $\oplus_t$ amplifies confidence estimates. Values of $t$ less than 1 yield skeptical combination functions that accept only a fractional portion of an incoming confidence estimate. We will say more below about the level of skepticism in connection with the rate at which consensus is attained in the presence of multiple sources of information.

Analogously, one may consider the nonlinear conjugated versions of linear combinations. In this way, one obtains operators such as the following:
\[ p \oplus_{s,t} q = \beta^{-1}(s\beta(p) + t\beta(q)) \tag{23} \]
If the parameters \( s \) and \( t \) are chosen to satisfy \( 0 \leq s \leq 1, \ 0 \leq t \leq 1, \ s + t = 1 \), then Eq.
23 yields a nonlinear version of the convex combination operator \((x, y) \mapsto sx + ty\). Certain properties of convex combinations are shared by the nonlinear version. For example, one recovers one argument or the other as the parameter \( s \) approaches one of its limiting values: \[ p \oplus_{s,1-s} q \longrightarrow q \text{ as } s \rightarrow 0 \] \[ p \oplus_{s,1-s} q \longrightarrow p \text{ as } s \rightarrow 1 \] Intermediate values of the weight parameter \( s \) yield other combinations of \( p \) and \( q \); the closer \( s \) is to 0, the lower the weight accorded to \( p \) will be, while if \( s \) is close to 1 then \( p \) will be weighted more heavily than \( q \) in the combination. Notice that although this behavior is shared by the standard convex combination operators, the standard operators fail to satisfy the boundary conditions \( 0 \oplus q = q \) and \( 1 \oplus q = 1 \). The new weighted nonlinear operators should be useful for purposes such as the combination of relevance ratings in information retrieval and the combination of preference ratings in recommendation systems (collaborative filtering). In such contexts the weights \( s \) and \( t \) may be used to give higher credence to certain information sources over others, based perhaps on prior experience. ### 2.5 Some admissible combination functions **Example 2.1 (Inverse hyperbolic tangent frame \( \beta(x) = \tanh^{-1}(x) \)).** Using the fact that \[ \tanh^{-1}(x) = \frac{1}{2} \log \left( \frac{1+x}{1-x} \right) \] (24) we see that if we choose the frame transformation \( \beta \) to be the function \( \tanh^{-1} \) in Eq. 11 then we obtain the following very simple expression for the associated combination function: \[ p \oplus q = \tanh \left( \tanh^{-1} p + \tanh^{-1} q \right) = \frac{p+q}{1+pq} \] (25) Nonlinear scaling by \( t \) as in Eq. 17 is given for the present choice of \( \beta \) by: \[ (\beta^{-t}) x = \frac{(1+x)^t - (1-x)^t}{(1+x)^t + (1-x)^t} \] (26) and it follows from Eq. 24 and from the identity for the hyperbolic tangent of a sum contained in Eq. 25 that the combination function of Eq. 25 embeds as the case \( t = 1 \) of the family: \[ p \oplus_t q = \tanh \left( t \left( \tanh^{-1}(p) + \tanh^{-1}(q) \right) \right) = \frac{\left( \frac{1+p}{1-p} \right)^t - \left( \frac{1-q}{1+q} \right)^t}{\left( \frac{1+p}{1-p} \right)^t + \left( \frac{1-q}{1+q} \right)^t} \] (27) The pulled back Riemannian metric is given as in Eq. 15 by: \[ ds = t \left( \tanh^{-1} \right)'(x) dx = \frac{tdx}{1 - x^2} \] (28) Nonlinear weighting leads to the operators \[ p \oplus_{s,t} q = \tanh \left( s \tanh^{-1}(p) + t \tanh^{-1}(q) \right) = \frac{\left( \frac{1+p}{1-p} \right)^s - \left( \frac{1-q}{1+q} \right)^t}{\left( \frac{1+p}{1-p} \right)^s + \left( \frac{1-q}{1+q} \right)^t} \] (29) As shown in [1], the inverse hyperbolic tangent frame transformation admits very interesting interpretations in terms of probability, Dempster-Shafer evidence theory, and the special theory of relativity. 
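As a concrete check of Example 2.1 (our own sketch, not part of the original text), the conjugacy of Eq. 11 and the scaled family of Eq. 18 can be exercised numerically for the inverse hyperbolic tangent frame; the closed form $(p+q)/(1+pq)$ of Eq. 25 should be reproduced exactly up to floating-point error:

```python
import math

def beta(x: float) -> float:
    """Inverse hyperbolic tangent frame transformation (Eq. 24)."""
    return math.atanh(x)

def beta_inv(y: float) -> float:
    return math.tanh(y)

def combine(p: float, q: float, t: float = 1.0) -> float:
    """Transformation-based combination of Eq. 18:
    p (+)_t q = beta^{-1}(t * (beta(p) + beta(q)))."""
    return beta_inv(t * (beta(p) + beta(q)))

p, q = 0.6, 0.3
assert abs(combine(p, q) - (p + q) / (1 + p * q)) < 1e-12  # Eq. 25
print(combine(p, q, t=0.5))  # a skeptical (t < 1) combination
```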
**Example 2.2 (Tangent frame \( \beta(x) = \frac{2}{\pi} \tan(\frac{\pi}{2} x) \)).** This choice yields the following family of combination functions: \[ p \oplus_t q = \frac{2}{\pi} \tan^{-1} \left( \frac{t \sin(\frac{\pi}{2}(p + q))}{\cos(\frac{\pi}{2} p) \cos(\frac{\pi}{2} q)} \right) \] (30) The pulled back metric on the standard interval \([-1, +1]\) is: \[ ds = \frac{tdx}{\cos^2(\frac{\pi}{2} x)} \] (31) A significant difference between the tangent frame transformation considered here and the inverse hyperbolic tangent frame transformation of the preceding Example lies in their asymptotic behavior. The values of the hyperbolic tangent approach \(+1\) exponentially fast as the argument approaches \(+\infty\). On the other hand, the values of \(2/\pi\) times the arctangent of \(y\) approach the limiting value \(+1\) at the rate \(1/y\) as \(y \to +\infty\). We show below in Proposition 4.1 that this difference in asymptotic rates leads to a corresponding difference in the rates at which the combination functions based on these frame transformations aggregate values produced by a large number of source measures. ### 3 Recovering the frame transformation from the combination function In this section we address the issue of determining whether a given combination function \(\oplus\) that is admissible in the sense of Definition 1.1 is expressible via some frame transformation \(\beta\) as in Eq. 11. Our solution to this problem may be seen as a two-step process. We first show how to construct a special candidate frame transformation \(\beta_\oplus\) directly from the original combination function \(\oplus\). In order to determine whether \(\oplus\) is transformation-based, one merely needs to check whether it is expressible in terms of this single special frame transformation $\beta_\oplus$. The second step of our process provides a method for checking whether $\oplus$ is expressible via $\beta_\oplus$. Our frame transformation recovery process is useful from a practical point of view since it provides an explicit method for constructing a frame transformation that yields a given combination function. With such a frame transformation in hand, one may proceed to generalize the original combination function by using nonlinear scaling operations as described in the preceding sections. Furthermore, our results are interesting from a theoretical viewpoint, as they show the equivalence of three basic objects of our theory: the combination function $\oplus$, the frame transformation $\beta$, and the blown-up metric $\beta'(x)dx$. A frame transformation $\beta$ may easily be expressed in terms of the corresponding blown-up metric $ds = \beta'(x)dx$ as in Eq. 16. The following result shows that the metric $\beta'(x)dx$ and the frame transformation $\beta$ may be recovered (modulo a scale factor) from the combination function $\oplus$. **Theorem 3.1.** Let $(p,q) \mapsto p \oplus q$ be a continuously differentiable combination operator such that $p \oplus z(p) = 0$ for some function $p \mapsto z(p)$. Let $\beta$ denote an arbitrary frame transformation. Then the following statements are equivalent: 1. $\oplus$ is conjugate to the arithmetic sum operator via the frame transformation $\beta$ 2. $\beta$ is of the form $C\beta_\oplus$, where $C$ is a nonzero constant and $\beta_\oplus$ is the special frame transformation defined by: $$\beta_\oplus(p) = \int_0^p (\partial_1 \oplus)(x,z(x)) \ dx \tag{32}$$ 3. 
$\oplus$ is conjugate to the arithmetic sum operator via the frame transformation $\beta_\oplus$ as given in Eq. 32

4. The composite function $\phi := \beta_\oplus \circ \oplus$ satisfies the partial differential equation
$$\partial_1 \partial_2 \phi = 0 \tag{33}$$

**Proof.**

- ((1) implies (2)): If $\oplus$ is conjugate to $+$ via $\beta$, then we must have:
$$\beta(p \oplus q) = \beta(p) + \beta(q) \tag{34}$$
Taking partial derivatives with respect to $p$ we obtain:
$$\beta'(p \oplus q)(\partial_1 \oplus)(p,q) = \beta'(p) \tag{35}$$
Letting $q = z(p)$, and observing that $p \oplus z(p) = 0$, we have:
$$\beta'(0)(\partial_1 \oplus)(p,z(p)) = \beta'(p) \tag{36}$$
Therefore:
\[ \beta'(p) = \beta'(0)(\partial_1 \oplus)(p, z(p)) \tag{37} \]
Integration w.r.t. \( p \) now yields \( \beta = C\beta_\oplus \), with \( C = \beta'(0) \):
\[ \beta(p) = \beta'(0) \int_0^p (\partial_1 \oplus)(x, z(x)) \, dx \tag{38} \]
This proves that (2) holds.

- ((2) implies (3)): Just observe that the conjugacy condition is invariant under scalings. That is, if one assumes that \( \oplus \) is conjugate to \( + \) via \( \beta \):
\[ p \oplus q = \beta^{-1}(\beta(p) + \beta(q)), \]
and if \( K \) is any nonzero constant, then since the inverse of the scaled transformation \( K\beta \) is given by
\[ (K\beta)^{-1}(y) = \beta^{-1}\left(\frac{y}{K}\right), \]
the fact that multiplication by \( K \) distributes over addition yields
\[ p \oplus q = (K\beta)^{-1}((K\beta)(p) + (K\beta)(q)), \]
so that \( \oplus \) is also conjugate to \( + \) via the scaled transformation \( K\beta \). Choosing \( K = 1/C \), one now obtains (3) from (2).

- ((3) implies (4)): If (3) holds, then \( \phi(p, q) := \beta_\oplus(p \oplus q) = \beta_\oplus(p) + \beta_\oplus(q) \) clearly satisfies \( \partial_1 \partial_2 \phi = 0 \).

- ((4) implies (1)): Suppose that (4) holds. Then iterated partial integration shows that there exist functions \( \alpha_1 \) and \( \alpha_2 \) such that
\[ \beta_\oplus(p \oplus q) = \phi(p, q) = \alpha_1(p) + \alpha_2(q) \]
Notice that the functions \( \alpha_i \) are defined only up to an additive constant. Assume without loss of generality that \( \alpha_2(0) = 0 \). Since \( p \oplus 0 = p \), we obtain
\[ \beta_\oplus(p) = \phi(p, 0) = \alpha_1(p) + \alpha_2(0) = \alpha_1(p) \]
Now use the fact that \( 0 \oplus q = q \). We must have
\[ \beta_\oplus(q) = \phi(0, q) = \alpha_1(0) + \alpha_2(q) = \beta_\oplus(0) + \alpha_2(q) = \alpha_2(q) \]
We have now shown that both \( \alpha_1 \) and \( \alpha_2 \) equal \( \beta_\oplus \), so that
\[ \beta_\oplus(p \oplus q) = \phi(p, q) = \beta_\oplus(p) + \beta_\oplus(q) \]
We know that \( \beta_\oplus \) is strictly increasing and therefore invertible on its image, so we conclude that \( \oplus \) is conjugate to the standard arithmetic sum operator \( + \) via the frame transformation \( \beta_\oplus \). This proves (3), and hence (1), and concludes the proof of the Theorem.

**Example 4.** Let us now re-examine the probabilistic combination operator considered in the Example of the Introduction in the light of the above Theorem. Recall the form of the combination operator:
\[ p \oplus q = p + q - pq \]
Notice that \( p \oplus \frac{p}{p-1} = 0 \), i.e. we have \( z(p) = \frac{p}{p-1} \).
The frame transformation \( \beta \) must satisfy:
\[ \beta(p + q - pq) = \beta(p) + \beta(q) \]
Differentiating with respect to \( p \) and letting \( q = \frac{p}{p-1} \) we have:
\[ \beta'(0) \frac{1}{1-p} = \beta'(p) \]
As in the proof of the Theorem we may assume that \( \beta'(0) = 1 \), so that the metric is given by
\[ \beta'(p) = \frac{1}{1-p} \]
and we obtain the frame transformation
\[ \beta(p) = \log \left( \frac{1}{1-p} \right) \]
as given in the Example of the Introduction. A straightforward calculation shows that the composite operator \( \phi := \beta \circ \oplus \) satisfies the partial differential equation given in the Theorem. Indeed, we have
\[ \phi(p, q) = -\log (1 - p - q + pq) \]
so that
\[ \partial_p \phi(p, q) = \frac{1-q}{1-p-q+pq} \]
and therefore
\[ \partial_q \partial_p \phi(p, q) = \frac{1-p-q+pq-(1-q)(1-p)}{(1-p-q+pq)^2} = 0 \]
as claimed. The implication (4) \( \Rightarrow \) (3) of the Theorem thus confirms that we have found a correct conjugating frame transformation \( \beta \). Of course, even before checking that \( \phi \) satisfies the partial differential equation one may have noticed that \( \phi(p, q) \) may be decomposed as follows:
\[ \phi(p, q) = -\log ((1-p)(1-q)) = \log \left( \frac{1}{1-p} \right) + \log \left( \frac{1}{1-q} \right) \]
which is merely a restatement of the conjugacy condition itself. Nonetheless, in more involved examples it may be difficult to see that the analogous decomposition holds for \( \phi \) in such a direct fashion; in such cases it is advantageous to apply the partial differential equation criterion as was done above.

Physical Interpretation of Theorem 3.1

The characterization in Theorem 3.1 of the composition $\phi = \beta \circ \oplus$ as the solution of a partial differential equation may be interpreted in physical terms related to wave propagation. Defining new variables $(x, y)$ from the variables $(p, q)$ by:
$$x = p + q, \quad y = p - q,$$
the partial differential condition on $\tilde{\phi}(x, y) := \phi(p, q)$ becomes
$$\frac{\partial^2 \tilde{\phi}}{\partial x^2} = \frac{\partial^2 \tilde{\phi}}{\partial y^2} \quad (39)$$
which is the classical equation describing linear wave propagation [17]. The new variables $x$ and $y$ may be interpreted as space and time. The old variables $p$ and $q$ represent position as viewed in frames moving at the wave velocity in opposite directions. Taking into account the restriction that in the old variables
$$\phi(p, z(p)) = 0,$$
we have in the new variables
$$\tilde{\phi}(x, y)\Big|_{\frac{x-y}{2} = z\left(\frac{x+y}{2}\right)} = 0 \quad (40)$$
Equations 39 and 40 together constitute a so-called boundary-value problem. As is known from the theory of partial differential equations, an additional boundary condition should be specified in order for the boundary-value problem to be uniquely solvable. For example, information concerning the rate of change of the function $\phi$ in a direction transverse to the "zero curve" $q = z(p)$ (or $\frac{x-y}{2} = z(\frac{x+y}{2})$) would be sufficient. In any case, we see that the function $\phi$ may be constructed by specifying "initial data" on the curve $q = z(p)$ and allowing this information to "propagate" via the wave equation (Eq. 39).
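To make the recovery procedure of Theorem 3.1 concrete, here is a minimal numerical sketch (our illustration, not part of the original text) applied to the operator of Example 4; the finite-difference step and trapezoidal integration are arbitrary implementation choices:

```python
import math

def oplus(p: float, q: float) -> float:
    """Probabilistic combination operator of Example 4."""
    return p + q - p * q

def z(p: float) -> float:
    """Solves p (+) z(p) = 0 for this particular operator."""
    return p / (p - 1.0)

def d1_oplus(p: float, q: float, h: float = 1e-6) -> float:
    """Finite-difference estimate of the partial derivative in Eq. 32."""
    return (oplus(p + h, q) - oplus(p - h, q)) / (2 * h)

def beta_recovered(p: float, steps: int = 10_000) -> float:
    """beta_oplus(p) = integral_0^p (d1 oplus)(x, z(x)) dx (Eq. 32),
    approximated by the trapezoidal rule."""
    xs = [p * i / steps for i in range(steps + 1)]
    ys = [d1_oplus(x, z(x)) for x in xs]
    return sum((ys[i] + ys[i + 1]) / 2 for i in range(steps)) * (p / steps)

p = 0.7
# Both values should agree with log(1/(1-p)) of Eq. 5, i.e. about 1.20397:
print(beta_recovered(p), math.log(1 / (1 - p)))
```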
## 4 Asymptotic consensus growth

In this section we address the growth of the degree of consensus in the presence of multiple sources of information. We assume that an infinite sequence of observations, each having certainty value $p$, is provided to a system that uses a combination function $\oplus$ to aggregate certainty values. The main issue is to quantitatively describe the aggregation of certainty as the number of observations increases without bound.

The issue of the asymptotic consensus growth rate is an important one. For example, the creators of MYCIN encountered difficulties associated with the fact that their combination function leads to very rapid growth of consensus [3]. We will show that our framework allows the growth rate to be controlled by choosing appropriate frame transformations. We will also show that the degree of skepticism of nonlinearly scaled transformation-based combination functions is reflected in the asymptotic consensus value as the number of sources increases.

Concretely, the situation at hand is as follows. Given a combination function $\oplus$ and given a number $p$ between 0 and 1, consider the sequence $(p_n)_{n \in \mathbb{N}}$ defined by:

\begin{align}
p_0 &= 0 \\
p_{n+1} &= p_n \oplus p
\end{align} (41)

In words, $p_n$ is the combined degree of certainty associated with $n$ certainty judgements of value $p$, according to the combination function $\oplus$. We are interested in determining the behavior of $p_n$ for large values of $n$.

### 4.1 Admissible transformation-based combination functions

Let us begin by illustrating the sort of analysis that we are interested in, for the special case of the probabilistic combination function given in the Example of the Introduction. In this case one obtains the following sequence of combined certainty estimates as in Eq. 41:

\begin{align}
p_0 &= 0 \\
p_{n+1} &= p_n + p - p_n p = p_n (1 - p) + p
\end{align} (42)

The $p_n$ are therefore the partial sums of a geometric sequence:

\begin{align}
p_n &= p \sum_{j=0}^{n-1} (1 - p)^j = 1 - (1 - p)^n
\end{align} (43)

and approach the limiting value 1 exponentially fast as $n \to \infty$. Our analysis in terms of frame transformations below will show that this rate of convergence follows from the asymptotic behavior of the inverse frame transformation in this case.

**Proposition 4.1.** Let $\oplus$ be an admissible transformation-based combination function with associated frame transformation $\beta$. Define the sequence $(p_n)$ of combined values as in Eq. 41. Then

\begin{align}
p_n &\to 1 \quad \text{as} \quad n \to \infty
\end{align} (44)

Furthermore, convergence occurs at the rate

\begin{align}
p_n &= \beta^{-1}(Cn)
\end{align} (45)

with $C = \beta(p)$, where $p$ is the confidence value that generates the sequence $(p_n)$.

**Proof.** Start with a combination function based on a frame transformation $\beta$ as in Eq. 11:

\begin{align}
a \oplus_\beta b &= \beta^{-1}(\beta(a) + \beta(b))
\end{align} (46)

The sequence of combined certainty estimates defined in Eq. 41 becomes:

\begin{align}
p_0 &= 0 \\
p_{n+1} &= \beta^{-1}(\beta(p_n) + \beta(p))
\end{align} (47)

Define:
\[ \pi_n = \beta(p_n), \quad \pi = \beta(p) \] (48)
Then one has
\[ \begin{align*} \pi_0 &= 0 \\ \pi_{n+1} &= \pi_n + \pi \end{align*} \] (49)
Therefore:
\[ \pi_n = n\pi, \] (50)
so that the \( p_n \) approach \( \beta^{-1}(\infty) = 1 \) as \( n \to \infty \). The rate of consensus growth is determined by the asymptotic behavior of the frame transformation \( \beta \). Indeed, by Eq. 50 one obtains
\[ p_n = \beta^{-1}(n\beta(p)) \]
This completes the proof of the Proposition.
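As a numerical illustration of Proposition 4.1 (our sketch; the two frames are those of the Introduction and Example 2.2), the gap 1 - p_n shrinks exponentially for the probabilistic frame but only like 1/n for the tangent frame:

```python
import math

def consensus(beta, beta_inv, p: float, n: int) -> float:
    """p_n = beta^{-1}(n * beta(p)), per Eqs. 45 and 50."""
    return beta_inv(n * beta(p))

# Probabilistic frame of the Introduction: beta(x) = log(1/(1-x))
prob = (lambda x: math.log(1.0 / (1.0 - x)),
        lambda y: 1.0 - math.exp(-y))
# Tangent frame of Example 2.2: beta(x) = (2/pi) tan(pi x / 2)
tang = (lambda x: (2 / math.pi) * math.tan(math.pi * x / 2),
        lambda y: (2 / math.pi) * math.atan(math.pi * y / 2))

p = 0.2
for n in (1, 10, 100, 1000):
    print(n, 1 - consensus(*prob, p, n), 1 - consensus(*tang, p, n))
# The gap 1 - p_n behaves like (1-p)^n for the probabilistic frame
# but only like 1/n for the tangent frame.
```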
The preceding Proposition shows that if \( \beta^{-1}(x) \) approaches 1 exponentially fast as \( x \to \infty \), as is the case for the probabilistic combination function considered above, for which \( \beta^{-1}(y) = 1 - e^{-y} \), then \( p_n \) also approaches 1 exponentially fast as \( n \to \infty \). Other growth rates translate from \( \beta^{-1} \) to the sequence \( p_n \) analogously. For example, the tangent frame transformation yields a sequence \( p_n \) that approaches 1 like \( 1/n \). This provides the ability to control the asymptotic consensus growth rate, thus offering a way to avoid the problems encountered with the MYCIN combination function. ### 4.2 Skeptical combination functions Next we are interested in studying the nature of consensus growth for skeptical transformation-based combination functions. Specifically, consider the combination function \( \oplus_t \) corresponding to the frame transformation \( \beta \) including nonlinear scaling by \( t \) as in Eq. 18: \[ p \oplus_t q = \beta^{-1}\left(t\beta\left(\beta^{-1}(\beta(p) + \beta(q))\right)\right) = \beta^{-1}\left(t(\beta(p) + \beta(q))\right) \] (51) In particular, if \( \beta = \tanh^{-1} \) then one has the combination function \[ p \oplus_t q = \frac{\left(\frac{1+p}{1-p}\right)^t - \left(\frac{1-q}{1+q}\right)^t}{\left(\frac{1+p}{1-p}\right)^t + \left(\frac{1-q}{1+q}\right)^t} \] (52) The parameter \( t \) is a positive number but is otherwise free. If \( t = 1 \), this combination function is rather similar to the MYCIN combination function of Eq. 2: \[ p \oplus_1 q = \frac{\left(\frac{1+p}{1-p}\right) - \left(\frac{1-q}{1+q}\right)}{\left(\frac{1+p}{1-p}\right) + \left(\frac{1-q}{1+q}\right)} = \frac{p + q}{1 + pq} \] (53) It was shown above that, regardless of the choice of frame transformation $\beta$, the nonlinearly scaled operator $\oplus_t$ exhibits skeptical behavior when $t < 1$. We measure the degree of skepticism using the notion of marginal skepticism; we showed that the marginal skepticism of $\oplus_t$ is $1 - t$. We study the convergence of the sequence $p_n$ associated by the combination function $\oplus_t$ to a collection of $n$ judgements of certainty $p$. With notation as above we have: \begin{align} p_0 &= 0 \\ p_{n+1} &= \beta^{-1}(t(\beta(p_n) + \beta(p))) \end{align} In contrast to the case of admissible combination functions discussed above, for the nonlinearly scaled combination functions $\oplus_t$ with $t < 1$, the rate of convergence of the $p_n$ toward their limiting value is always exponential. However, the limiting value $p_\infty$ depends on the scaling parameter $t$ and may therefore be controlled. **Proposition 4.2.** Let $\oplus$ be an admissible transformation-based combination function with associated frame transformation $\beta$. Consider the sequence $(p_n)$ defined in terms of the skeptical $t$-version $\oplus_t$ of $\oplus$ as in Eq. 54. Then $$p_n \to \beta^{-1}(C\beta(p)) \quad \text{as} \quad n \to \infty,$$ where $C = t/(1-t)$. In particular, if $t < 1$ then the limiting value is strictly less than 1. The rate of convergence is exponential whenever $t < 1$. **Proof.** Define $$\pi_n = \beta(p_n), \quad \pi = \beta(p)$$ Then one has \begin{align} \pi_0 &= 0 \\ \pi_{n+1} &= t(\pi_n + \pi) \end{align} If $t = 1$, one then sees that $\pi_n = n\pi$, so that the $p_n$ approach $\beta^{-1}(\infty) = 1$ as $n \to \infty$ as described above in our analysis for admissible combination functions. In the case $t < 1$, the linear recurrence in Eq. 
56 may be solved by using the method of variation of constants, yielding:

$$\pi_n = \sum_{j=0}^{n-1} t^{n-j}\pi = \frac{t\pi}{1-t}(1-t^n) \tag{57}$$

Eq. 57 shows that the rate of convergence toward the limiting value is always exponential in the case $t < 1$. It also follows in the case $t < 1$ that the limiting value as $n \to \infty$ is:

$$\pi_\infty = \frac{t\pi}{1-t} \tag{58}$$

The limiting value of the $p_n$ is now obtained from Eq. 58 by using Eq. 55:

$$p_\infty = \beta^{-1}(\pi_\infty) = \beta^{-1}\left(\frac{t\pi}{1-t}\right) \tag{59}$$

This completes the proof.

Proposition 4.2 shows that the asymptotic limit $p_\infty$ of the $p_n$ is obtained from the "seed" value $p$ by a nonlinear scaling transformation with steepness parameter $t/(1-t)$. The limit $p_\infty$ of the $p_n$ is the inverse image via $\beta$ of the finite number on the right-hand side of Eq. 58 and is thus strictly less than 1. For example, if $t = 1/2$ one has $p_\infty = p$. Values of $t$ greater than 1/2 yield values of $p_\infty$ between $p$ and 1, while values of $t$ smaller than 1/2 yield values of $p_\infty$ less than $p$, which is "skeptical" behavior. An asymptotic version of the degree of marginal skepticism of Definition 2.1 may be defined here in a natural way:

$$\sigma_\infty = 1 - \lim_{p \to 0} \frac{p_\infty}{p}$$

It is easy to see that the asymptotic marginal skepticism $\sigma_\infty$ is given here by:

$$\sigma_\infty = 1 - \frac{t}{1-t}$$

In terms of the marginal skepticism $\sigma = 1 - t$ of the combination function $\oplus_t$, one has:

$$\sigma_\infty = 2 - \frac{1}{\sigma}$$

Thus, the asymptotic marginal skepticism is an increasing function of the marginal skepticism of the underlying combination function $\oplus_t$.

**Conclusions**

We have presented a new framework which provides a unified foundation for the construction of combination operators for use in such areas as confidence aggregation in knowledge-based systems, relevance rating combination in information retrieval, and lateralization assessment in neurobiology. Our framework is based on the postulate that different combination operators are warped versions of the standard arithmetic sum operator as viewed in appropriate frames of reference. We have given examples showing that certain probabilistic combination operators and MYCIN-like combination operators arise in this way. In addition to unifying such previously considered operators, our framework provides a nonlinear scaling mechanism that allows one to modify a given combination operator, yielding parametrized families of operators that extend the original operator. We have shown that this feature allows control over the degree of skepticism of the operators, i.e. their sensitivity to new information. We provide an algorithmic method that checks whether a given combination operator fits into our framework and, whenever such a frame exists, constructs an appropriate reference frame relating the operator to the arithmetic sum operator. Furthermore, we have shown that our framework makes it easy to construct new combination operators, merely by selecting among the infinitely many admissible frame transformations available. Finally, we have shown that our framework provides control over the rate at which the combined measure increases when combining a large number of source measures. This should allow one to address the difficulties associated with excessively high convergence rates such as those produced by the ad-hoc combination operator used in the classical knowledge-based system MYCIN.
References

[1] S.A. Alvarez. "Rational comparison of probabilities via a blow-up conjugacy", Technical Report No. 97-NA-010, Center for Nonlinear Analysis, Carnegie Mellon University, Aug. 1997.

[2] S.A. Alvarez, S.L. Levitan, J.A. Reggia. "Metrics for Cortical Map Organization and Lateralization", *Bulletin of Mathematical Biology*, vol. 60 (1998), 27-47.

[3] B.G. Buchanan. Personal communication.

[4] B.G. Buchanan, E.H. Shortliffe (eds.). *Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project*, Addison-Wesley, 1984.

[5] R.J. Davidson, K. Hugdahl. *Brain Asymmetry*, MIT Press, 1995.

[6] L.B. Day, P.F. MacNeilage. "Postural Asymmetries and Language Lateralization in Humans (Homo sapiens)", *Journal of Comparative Psychology*, vol. 110 (1996), no. 1, 88-96.

[7] G. Giacomelli, R. Giacomelli. "Results from Accelerator Experiments: The Left-Right Asymmetry at SLAC", http://axoph1.cern.ch/papers/results_from_acc_exp/results_from_acc_exp.html

[8] W.D. Hopkins. "Hand Preferences for a Coordinated Bimanual Task in 110 Chimpanzees (Pan troglodytes): Cross-Sectional Analysis", *Journal of Comparative Psychology*, vol. 109 (1995), no. 3, 291-297.

[9] S.L. Lauritzen, P.P. Shenoy. "Computing marginals using local computation", preprint.

[10] P. Maruff, D. Hay, V. Malone, J. Currie. "Asymmetries in the Covert Orienting of Visual Spatial Attention in Schizophrenia", *Neuropsychologia*, vol. 13 (1995), no. 10, 1205-1223.

[11] B. O'Neill. *Semi-Riemannian Geometry with Applications to Relativity*, Pure and Applied Mathematics, vol. 103, Academic Press, 1983.

[12] A.R. Palmer, C. Strobeck. "Fluctuating asymmetry as a measure of developmental stability: Implications of non-normal distributions and power of statistical tests", *Acta Zoologica Fennica*, vol. 191 (1992), 55-70. Also see http://gause.biology.ualberta.ca/palmer.hp/pubs/92P+S/92P+S.htm

[13] D. Heckerman. "Probabilistic interpretation for MYCIN's certainty factors", in *Uncertainty in Artificial Intelligence*, L.N. Kanal and J.F. Lemmer, eds., Elsevier / North-Holland, 1986.

[14] S. Russell, P. Norvig. *Artificial Intelligence: A Modern Approach*, Prentice-Hall, 1995.

[15] G. Shafer. *A Mathematical Theory of Evidence*, Princeton University Press, 1976.

[16] P.P. Shenoy, G. Shafer. "Axioms for probability and belief-function propagation", in *Uncertainty in Artificial Intelligence*, 4 (1990), 169-198.

[17] W. Strauss. *Partial Differential Equations: An Introduction*, Wiley, 1992.
Evaluation of a System for Personalized Summarization of Web Contents*

Alberto Díaz\textsuperscript{1}, Pablo Gervás\textsuperscript{2}, and Antonio García\textsuperscript{3}

\textsuperscript{1} CES Felipe II – Universidad Complutense de Madrid firstname.lastname@example.org
\textsuperscript{2} SIP – Universidad Complutense de Madrid email@example.com
\textsuperscript{3} Departamento de Comunicación – Universidad Rey Juan Carlos firstname.lastname@example.org

* This research has been partially funded by the Ministerio de Ciencia y Tecnología (TIC2002-01961).

Abstract. Existing Web personalized information systems typically send to the users the title and the first lines of the chosen items, together with links to the full text. This is, in most cases, insufficient for a user to detect whether the item is relevant or not. An interesting approach is to replace the first sentences by a personalized summary extracted according to a user profile that represents the information needs of the user. On the other hand, it is crucial to measure how much information is lost during the summarization process, and how this information loss may affect the ability of the user to judge the relevance of a given document. The system-oriented evaluation developed in this paper indicates that personalized summaries perform better than generic summaries in terms of identifying documents that satisfy user preferences. We also carried out a user-centred qualitative evaluation, which indicates a high level of user satisfaction with the summarization method described, in consonance with the quantitative results.

1 Introduction

Web content personalization is a technique for reducing information overload through the adaptation of contents to each type of user. A Web personalization system is based on three main functionalities: content selection, user model adaptation, and content generation. For these functionalities to be carried out, they must be based on information related to the user, which must be reflected in his user model or profile [8].

Content selection refers to the choice of the particular subset of all available documents that will be most relevant for a given user, as represented in his user profile or model. User model adaptation is necessary because user needs change over time as a result of his interaction with information [1]. For this reason the user model must be dynamic to adapt to those interest changes. Content generation involves generating a new result web document that contains, for each selected document, some extract considered indicative of its content.

Existing Web personalized information systems typically send to the users the title and the first lines of the chosen items, and links to the full text. This is in most cases insufficient for a user to detect if the item is relevant or not, forcing him to inspect the full text of the document. An interesting approach is to replace the first sentences sent as a sample of a document by a proper summary or extract. *Personalized summarization* is understood as a process of summarization that preserves the specific information that is relevant for a given user profile, rather than information that truly summarizes the content of the news item. The potential of summary personalization is high, because a document that would be useless if summarized in a generic manner may be useful if the right sentences are selected that match the user interest.
If automatic summarization is to be used as part of a process of intelligent information access, it is crucial to have some means of measuring how much information is lost during the summarization process, and how that information loss may affect the ability of the user to judge the relevance of a given document with respect to his particular information needs.

In this paper we focus on a system-oriented and user-centred evaluation of the content generation (summarization) process. Section 2 describes previous work. The multi-tier selection process employed for evaluation is described in section 3. Section 4 describes the personalised summarization method. The experimental set-up and results are given in section 5. Section 6 outlines the main conclusions.

## 2 Relevant Previous Work

Automatic summarization is the process through which the relevant information from one or several sources is identified in order to produce a briefer version intended for a particular user - or group of users - or a particular task [6]. This paper considers indicative summaries of single documents, intended to help the user to decide on the relevance of the original document. Summaries can be *generic*, if they gather the main topics of the document and are addressed to a wide group of readers, or *user adapted*, if the summary is constructed according to the interests of the particular reader that the system is addressing.

Techniques for selection of phrases extract the segments of text that contain the most significant information, selected based on a linear combination of the weights resulting from applying a set of heuristics to each of the units of extraction. These heuristics may be *position dependent*, if they take into account the position that each segment holds in the document; *linguistic*, if they look for certain patterns of significant expressions; or *statistical*, if they include frequencies of occurrence of certain words. The summary results from concatenating the resulting segments of text in the order in which they appear in the original document [4].

There are similar works that use personalized summaries in information retrieval. In this case, the personalization is based on the user query [7, 11]. In particular, in [11] the initial segment of the documents is compared with query-oriented summaries using an IR system. The results are shown to the users as title and initial segment or title and automatic summary. The evaluation was performed with 50 TREC queries with 50 documents per query. Measures were taken of precision, recall, speed of the decision process, number of accesses to the full document, and the subjective opinion of the user about the received information (initial segment or summary). The results show that the query-oriented summaries are significantly more effective than the initial segment for the information retrieval task.

Work on evaluation of item summarization has already shown that indirect evaluation methods of summarization - where summaries are evaluated in terms of their ability to recreate the ranking obtained by the full items when submitted to a given information selection process - provide reasonable means of measuring the amount of information loss involved in summarization. In particular, the selection process used in [7] was keyword-based single-tier over a corpus of 5000 news items and 50 queries from the TREC collection. Generic and personalized summarization heuristics are considered.
The results show that the query-oriented summaries are better than the first sentences and the generic summaries.

On the other hand, existing literature provides different techniques for defining user interests: keywords, stereotypes, semantic networks, neural networks, etc. A particular set of proposals [1, 8] model users by combining long term and short term interests: the short term model represents the most recent user preferences and the long term model represents those expressed over a longer period of time. Various classification algorithms are available for carrying out content selection depending on the particular representation chosen for user models and documents. The feedback techniques needed to achieve a dynamic modeling of the user are based on feedback given by the user with respect to the information elements selected according to his profile. The information obtained in this way can be used to update the user models accordingly, in whatever representation has been chosen.

3 Multi-tier Content Selection

The multi-tier content selection process [2] to be employed in this paper involves a domain specific characterization, an automatic categorization algorithm and a set of keywords (long-term model), and a relevance feedback tier (short-term model).

The first tier of selection corresponds to a domain specific given classification (for digital newspapers, the assignment of news items to sections). For the second tier, the user enters a set of keywords - with an associated weight - to characterize his preferences. These keywords are stored, for each user \( u \), as a term weight vector \((k_u)\). For the third tier the user must choose - and assign weights to - a subset of the 14 categories in the first level of Yahoo! Spain. This information is stored as a matrix where rows correspond to general categories and columns correspond to users \((G_{gu})\). These categories are represented as term weight vectors \((g)\) by training from the very brief descriptions of the first and second level category entries of Yahoo! Spain [5]. In the fourth tier, short-term interests are represented by means of feedback terms obtained from feedback provided by the user over the documents he receives [2]. The term weight vector for each user \((t_u)\) represents the short-term interests of that user, information needs that lose interest for the user over time, so their weight must be progressively decreased.

Documents are downloaded from the web of a daily Spanish newspaper as HTML documents. For each document, title, section, URL and text are extracted, and a term weight vector representation for a document $d$ ($d_d$) is obtained by application of a stop list, a stemmer, and the $tf \cdot idf$ formula for computing actual weights [9]. Each document is assigned the weight associated with the corresponding specific category in the particular user model, which represents the similarity between a document $d$, belonging to a specific category $c$, and a user model $u$ ($s^c_{du}$).
The similarities between a document $d$ and a general category $g$ ($s_{dg}$), between a document $d$ and the keywords of a user model $u$ ($s^k_{du}$), and between a document $d$ and a short-term user model $u$ ($s^t_{du}$) are computed using the cosine formula for similarity within the vector space model [9]:

$$s_{dg} = sim(d_d, g) \quad s^k_{du} = sim(d_d, k_u) \quad s^t_{du} = sim(d_d, t_u)$$ \hspace{1cm} (1)

The similarity between a document $d$ and the general categories of a user model is computed using the following formula:

$$s^g_{du} = \frac{\sum_{i=1}^{14} G_{iu} s_{dg_i}}{\sum_{i=1}^{14} G_{iu}}$$ \hspace{1cm} (2)

The results are integrated using a particular combination of reference frameworks. The similarity between a document $d$ and a user model $u$ is computed as:

$$s_{du} = \frac{\delta s^c_{du} + \varepsilon s^g_{du} + \phi s^k_{du} + \gamma s^t_{du}}{\delta + \varepsilon + \phi + \gamma}$$ \hspace{1cm} (3)

where the Greek letters $\delta$, $\varepsilon$, $\phi$, and $\gamma$ represent the importance assigned to each of the reference frameworks (specific categories, general categories, keywords, and feedback terms, respectively). To ensure significance, the relevance obtained from each reference framework must be normalized.

## 4 Applying Long and Short Term User Models to Personalize Summaries

Our system uses three phrase-selection heuristics to build summaries: two to construct generic summaries, and one for personalized summaries. To generate summaries, a value is assigned to each phrase of the text being summarized, obtained as a weighted combination of the results of the three heuristics. This value is used to select the most relevant phrases, which form an extract of the news item later used as the summary.

The *position heuristic* assigns the highest values to the first five phrases (1, 0.99, 0.98, 0.95, 0.9) of the text [3]. These provide the weights $A_{pd}$ for each phrase $p$ of a news item $d$ using the position heuristic. These values are independent of the user $u$ being considered.

Each text has a number of thematic words, which are representative of its content.\(^1\) To obtain the $M$ most significant words of each document, documents are indexed to provide the weight of each word in each document using the $tf \cdot idf$ method [9]. The *thematic words heuristic* extracts the $M$ most significant non-stoplist words of each text. To obtain the value of each phrase $p$ within the document $d$ using the thematic words heuristic ($B_{pd}$), the number of thematic words appearing in the phrase is divided by the total number of words in the phrase. This is intended to give more weight to sentences with a higher density of thematic words [10]. The values obtained in this way are also independent of the particular user $u$ being considered. We have chosen $M = 8$.

The *personalization heuristic* boosts those sentences that are more relevant to a particular user model. The user model provides a vector of weighted terms ($k_u$) corresponding to the chosen keywords of the long-term model and a vector of weighted terms ($t_u$) corresponding to the feedback keywords of the short-term model.

---

\(^1\) This set of content-based keywords for a document should not be confused with the set of keywords specified by a user to define his interests.
This information is used to calculate the similarity ($C_{pdu}$) between the user model $u$ and each phrase $p$ of news item $d$, assigning the final weight to the sentence as:

$$C_{pdu} = \frac{\chi sim(p_{pd}, k_u) + \beta sim(p_{pd}, t_u)}{\chi + \beta}$$ \hspace{1cm} (4)

where $p_{pd}$ is the term weight vector representing the phrase $p$ of news item $d$, and $sim$ is the cosine formula of the Vector Space Model [9].

The values resulting from each of the three heuristics are combined into a single value ($Z_{pdu}$) for each phrase $p$ of each news item $d$ for each user $u$:

$$Z_{pdu} = \frac{\mu A_{pd} + \nu B_{pd} + \sigma C_{pdu}}{\mu + \nu + \sigma}$$ \hspace{1cm} (5)

The parameters $\mu$, $\nu$ and $\sigma$ allow relative fine-tuning of the different heuristics, depending on whether position ($\mu$), thematic key words ($\nu$) or similarity to the user model ($\sigma$) is considered more desirable. The value of $\sigma$ determines the degree of personalization of the summaries: if $\sigma$ is 0, the resulting summaries are generic, and for $\sigma$ greater than 0 personalization increases proportionally to $\sigma$. Again, to ensure significance, the relevance obtained for each framework must be normalized. The summary is constructed by selecting the top 20% of the ranking of sentences by the value $Z_{pdu}$ and concatenating them according to their original order of appearance in the document.
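To fix ideas, here is a minimal sketch of the phrase-scoring and extraction procedure of formulas (4) and (5) and the 20% selection step. The names mirror the symbols above; the sparse-dictionary vectors and the helper functions are our own illustrative choices, not code from the system.

```python
import math

def cosine(v1, v2):
    """sim() of the Vector Space Model [9], on sparse dict vectors."""
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def z_score(p_vec, a_pd, b_pd, k_u, t_u,
            mu=1.0, nu=1.0, sigma=1.0, chi=1.0, beta=1.0):
    """Z_pdu of formula (5); C_pdu of formula (4) is computed inline."""
    c_pdu = (chi * cosine(p_vec, k_u) + beta * cosine(p_vec, t_u)) / (chi + beta)
    return (mu * a_pd + nu * b_pd + sigma * c_pdu) / (mu + nu + sigma)

def summarize(phrases, scores, ratio=0.20):
    """Keep the top 20% of phrases, re-ordered as in the original text."""
    k = max(1, round(ratio * len(phrases)))
    top = sorted(range(len(phrases)), key=lambda i: scores[i], reverse=True)[:k]
    return " ".join(phrases[i] for i in sorted(top))
```

Setting sigma=0 yields the generic summaries and mu=nu=0 the purely personalized ones, exactly as in the experimental configurations reported below.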
## 5 Evaluation

We have performed two kinds of evaluation. System-oriented evaluation is based on the precision and recall metrics obtained through different configurations of the system, and aims to identify the best way of carrying out the content generation process through its effect on the selection process. User-centred evaluation collects the opinions of the users about the use of summaries instead of the complete news items.

### 5.1 System-Oriented Evaluation

Experiments are evaluated over data collected for 106 users and the news items corresponding to three weeks - the 14 working days of the period 1-19 December 2003 - of the digital edition of the ABC Spanish newspaper [2]. The set of users includes 18 lecturers, 4 teachers, 77 students and 7 professionals from areas outside education. The students come from the fields of computer science, journalism and advertising. The average number of news items per day is 78.5.

To carry out the system-oriented evaluation, judgments from the users are required as to which news items are relevant for each of the days of the experiment. To obtain these judgments, users were requested to check the complete set of news items for each day, stating for each one whether it was considered interesting (positive feedback) or not interesting (negative feedback). As the evaluation process involved an effort for the users, only 37.4 users per day, on average, actually provided judgments. Additionally, some users provided feedback for fewer than 10 news items per day. These users have been excluded from the evaluation in order to obtain more significant results. The final collection contains, on average, 28.6 users per day.

For evaluating summarization, the effect of selection (formula (3) with $\delta = \varepsilon = \phi = \gamma = 1$) over the different types of summaries is measured. This involves checking what results are obtained, as compared with the user judgments, if news items are selected based on their summaries instead of their full text.

Normalized recall and precision are used as evaluation metrics, since the users' binary relevance judgments are compared against the ranking provided by the system [9]. These metrics measure the difference between an ideal ranking, with the relevant documents at the top, and the actual ranking provided by the system. In contrast, the plain recall and precision metrics are computed with respect to a selected fixed number of documents and do not use information about the ranking. Data are considered statistically significant if they pass the *sign test*, with paired samples, at a level of significance of 5% ($p \leq 0.05$) [9].
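For concreteness, a sketch of the two metrics follows. We assume the textbook definitions of normalized recall and precision given by Salton [9]; since the exact computational variant used in the experiments is not spelled out above, this is an illustration rather than a reproduction of the evaluation code.

```python
import math

def normalized_metrics(ranks, n_docs):
    """Normalized precision and recall in the sense of Salton [9].

    ranks: 1-based positions of the relevant documents in the system
    ranking; n_docs: total number of documents ranked that day.
    """
    n = len(ranks)
    ideal = range(1, n + 1)            # relevant documents ranked at the top
    r_norm = 1.0 - (sum(ranks) - sum(ideal)) / (n * (n_docs - n))
    # log C(n_docs, n) computed via log-gamma to avoid huge factorials
    log_binom = (math.lgamma(n_docs + 1) - math.lgamma(n + 1)
                 - math.lgamma(n_docs - n + 1))
    p_norm = 1.0 - (sum(math.log(r) for r in ranks)
                    - sum(math.log(i) for i in ideal)) / log_binom
    return p_norm, r_norm

# Example: 3 relevant items ranked 1st, 3rd and 8th out of 78 news items
print(normalized_metrics([1, 3, 8], 78))   # ~(0.88, 0.97)
```

Both metrics equal 1 for a perfect ranking and decrease as relevant documents drift down the ranking, which is what makes them sensitive to the information lost by each summarization method.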
### 5.1.1 Experiment 1. Personalized Summaries

The generation of personalized summaries (formula (5) with $\mu = \nu = 0$ and $\sigma = 1$) combines the long-term model (keywords provided by the user) and the short-term model (feedback terms obtained from the interaction with the user). Several evaluation collections have been generated for each user. Each one of them is obtained by summarizing the complete set of original news items according to one of the methods for generating personalized summaries indicated above (formula (4)). There is a collection for each user of personalized summaries generated using the short-term model ($P_s(S)$: $\chi = 0, \beta = 1$), a different collection for each user generated using the long-term model ($P_s(L)$: $\chi = 1, \beta = 0$) and a third collection for each user generated using a combination of the long-term and short-term models ($P_s(LS)$: $\chi = 1, \beta = 1$). In each case, values of normalized recall and precision have been computed. These experiments have been repeated for all users over the 14 days of evaluation. The results for the three types of personalized summaries have been compared only from the second day on, to allow for the fact that on the first day there is no short-term model based on user feedback. If different summarization methods lead to different degrees of loss of relevant information, the resulting rankings will differ amongst them in a proportional way.

The results shown in Table 1 show that the combination of the long- and short-term models for the generation of personalized summaries provides significantly better results than the use of each model separately, in terms of normalized precision (1.6% against the long-term model only, 2.8% against the short-term model only). As an additional result, it is observed that the short-term model on its own is better than the long-term model in terms of normalized precision (1.2%), though not significantly so. In terms of normalized recall, the results are similar: a significant improvement of the long-term/short-term combination over both the short- and long-term models on their own, and a non-significant improvement of the short-term model over the long-term one. The use of both heuristics adjusts the summaries better to the preferences of the user, as shown by the higher values of precision and recall. The slightly better results for the short-term model could be due to the fact that the terms introduced by the user in his long-term model are in general too specific, whereas those obtained through user feedback are terms that appear in the daily news.

Table 1. Normalized precision (P) and recall (R) for different combinations of the long- and short-term models for generating personalized summaries

| | P | R |
|----------------|-------|-------|
| Ps(LS) | 0.592 | 0.684 |
| Ps(S) | 0.583 | 0.678 |
| Ps(L) | 0.576 | 0.674 |

From here on, mentions of personalized summaries (Ps) refer to the personalization obtained by means of a combination of the long- and short-term models.

### 5.1.2 Experiment 2. Heuristic Combination for Summary Generation

Experiment 2 tests whether summaries obtained by using only the personalization heuristic are better, in terms of precision with respect to the information selected by the user (selection via formula (3) with $\delta = \varepsilon = \phi = \gamma = 1$), than other summaries (including the first lines of the document) but worse than the complete news item. The following types of summaries are involved (formula (5), with formula (4) using $\chi = \beta = 1$): Fs (baseline reference), the 20% first phrases of the corresponding news item; Gs, using the generic heuristics ($\mu = 1$, $\nu = 1$, $\sigma = 0$); Ps, using the personalization heuristic ($\mu = 0$, $\nu = 0$, $\sigma = 1$); GPs, using both types of heuristics ($\mu = 1$, $\nu = 1$, $\sigma = 1$). Several different evaluation collections - each one consisting of summaries obtained from the news items in the original collection by applying a different summarization method - are built for each user. The multi-tier selection process is applied to each one of these collections, using the corresponding user profile as the source for user interests. In each case, the values of normalized recall and precision have been computed in experiments that have been repeated over the 14 days for all users.

Table 2. Normalized precision (P) and recall (R) for news item (N), personalized (Ps), generic-personalized (GPs), generic (Gs) and first phrases (Fs) summaries

| | N | Ps | GPs | Fs | Gs |
|-----|------|------|------|------|------|
| P | 0.603| 0.593| 0.584| 0.581| 0.577|
| R | 0.694| 0.686| 0.680| 0.678| 0.675|

Personalized summaries (Ps) offer better results (Table 2) with respect to normalized precision and recall than generic-personalized summaries (GPs), though the difference is not significant. With respect to baseline summaries (Fs) and generic summaries (Gs) the difference is significant. Generic-personalized summaries (GPs) are better than baseline summaries (Fs), and baseline summaries (Fs) are better than generic summaries (Gs), but the differences involved are not statistically significant. Personalized summaries are worse than full news items (N) under the same criteria. This suggests that the personalization heuristic generates the summaries best adapted to the user, followed by the combination of all possible heuristics. Baseline summaries using the first lines of each news item are better than those generated by a combination of the position and keyword heuristics: for newspaper articles, the generic heuristics do not improve on simply taking the opening lines. This technique has been used in similar works with similar results. In [7] the query-oriented summaries (title, location, thematic and query heuristics) obtained significantly better average precision than generic summaries and first sentences, and the full document improved on the adapted summaries, but not significantly. In [11] the query-oriented summaries showed better effectiveness than the initial segment.

### 5.2 User-Centred Evaluation

The qualitative user-centred evaluation was based on a questionnaire that users completed after using the system.
In most questions there were five options to indicate the degree of satisfaction: very high, high, medium, low and very low. A total of 38 users completed the final evaluation.

Users indicated that the summaries were of high or very high quality in 83.3% of the cases, with 5.6% rating them very low. Concerning the coherence and clarity of the summaries, the results were as follows: 81.1% valued them as high or very high, and 5.4% as low or very low. With respect to the ability of the system to avoid redundancies, the evaluation was high or very high for 69.4% of the users, against 2.8% rating it low. At the same time, the adaptation of the summary to the user profile was considered high by 59.5% of the users, and low or very low by 8.1%. The degree of adaptation of the summaries to the information needs was high or very high in 70.3% of the cases, and low or very low in 10.8%. Regarding the extent to which the summaries reflect the content of the original documents, for 81.1% of the users this extent was high or very high, and it was low or very low for 5.4%. Finally, 89.5% of the users considered that the main ingredients of the news item are represented in the summary. The other 10.5% indicated that at times the summaries were too brief to include them.

Most users consider that the summaries are of high quality, coherent, and clear, and that they reflect the content and the main ingredients of the corresponding document. Most of them also consider, though to a lesser degree, that the summaries contain no redundancies and that they are well adapted to the user profile and user needs. This positive evaluation indicates that the method of sentence selection for the construction of summaries is a valid approach for content generation in the face of possible problems of clarity, coherence and redundancy.

Users indicated that they sometimes used the summaries to establish the relevance of a news item. This was said to happen often by 48.6% of the users, sometimes by 29.7% and rarely by 21.6%. Against these data, 89.2% of the users relied on the heading often, and 10.8% only did so in some cases. The section heading was used sometimes by 45.9%, often by 29.7%, rarely by 13.5% and never by 10.8%. The stated relevance was used sometimes by 35.1% of the users, rarely by 24.3%, never by 21.6% and often by 18.9%. Finally, the full news item was used rarely by 51.4% of the users, sometimes by 29.7% and never by 18.9%. In conclusion, the summary becomes an important element for establishing the relevance of a news item.

## 6 Conclusions

We can conclude that personalized summaries that use a combination of the long- and short-term models are better than other types of summaries in terms of normalized precision and recall. Full news items offer only a slight improvement over personalized summaries, which seems to indicate that the loss of information for the user is very small with this type of summary.

Generic summaries perform very closely to summaries obtained by taking the first few lines of the news item. This seems to indicate that the position heuristic is overpowering the thematic word heuristic, which may be corrected by refining the choice of weights. Although a first-sentences approach may provide good results for indicative summarization, it does not do so well for personalized summarization, where it is crucial to retain in the summary those specific fragments of the text that relate to the user profile.
This explains why the generic-personalized summaries perform so poorly in spite of being a combination of good techniques: given a fixed limit on summary length, the inclusion of sentences selected by the generic heuristics in most cases pushes out of the final summary information that would have been useful from the point of view of personalization.

The user-centred evaluation further supports the idea that offering users summaries of the news items helps to decrease the information overload on the users. As shown by these results, the possible problems of sentence extraction as a summary construction method do not affect performance in the present context of application. The fact that the summaries are said to be employed by users much more often than the full original text or the stated relevance to determine how relevant a news item is to them justifies the content generation method described in this paper. We can conclude that user-adapted summaries are a useful tool to assist users in a personalization system. Notwithstanding, the information in these summaries cannot replace the full-text document from an information retrieval point of view.

References

1. Billsus, D. & Pazzani, M.J.: User Modeling for Adaptive News Access. User Modeling and User-Adapted Interaction Journal 10(2-3) (2000) 147-180
2. Díaz, A. & Gervás, P.: Adaptive User Modeling for Personalization of Web Contents. Third International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems (AH2004). LNCS 3137. Springer-Verlag (2004) 65-75
3. Edmundson, H.: New methods in automatic abstracting. Journal of the ACM 16(2) (1969) 264-285
4. Kupiec, J., Pedersen, J., Chen, F.: A trainable document summarizer. Research and Development in Information Retrieval (1995) 68-73
5. Labrou, Y. & Finin, T.: Yahoo! as an Ontology: Using Yahoo! Categories to Describe Documents. Proceedings of the 8th International Conference on Information and Knowledge Management (CIKM-99). ACM Press (1999) 180-187
6. Mani, I. & Maybury, M.: Advances in Automatic Text Summarization. The MIT Press (1999)
7. Maña, M., Buenaga, M., Gómez, J.M.: Using and evaluating user directed summaries to improve information access. Proceedings of the Third European Conference on Research and Advanced Technology for Digital Libraries (ECDL1999). LNCS 1696. Springer-Verlag (1999) 198-214
8. Mizzaro, S. & Tasso, C.: Ephemeral and Persistent Personalization in Adaptive Information Access to Scholarly Publications on the Web. Second International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems (AH2002). LNCS 2347. Springer-Verlag (2002) 306-316
9. Salton, G.: Automatic Text Processing: The Transformation, Analysis and Retrieval of Information by Computer. Addison-Wesley Publishing (1989)
10. Teufel, S. & Moens, M.: Sentence extraction as a classification task. Proceedings of the ACL/EACL Workshop on Intelligent Scalable Text Summarization. Madrid, Spain (1997) 58-65
11. Tombros, A. & Sanderson, M.: Advantages of query biased summaries in information retrieval. Proceedings of the 21st ACM SIGIR Conference (1998) 2-10
Planetary Boundary-Layer Modelling and Tall Building Design

Emil Simiu\(^1\) · Liang Shi\(^1\) · DongHun Yeo\(^1\)

Received: 8 April 2015 / Accepted: 22 October 2015 / Published online: 12 November 2015
© Springer Science+Business Media Dordrecht (outside the USA) 2015

Abstract Characteristics of flow in the planetary boundary layer (PBL) strongly affect the design of tall structures. PBL modelling in building codes, based as it is on empirical data from the 1960s and 1970s, differs significantly from contemporary PBL models, which account for both “truly neutral” flows and “conventionally neutral” flows. PBL heights estimated in these relatively sophisticated models are typically approximately half as large as those obtained using the classical asymptotic similarity approach, and one order of magnitude larger than those specified in North American and Japanese building codes. A simple method is proposed for estimating the friction velocity and PBL height as functions of specified surface roughness and geostrophic wind speed. Based on published results, it is tentatively determined that, even at elevations as high as 800 m above the surface, the contribution to the resultant mean flow velocity of the component \(V\) normal to the surface stress is negligible, and the veering angle is of the order of only 5°. This note aims to encourage dialogue between boundary-layer meteorologists and structural engineers.

Keywords Boundary-layer meteorology · Brunt–Väisälä frequency · Conventionally neutral stratification · Planetary boundary layer · Tall structures

1 Introduction

For structural engineering purposes, mean wind speeds in the turbulent planetary boundary layer (PBL) are currently modelled in North America and Japan by strictly empirical power laws developed essentially in the 1960s (Davenport 1965; Canadian Structural Design Manual 1971; AIJ recommendations for loads on buildings 2004; ASCE 7-10 Standard 2010). According to these models, wind speeds increase monotonically within the boundary layer up to the gradient height (the term “gradient height” being applied in such models to both geostrophic and cyclostrophic winds), specified to be approximately 200–250 m above ground level for water surface exposures, 300–350 m for open terrain exposures, and 400–450 m for suburban terrain exposures. (The term “exposure” indicates that the surface roughness is uniform over a sufficiently long distance (the fetch) upwind of the structure of interest.) The power-law model further assumes that above the gradient height the flow is free of turbulence and the mean wind speed does not vary with height. Barotropic conditions are assumed.
Using asymptotic methods, the following results were obtained in the 1960s and 1970s: (1) the PBL height $H \approx 0.25u_*/f$ ($u_*$ is the friction velocity, $f$ is the Coriolis parameter) (e.g., Csanady 1967; Blackadar and Tennekes 1968; Tennekes 1973; Simiu and Scanlan 1996), that is, about one order of magnitude greater than given in the power-law model; (2) the mean flow in the PBL can be represented as a spiral structure, with components $U(z)$ and $V(z)$ parallel and normal to the surface stress, respectively; (3) the variation with height of the $U(z)$ component is logarithmic up to the geostrophic height $H$, i.e., $U(z) = (u_*/k)\ln(z/z_0)$ ($k \approx 0.41$ is the von Kármán constant, $z_0$ is the aerodynamic roughness length); (4) the component $V(z)$ is vanishingly small throughout the surface layer, the height of which is $H_s \approx 0.1H$, implying that the resultant mean wind speed is approximately $U(z)$ and the logarithmic law may be used for structural design purposes up to an elevation $H_s$; (5) at the top of the PBL $|V(H)| \approx 5u_*/k$ (Csanady 1967); and (6) as an artifact of the asymptotic method used to derive these results, at all other elevations $V(z)$ vanishes, that is, $V(z) = V(H)\delta(H)$, where $\delta(H)$ is the Dirac delta function (see Eq. 23, Appendix 1), a result that is physically unrealistic and is commented upon in Appendix 1.

Recently, computational fluid dynamics (CFD) has emerged as an approach that makes it possible to estimate the variation of the component $V(z)$ with height. Equally importantly, it is now well established that the stratification of the free flow, the flow at elevations $z > H$, plays an important role in determining the characteristics of the PBL. According to Zilitinkevich and Esau (2002) and Zilitinkevich (2012), among others, neutrally-stratified flows can be either of the “truly neutral” or the “conventionally neutral” type. “Truly neutral” flows are characterized by a Kazanski–Monin surface buoyancy-flux parameter $\mu = 0$ and a non-dimensional number $\mu_N = N/|f| = 0$, where $N$ is the Brunt–Väisälä frequency. Zilitinkevich et al. (2007) note that “truly neutral flows are observed during comparatively short transition periods after sunset on a background of residual layers of convective origin, … are often treated as irrelevant because of their transitional nature, and are usually excluded from data analysis;” “neutrally stratified PBLs are almost always conventionally neutral,” that is, neutral and developing against a background stable stratification. They are characterized by $\mu = 0$, $\mu_N \neq 0$; typically $50 < \mu_N < 300$ (Zilitinkevich and Esau 2002; Zilitinkevich et al. 2007). Owing to strong mechanical (as opposed to thermal) turbulent mixing within the PBL, it is typically assumed for structural engineering purposes that, for strong winds, $\mu = 0$. For additional details, see Appendix 2.

The failure of the asymptotic similarity approach to account for the stable stratification of the flow immediately above the PBL results in incorrect predictions, for realistic (“conventionally neutral”) barotropic PBL flows, of the height $H$, the cross-isobaric (veering) angle $\alpha_0$ and its variation with height, and the geostrophic drag coefficient $C_g = u_*/G$, where $G$ denotes the geostrophic wind speed. No current science-based information on the PBL is used at this time in tall-building design.
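To make the contrast between the two descriptions concrete, the sketch below compares a code-style power-law profile, capped at the gradient height, with the logarithmic profile of result (3). The exponent 1/7, the 350-m gradient height (the upper end of the open-terrain range quoted above) and the matching of the two profiles at 10 m are our illustrative assumptions, not values prescribed by any of the documents cited.

```python
import math

KAPPA = 0.41  # von Karman constant

def u_log(z, u_star, z0):
    """Logarithmic profile of result (3): U(z) = (u*/k) ln(z/z0)."""
    return (u_star / KAPPA) * math.log(z / z0)

def u_power(z, u_grad, z_grad=350.0, alpha=1.0 / 7.0):
    """Code-style power law (illustrative exponent and gradient height);
    the mean speed is constant above the gradient height."""
    return u_grad * min(z / z_grad, 1.0) ** alpha

# Open terrain, u* = 2.5 m/s, z0 = 0.03 m; anchor both profiles at z = 10 m.
u10 = u_log(10.0, 2.5, 0.03)
u_grad = u10 / (10.0 / 350.0) ** (1.0 / 7.0)
for z in (10.0, 100.0, 350.0, 800.0):
    print(f"z = {z:5.0f} m   log law {u_log(z, 2.5, 0.03):5.1f} m/s"
          f"   power law {u_power(z, u_grad):5.1f} m/s")
```

The two profiles nearly coincide below the gradient height, but the power law freezes at 350 m whereas the logarithmic profile keeps increasing, which is the discrepancy discussed in the remainder of this note.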
This note has three objectives: first, it recapitulates progress achieved in recent decades in the understanding and quantification of PBL characteristics of interest in tall-building design. Second, it presents a contribution to the development of criteria for such design. Last, but not least, it identifies the needs of tall-building designers so that improved design criteria can be developed.

2 Integral Measures of the Conventionally Neutral PBL

2.1 Geostrophic Drag Coefficient $C_g$ and Cross-Isobaric Angle $\alpha_0$

For Zilitinkevich numbers typical of conventionally neutral flows (i.e., $0.5 \times 10^2 < \mu_N < 3 \times 10^2$), the dependence of the geostrophic drag coefficient $C_g = u_*/G$ and the cross-isobaric angle $\alpha_0$ upon the Rossby number $Ro = G/(|f|z_0)$ can be represented by the following expressions, based on measurements by Lettau (1962):

$$C_g = 0.205/(\log_{10} Ro - 0.556), \quad (1)$$

$$\alpha_0 = (173.58/\log_{10} Ro) - 3.03 \quad (2)$$

(Kung 1966; Hess and Garratt 2002, p. 338). Curves plotted in Fig. 2a, b of Zilitinkevich and Esau (2002) closely match Eqs. 1 and 2. As shown in the following example, the quantities $G$, $C_g$, and $\alpha_0$ are obtained for any given $u_*$, $f$, and $z_0$ by using Eqs. 1 and 2.

Example 1 Assume $z_0 = 0.03$ m (open terrain exposure, see ASCE 7-10 2010), $u_* = 2.5$ m s$^{-1}$, $f = 10^{-4}$ s$^{-1}$. Since $u_*$, $f$, and $z_0$ are given, $C_g = u_*/G$, and $Ro = G/(|f|z_0)$, the only unknown in Eq. 1 is the geostrophic wind speed $G$. Equation 1 yields $G \approx 83$ m s$^{-1}$. Equation 2 then yields $\alpha_0 \approx 20^\circ$.

2.2 PBL Height $H$

Zilitinkevich et al. (2007) proposed the following expression, applicable to flows for which the Kazanski–Monin surface buoyancy-flux parameter $\mu \approx 0$:

$$\frac{1}{H^2} = \left[ \frac{f^2}{(C_R)^2} + \frac{N |f|}{(C_{CN})^2} \right] \frac{1}{u_*^2}, \quad (3)$$

where $C_R \approx 0.6$ and $C_{CN} \approx 1.36$. The non-dimensional form of $H$ is

$$C_h(N, f) = Hf/u_* . \quad (4)$$

The application of Eqs. 3 and 4 is illustrated in the following example.

Example 2 For $u_* = 2.5$ m s$^{-1}$, $f = 10^{-4}$ s$^{-1}$ and $\mu_N = 100$ (i.e., $N = 0.01$ s$^{-1}$), Eq. 3 yields $H \approx 3300$ m and $C_h \approx 0.13$. In contrast, according to asymptotic estimates (e.g., Csanady 1967), $H \approx 0.25 \times 2.5/10^{-4} = 6250$ m (see Appendix 1, Eq. 9).

3 PBL Flows for Different Surface Roughness Regimes

Wind-speed fields are developed for structural engineering purposes under the assumption that the terrain has $z_0 \approx 0.03$ m over a sufficiently long fetch (i.e., that it corresponds in structural engineering terms to the category “open terrain exposure,” see, e.g., Simiu and Scanlan 1996). Since structures commonly do not have “open terrain exposure,” it is necessary to estimate, as functions of the surface roughness $z_{01} \neq z_0$, the friction velocity $u_{*1}$ and the PBL height $H_1$ in a storm event that induces, in terrain with open exposure, a friction velocity $u_*$. Such estimates are based on the fact that, in large-scale storms, the geostrophic wind speed $G$ is the same in both roughness regimes. Examples 3 and 4 below consider, respectively, the cases of suburban and ocean versus open exposure.
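Examples 1 and 2 above (and Examples 3 and 4 below) can be reproduced with a few lines of code. The sketch below implements Eqs. 1–4; the fixed-point iteration used to solve Eq. 1 for $G$ is our own choice of numerical method and is not prescribed in the text.

```python
import math

C_R, C_CN = 0.6, 1.36  # constants of Eq. 3 (Zilitinkevich et al. 2007)

def geostrophic_from_ustar(u_star, f, z0, g0=50.0, tol=1e-9):
    """Solve Eq. 1 for G, given u*, f and z0 (fixed-point iteration)."""
    g = g0
    for _ in range(100):
        g_new = u_star * (math.log10(g / (abs(f) * z0)) - 0.556) / 0.205
        if abs(g_new - g) < tol:
            break
        g = g_new
    return g

def alpha0(g, f, z0):
    """Cross-isobaric angle of Eq. 2, in degrees."""
    return 173.58 / math.log10(g / (abs(f) * z0)) - 3.03

def pbl_height(u_star, f, n_bv):
    """PBL height H from Eq. 3 and its non-dimensional form C_h (Eq. 4)."""
    inv_h2 = (f**2 / C_R**2 + n_bv * abs(f) / C_CN**2) / u_star**2
    h = 1.0 / math.sqrt(inv_h2)
    return h, h * abs(f) / u_star

# Examples 1 and 2: open terrain, u* = 2.5 m/s, f = 1e-4 1/s, N = 0.01 1/s
G = geostrophic_from_ustar(2.5, 1e-4, 0.03)
print(G, alpha0(G, 1e-4, 0.03))      # ~83-84 m/s, ~20 deg
print(pbl_height(2.5, 1e-4, 0.01))   # ~3300 m, C_h ~0.13

# Example 3: the same storm over suburban terrain (z01 = 0.3 m):
# G is unchanged, so Eq. 1 can be inverted directly for u*1.
c_g1 = 0.205 / (math.log10(G / (1e-4 * 0.3)) - 0.556)
print(G * c_g1)                      # ~2.9 m/s
```

The same two-step procedure (solve for $G$ over open terrain, then invert Eq. 1 for the new roughness) underlies Examples 3 and 4.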
**Example 3** It was shown in the previous section that, given a surface with open exposure ($z_0 = 0.03 \text{ m}$), with $f = 10^{-4} \text{ s}^{-1}$, to a storm that produces a friction velocity $u_* = 2.5 \text{ m s}^{-1}$ there corresponds a geostrophic wind speed $G \approx 83 \text{ m s}^{-1}$. In accordance with the definition of $Ro$, for suburban terrain exposure ($z_{01} = 0.3 \text{ m}$ over a sufficiently long fetch), to $G = 83 \text{ m s}^{-1}$ there corresponds $\log_{10} Ro = \log_{10}[83/(10^{-4} \times 0.3)] = 6.44$. From Eq. 1, $C_g = 0.035$, so $u_{*1} = 83 \times 0.035 \approx 2.9 \text{ m s}^{-1}$, and the cross-isobaric angle is $\alpha_{01} \approx 24^\circ$. From Eq. 3 there follows $C_{h1} = 0.13$ and $H_1 = 2.9 \times 0.13/10^{-4} \approx 3800 \text{ m}$, vs. the asymptotic estimate $H = 7250 \text{ m}$ (Eq. 9).

**Example 4** For ocean surfaces, assuming $G = 83 \text{ m s}^{-1}$ and $z_0 = 0.003 \text{ m}$, $\log_{10} Ro = \log_{10}[83/(10^{-4} \times 0.003)] = 8.44$, and $C_g \approx 0.026$, so $u_{*1} = 83 \times 0.026 = 2.15 \text{ m s}^{-1}$, and $\alpha_{01} \approx 18^\circ$. It follows that $H_1 = 2800 \text{ m}$ and $C_h = 0.13$ (vs. the asymptotic estimate $H = 5400 \text{ m}$). Note that a structure built near the coastline and exposed to a wind direction from the ocean will be subjected to winds corresponding to ocean surface exposure.

The calculated heights $H$ of the PBL are approximately half their counterparts obtained by using asymptotic methods, and an order of magnitude greater than their counterparts specified in the ASCE 7-10 and other standards on wind loads.

4 Effect of Veering on PBL Flow: A Case Study

Information on the variation with height $z$ of the velocity components $U(z)$ and $V(z)$ (and therefore of their resultant) and of the angle $\alpha(z) = \tan^{-1}[V(z)/U(z)]$ is currently obtained from CFD simulations. We now consider a simulation reported in Hess (2004), in which the coefficient $C_h$ and the height $H$ are denoted by $h_*$ and $z_t$, respectively (see Eq. 27, p. 320, and p. 321 therein), and $C_h \equiv h_* = 0.10$. Figures 1 and 2 show the dependence on height $z$ of $U(z)$ and $V(z)$, of their resultant, and of the angle $\alpha(z)$, as represented in Fig. 6 of Hess (2004).

**Example 5** Consider the following parameters: $f = 10^{-4} \text{ s}^{-1}$, $N = 0.018 \text{ s}^{-1}$ (so $\mu_N = 180$), $z_0 = 0.3 \text{ m}$, and $u_* = 1.5 \text{ m s}^{-1}$. It can be verified by using Eq. 3 that $C_h \approx 0.10$, so $H = 0.10 \times 1.5/10^{-4} = 1500 \text{ m}$. Further, solving Eq. 1 yields $G \approx 41 \text{ m s}^{-1}$ (corresponding to $\log_{10} Ro = 6.14$ and $u_*/G \approx 0.037$), and Eq. 2 yields $\alpha_0 \approx 25^\circ$. For $z = 300 \text{ m}$, $z/H = 0.20$, and for $z = 800 \text{ m}$, $z/H = 0.53$. Figure 1 shows that the component $V$ (800 m) and, a fortiori, the component $V$ (300 m), have negligible contributions to the resultant mean wind speed, and that the veering angles $\alpha$ (300 m) and $\alpha$ (800 m) are approximately $2^\circ$ and $6^\circ$, respectively. Results for $C_h = 0.19$, based on Fig. 7 of Hess (2004), are also included in Figs. 1 and 2.

5 Conclusions

Numerical results obtained for cases of interest for tall-structure design, and believed to be reasonably representative, suggest that:

1. Mean wind speeds increase monotonically with height up to considerably higher elevations than those inherent in the power-law models specified by current codes and standards.
This can affect the design of structures with heights greater than the gradient heights specified in the ASCE 7-10 and other standards on wind loads.

2. The contribution to the resultant mean wind speed of the component $V(z)$ normal to the surface stress is negligible for elevations of the order of, say, 1 km and lower.

3. The veering angle was found to be small (e.g., approximately $2^\circ$ and $6^\circ$ for elevations $z$ of 300 m and 800 m, respectively).

4. Given a storm with winds characterized by the friction velocity $u_*$ at a location with surface roughness $z_0$, simple calculations allow the estimation of the friction velocity induced by the same storm at a nearby location where the surface roughness differs from $z_0$.

Numerical examples presented herein illustrate these points. In the authors' view, efforts to improve current tall-building structural design practices would benefit from the dialogue this note attempts to initiate between PBL meteorologists and structural engineers.

**Acknowledgments** The authors wish to express their appreciation to the reviewers for their thorough review and helpful comments.

**Appendix 1: Mean Velocity Field Model Based on Classical Asymptotic Approach**

The purpose of this Appendix is to show that the asymptotic approach yields a physically unrealistic representation of the variation of the velocity component $V$ with height. The starting point of the asymptotic approach is the partitioning of the neutral boundary layer into two regions, an (inner) surface layer and an outer layer. In the surface layer the shear stress $\tau_0$ induced by the boundary-layer flow at the Earth's surface must depend upon the flow velocity at a distance $z$ from the surface, the roughness length $z_0$, and the density $\rho$ of the air, that is,

$$\tau_0 \mathbf{i} = F(U \mathbf{i} + V \mathbf{j}, z, z_0, \rho), \tag{5}$$

where $U$ and $V$ are the components of the mean wind velocity along the $x$ and $y$ directions and $\mathbf{i}$, $\mathbf{j}$ are unit vectors. Equation 5 can be written in the non-dimensional form

$$\frac{U \mathbf{i} + V \mathbf{j}}{u_*} = \psi_{1x} \left( \frac{z}{z_0} \right) \mathbf{i} + \psi_{1y} \left( \frac{z}{z_0} \right) \mathbf{j}, \tag{6}$$

where

$$u_* = \left( \frac{\tau_0}{\rho} \right)^{1/2} \tag{7}$$

is the friction velocity and $\Psi_1 = \Psi_{1x} \mathbf{i} + \Psi_{1y} \mathbf{j}$ is a vector function to be determined. Equation 6 is known as the *law of the wall*, which is applicable in the surface layer, and can be written in the form

$$\frac{U \mathbf{i} + V \mathbf{j}}{u_*} = \psi_{1x} \left( \frac{z}{H} \frac{H}{z_0} \right) \mathbf{i} + \psi_{1y} \left( \frac{z}{H} \frac{H}{z_0} \right) \mathbf{j}, \tag{8}$$

where

$$H = cu_*/f, \tag{9}$$

$H$ denotes the boundary-layer depth, and on the basis of data available in the 1960s it was assumed in Csanady (1967) that $c \approx 0.25$. The mean velocity components $U(H)$ and $V(H)$ are denoted by $U_g$ and $V_g$, respectively. Their resultant, denoted by $G$, is the magnitude of the geostrophic velocity. In the outer layer it can be asserted that, at height $z$, the velocity reduction with respect to $G$ must depend upon the surface shear stress $\tau_0$, the boundary-layer depth $H$, and the air density $\rho$.
The expression for this dependence in non-dimensional form is known as the *velocity defect law*,

$$\frac{U \mathbf{i} + V \mathbf{j}}{u_*} = \frac{U_g \mathbf{i} + V_g \mathbf{j}}{u_*} + \psi_{2x} \left( \frac{z}{H} \right) \mathbf{i} + \psi_{2y} \left( \frac{z}{H} \right) \mathbf{j}, \tag{10}$$

where $\Psi_2$ is a vector function to be determined. Consider, in Eqs. 6 and 10, the $x$ components

$$\frac{U \mathbf{i}}{u_*} = \psi_{1x} \left( \frac{z}{H} \frac{H}{z_0} \right) \mathbf{i}, \tag{11}$$

$$\frac{U \mathbf{i}}{u_*} = \frac{U_g \mathbf{i}}{u_*} + \psi_{2x} \left( \frac{z}{H} \right) \mathbf{i}. \tag{12}$$

From the observation that a multiplying factor inside the function $\Psi_{1x}$ must be equivalent to an additive term outside the function $\Psi_{2x}$ - a requirement that can only be satisfied by logarithmic functions - the following are obtained,

$$\frac{U}{u_*} = \frac{1}{k} \left( \ln \frac{z}{H} + \ln \frac{H}{z_0} \right), \tag{13}$$

$$\frac{U}{u_*} = \frac{U_g}{u_*} + \frac{1}{k} \ln \frac{z}{H}, \tag{14}$$

for the surface layer and the outer layer, respectively. From Eq. 13 it follows immediately that

$$\frac{U}{u_*} = \frac{1}{k} \ln \left( \frac{z}{z_0} \right). \tag{15}$$

By equating Eqs. 13 and 14 in the overlap region there results

$$\frac{U_g}{u_*} = \frac{1}{k} \ln \left( \frac{H}{z_0} \right). \tag{16}$$

The logarithmic law is seen to apply to the $U$ component of the wind velocity throughout the depth of the boundary layer. Consider now the $y$ components

$$\frac{V \mathbf{j}}{u_*} = \psi_{1y} \left( \frac{z}{H} \frac{H}{z_0} \right) \mathbf{j}, \tag{17}$$

$$\frac{V \mathbf{j}}{u_*} = \frac{V_g \mathbf{j}}{u_*} + \psi_{2y} \left( \frac{z}{H} \right) \mathbf{j}. \tag{18}$$

Csanady (1967), Blackadar and Tennekes (1968) and Tennekes (1973) assume $\Psi_{1y} \equiv 0$. Then, Eqs. 17 and 18 yield, in the overlap region,

$$\frac{V_g}{u_*} + \psi_{2y} \left( \frac{z}{H} \right) = 0, \tag{19}$$

that is,

$$\psi_{2y} \left( \frac{z}{H} \right) = -\frac{V_g}{u_*}, \tag{20a}$$

$$\psi_{2y} \left( \frac{z}{H} \right) = \frac{B}{k}, \tag{20b}$$

where, based on measurements available in the 1960s, it is assumed that $B/k \approx 12$ (e.g., Csanady 1967). It follows from Eqs. 18 and 20a that

$$V(z) = 0 \quad (z < H). \tag{21}$$

Since, for $z = H$, $V(H) = V_g$, Eq. 18 yields

$$\Psi_{2y}(H/H) = 0 \quad (z = H), \tag{22}$$

and, by virtue of Eqs. 19 and 21,

$$V(z) = V_g \delta(H), \tag{23}$$

where $\delta$ denotes the Dirac delta function. This physically unrealistic result is an artifact of the asymptotic approach, which transforms the actual profile $V(z)$ (of which two CFD-based estimates are represented in Fig. 1) into the non-physical profile represented by Eq. 23.

**Appendix 2: Brunt–Väisälä Frequency and ‘Conventionally Neutral’ PBL Flow**

According to research results cited by, among others, Zilitinkevich et al. (2007), the stratification of the free flow, characterized by the Brunt–Väisälä frequency $N$, has a significant effect on the PBL. Based on the dependence of PBL flow upon both the buoyancy flux $\mu$ at the Earth's surface and the free-flow Brunt–Väisälä frequency $N$, Zilitinkevich et al. (2007) classify neutral and stable PBL flows into four categories: (i) “truly neutral” ($\mu = 0, N = 0$); (ii) “conventionally neutral” ($\mu = 0, N > 0$); (iii) “short-lived stable” ($\mu < 0, N = 0$); and (iv) “long-lived stable” ($\mu < 0, N > 0$). Of these four categories it is the “conventionally neutral” flow that is, in practice, of interest in structural engineering applications.
An air parcel moving vertically is subjected to a gravitational (buoyancy) force due to the variation of the air density with height, and the differential equation describing the vertical motion of the air parcel has an oscillatory solution. In the presence of a horizontal flow, the vertical oscillations result in a transport of momentum between the free flow and the PBL flow. As a result of this transport the PBL flow velocities are increased, thus causing a reduction in the height of the PBL with respect to the height of the “truly neutral” PBL. The decrease of the height $H$ as $N$ (i.e., the strength of the stratification) increases is reflected in Eq. 3.

**References**

AIJ recommendations for Loads on Buildings (2004) Chapter 6. Architectural Institute of Japan, 81 pp. http://www.aij.or.jp/jpn/symposium/2006/loads/loads.htm. Accessed 5 Jun 2015

ASCE 7-10 Standard (2010) Minimum design loads for buildings and other structures. American Society of Civil Engineers, Reston, 608 pp

Blackadar AK, Tennekes H (1968) Asymptotic similarity in neutral barotropic planetary boundary layer. J Atmos Sci 25:2015–2020

Canadian Structural Design Manual (1971) Supplement No. 4 to the National Building Code of Canada. National Research Council of Canada, Ontario, 380 pp

Csanady GT (1967) On the ‘resistance law’ of a turbulent Ekman layer. J Atmos Sci 24:467–471

Davenport AG (1965) The relationship of wind structure to wind loading. In: Symposium on wind effects on buildings and structures, vol 1. National Physical Laboratory, Teddington, Her Majesty’s Stationery Office, London, pp 53–102

Hess GD (2004) The neutral barotropic planetary boundary layer, capped by a low-level inversion. Boundary-Layer Meteorol 110:339–355

Hess GD, Garratt JR (2002) Evaluating models of the neutral, barotropic planetary boundary layer using integral measures: Part I. Overview. Boundary-Layer Meteorol 104:333–358

Kung EC (1966) Large-scale balance of kinetic energy in the atmosphere. Mon Weather Rev 94:627–640

Lettau HH (1962) Theoretical wind spirals in the boundary layer of a barotropic atmosphere. Beitr Phys Atmos 35:195–212

Simiu E, Scanlan RH (1996) Wind effects on structures, 3rd edn. Wiley, Hoboken, 688 pp

Tennekes H (1973) The logarithmic wind profile. J Atmos Sci 30:234–238

Zilitinkevich SS (2012) The height of the atmospheric planetary boundary layer: state of the art and new development. In: Fernando H et al (eds) National security and human health implications of climate change. NATO science for peace and security series C: Environmental Security. Springer. doi:10.1007/978-94-007-2430-3_13

Zilitinkevich SS, Esau I (2002) On integral measures of the neutral barotropic planetary boundary layer. Boundary-Layer Meteorol 104:371–379

Zilitinkevich SS, Esau I, Baklanov A (2007) Notes and correspondence: further comments on the equilibrium height of neutral and stable planetary boundary layers. Q J R Meteorol Soc 133:265–271
To,
The Commissioner,
Customs, Ahmedabad, Jamnagar, Kandla, Mundra

Sir,

Sub: Circulation of letters for deputation – reg.

Please find enclosed herewith the following letters regarding deputation for various posts, for information and further necessary action at your end please.

| Sr. No. | Subject | Received from (S/Shri) |
|---------|------------------------------------------------------------------------|-------------------------------------------------------------|
| 1 | A post of Stenographer (Grade-II) in the office of the Competent Authority and Administrator, SAFEMA / NDPSA, Mumbai, in the Pay Band 2 – Rs. 9300-34800 + Grade Pay Rs. 4200/- is to be filled up on deputation basis – reg. Vide letter F.No. CA/MUM/Estt./3/2008/169 dated 11.02.2015. | (P.M. Govande) Competent Authority, SAFEMA/NDPSA, Mumbai. |
| 2 | Filling up of post of Technical Officer – reg. Vide letter F.No. CESTAT/AHD/Misc/D.R./09/2013 dated 13.02.2015. | (Mohinder Singh) Deputy Registrar, CESTAT, Ahmedabad. |
| 3 | Temporary posting of Superintendents to CESTAT (Court), Ahmedabad – m/reg. Vide letter F.No. 1/Commr.(AR)/CESTAT/AHD/06/2013 dated 05.01.2015. | (Raju), Commissioner (AR), CESTAT, Ahmedabad. |

Yours faithfully,
(M. Gnanasundaram)
Additional Commissioner
Encl: as above

Establishment Circular No. 1/2015

A post of Stenographer (Grade-II) in the office of the Competent Authority and Administrator, SAFEMA / NDPSA, Mumbai, in the Pay Band 2 – ₹ 9300-34800 + Grade Pay ₹ 4200/-, is to be filled up on deputation basis from amongst the eligible and willing officers of the Central and State Governments. The post belongs to the General Central Service, Group ‘C’, Non-Gazetted.

Eligibility:

I. Stenographers under the Central or State Government Departments or Organizations:
(i) holding an analogous post; or
(ii) with 8 years’ regular service in the grade of Stenographer Grade-III; and
(iii) possessing a speed of 100 words per minute in stenography (English).

NOTE: However, in the absence of candidates with sufficient service, officials with lesser service will be considered.

The period of deputation, including the period of deputation in another ex-cadre post held immediately preceding this appointment in the same or some other Organization or Department of the Central Government, shall ordinarily not exceed 3 years. The maximum age limit for appointment on deputation shall not exceed 56 years as on the closing date of receipt of applications. The selected candidates will have the option to draw the existing pay with deputation allowance or the pay in the new pay scale, if selected to a higher post.

Bio-data of the eligible and willing officers may be called for and forwarded to this office along with vigilance clearance and ACR grading for the last five years, so as to reach this office by 10.03.2015.

(P.M. GOVANDE)
COMPETENT AUTHORITY
SAFEMA/NDPSA, MUMBAI

Copy to:
1. The Chief Commissioner of Central Excise, Mumbai Zone-I/II/Nagpur/Pune/Ranchi/Ahmedabad Zone.
2. The Chief Commissioner of Customs, Mumbai-I/II/III/Ahmedabad Zone.
3. The Chief Commissioner of Income Tax, Administration In-charge, Mumbai, Ahmedabad
4. The Deputy Director, Enforcement Directorate, Mumbai
5. The Addl. D.I.G., C.B.I., Mumbai.
6. The Dy. Director, NCB, Mumbai.
7. The Narcotics Commissioner, Gwalior.
8. The Addl. D.G., Central Excise Intelligence, Mumbai.
9. The Addl. D.G., D.R.I., Mumbai
10. The Addl. D.G., C.B.I., Mumbai
11. The Chief Engineer, CPWD, Mumbai
12. The Dy. Secretary, General Administration in Mantralaya, Mumbai
13.
The Under Secretary, CA Cell, North Block, New Delhi, with a request to forward the circular to the Web-Master, CBEC website, Dte. of Systems, New Delhi, to upload it on the CBEC website (www.cbec.gov.in) at the earliest.
14. Circular File.
15. The Deputy Secretary (Coordination), Competent Authority Cell, Ministry of Finance, Department of Revenue, North Block, New Delhi – 110 001. It is requested that a wider circulation of this letter may please be given among the other Departments/Sections in the Ministry to enable this office to fill up the post on deputation.

No. F. CESTAT/AHD/Misc/D.R./09/2013 /80

Sub: Filling up of post of Technical Officer – Reg.

Respected Sir,

I am directed to state that the Registrar, CESTAT, New Delhi, vide letter No. 42/CESTAT/RR/2000-Admn.Vil.II dated 12.02.2015, has invited applications for the post of Technical Officer, to be filled on deputation basis at CESTAT Mumbai/Chennai/Ahmedabad.

Therefore, in view of the above, please find enclosed herewith a copy of letter/circular No. 42/CESTAT/RR/2000-Admn.Vil.II dated 12.02.2015, which may kindly be circulated amongst the concerned suitable officers of your commissionerate/zones.

Yours faithfully,
(Mohinder Singh)
Deputy Registrar
Encl.: As above.

To
All Commissioners/Zonal Chief Commissioners of Gujarat Zone

Copy to:
1. The Hon’ble Registrar, CESTAT, New Delhi for information.
2. The Office of the Authorised Representative, CESTAT, Ahmedabad.
3. Office Copy/Guard file.

Applications are invited for the post of Technical Officer, to be filled on deputation basis in various benches of this Tribunal:

| Post, Pay Band & Grade Pay, No. of posts & place of posting | Eligibility conditions |
|-------------------------------------------------------------|------------------------|
| Technical Officer Rs. 9300-34800/- (PB-2) Grade Pay- Rs. 4,600/- (Mumbai/Chennai/Ahmedabad) | Superintendents of Central Excise and Appraisers of Customs, or Inspectors of Central Excise/Examiners of the Customs Department, having 8 years' regular service. |

The maximum age limit for appointment on deputation shall be 56 years. Applications from eligible/willing candidates may be forwarded with the bio-data given in Annexure-I, along with: (i) ACR dossiers for the last 3 years (attested photocopies only); (ii) Vigilance Clearance Certificate; (iii) Identity Certificate; (iv) Cadre Clearance Certificate, showing no objection of the controlling authority to relieving the candidate in the event of selection; and (v) Certificate of major/minor penalties, if any, imposed during the last 10 years (in case a penalty has been imposed, a certificate to that effect may be furnished).

Applications should reach the Registrar, Customs, Excise & Service Tax Appellate Tribunal, West Block No. 2, R.K. Puram, New Delhi – 66 within 45 days from the date of receipt of this circular. No action will be taken by this Tribunal on applications received incomplete or without dossiers/vigilance clearance.

(A. Mohan Kumar)
Registrar

To:
Hon’ble Member (J), CESTAT, Chennai – for giving wide publicity to the circular.
Hon’ble Member (J), CESTAT, Ahmedabad – for giving wide publicity to the circular.
Hon’ble Member (J), CESTAT, Mumbai – for giving wide publicity to the circular.
Zonal Chief Commissioners/Commissioners/Assistant Commissioners of Central Excise, Customs & Service Tax
CESTAT Web site

F.No. I/Commr.(AR)/CESTAT/AHD/06/2013 Date 05.01.2015

To,
The Chief Commissioner, Central Excise Zone, Ahmedabad / Vadodara.
The Chief Commissioner, Customs, Ahmedabad

Sir,

Sub: Temporary posting of Superintendents to CESTAT (Court), Ahmedabad – m/reg.

Kindly refer to D.O. letter No. 390/Misc./25/2013 dated 03.10.2013 (copy enclosed for ready reference) from the Member, CBEC, New Delhi, addressed to all Chief Commissioners of Customs and Central Excise. In view of this D.O. letter, one Supdt. each from your office was posted to CESTAT (Court), Ahmedabad, for a period of six months to identify and bunch the pending cases in order to reduce the pendency. On completion of the six-month tenure, the officers have been sent back to their parent Commissionerates.

The undersigned had a meeting with the Member (J), CESTAT, Ahmedabad, who desired to repeat the exercise of posting Supdts. in CESTAT to reduce the pendency. Therefore, it is requested to post one Superintendent each from your Zone so as to reduce the pending cases, as desired by the Member, CBEC, New Delhi in the above-referred letter. Kindly do the needful in the matter.

Encl: As above.

Yours faithfully,
(RAJU)
Commissioner (AR), CESTAT, Ahmedabad.

Copy submitted to the Member (J), CESTAT, Ahmedabad for information.

Dear Chief Commissioner,

The rising pendency of appeals in the Tribunal is a cause of concern. The revenue locked up on account of pending litigation has increased substantially, and immediate measures are required to ensure that revenue due to the Government is realized by early disposal of cases. Some of the issues pending in appeal are of a recurring nature, where identical or substantially similar issues for different periods are in dispute before the Tribunal, while some of the appeals could be covered by decisions of the Supreme Court or High Courts. Therefore, there is a need to classify and bunch appeals so that the same are disposed of, batch-wise, saving considerable time of the Tribunal benches.

The President of the Tribunal has informed that the CESTAT Registry and supporting staff are not equipped to identify related appeals for bunching the cases issue-wise or on the basis of covered cases. He has requested the posting of six Superintendent-level officers from the Department on loan basis at the Mumbai and New Delhi benches for a period of six months for identifying the cases and grouping them to ensure speedy disposal of cases. Similarly, three Superintendent-level officers may be posted to the benches at Chennai, Bangalore, Kolkata and Ahmedabad for a period of six months as a temporary measure.

You are, therefore, directed to post two Superintendents each from the Central Excise, Service Tax and Customs Zones at the Delhi and Mumbai Benches, who are well conversant with the subject, for a period of six months to the Tribunal Registry. Similarly, for the Tribunal Benches at Ahmedabad, Bangalore, Chennai and Kolkata, three Superintendents may be posted for a period of six months for this work. A copy of the deputation order may be sent to the office of the Chief Commissioner (AR), CESTAT, New Delhi, and the same may be endorsed to the Joint Secretary (Judicial Cell) for information. The posting of the Superintendents as mentioned above may be done in coordination/consultation with counterparts in the same city to ensure that the desired number of officers is posted for the job.

With best wishes,
Yours sincerely,
(Sandhya Baliga)

Chief Commissioners of Customs, Central Excise & Service Tax, Delhi, Mumbai, Chennai, Kolkata, Bangalore, Ahmedabad
Tap into a market with $3.5 billion in annual buying power.

- SPRAYFOAM Professional
- Membership Directory & Buyers’ Guide
- SPRAYFOAM PRO Newswire

FOR MORE INFORMATION, PLEASE CONTACT: www.sprayfoam.org

Join a Growing Industry!

• The SPF industry spends $3.5 billion per year on products and services.
• Our members account for half of the industry’s total sales.
• The US insulation market is expected to exceed $10 billion by 2018.
• Residential construction has become the leading insulation consumer, accounting for about half of the total US insulation market.
• Industry revenue is projected to grow at an annual rate of 2.4% over the next five years, reaching $12.6 billion by 2021.

Reach Key Players...

...by getting in front of decision-makers!

Architects
Contractors
Purchasing & Marketing Directors
Engineers
Building Owners
Technical Directors & Distributors

Contact your Naylor representative today!

## 2018 Content Calendar*

| Issue | Features/Topics | Deadlines |
|----------------|---------------------------------------------------------------------------------|------------------------------------------------|
| **Spring 2018**| Conference Preview; Featured Speaker: Reid Ribbel: NRCA; Keynote Speaker: Jeff Havens: Workplace Satisfaction; Conference Sponsors; Rick Duncan on Roofing & Insulation Codes updates; Laverne Daiglish on Air Barrier Association of America; Alaska: Sprayfoam distributed resources & demand response; Julie Fornaro: Marketing Sprayfoam in the digital age | Reservation Deadline: November 29, 2017; Materials due: December 1, 2017 |
| **Summer 2018**| Roofing Shows; Flood Awareness; Severe Weather Prep; Low GWP Blowing Agents; LCA/EPD Revision Project; George Thompson: Chemical Safety; National Construction Week - OSHA Contractor Safety Program; Post-Conference Recap | Reservation Deadline: March 9, 2018; Materials due: March 13, 2018 |
| **Fall 2018** | Pacific Coast Builders Show; Getting ready for the 2019 California Building Efficiency Standards, Title 24; October is National Fire Protection Week; How un-vented attics can reduce the spread of wildfires; Ignition barriers and thermal barriers; Rig fire safety programs; How Design Teams Use Sprayfoam to Solve Problems | Reservation Deadline: June 27, 2018; Materials due: June 29, 2018 |
| **Winter 2018**| Veterans Day: Veteran-Owned Businesses in the Sprayfoam Industry; Transition between military service and running a business; PCP Reminders about Renewals & Certifications; Greenbuild: Architectural Standards/Resources; Net-Zero Messaging; SPF Basics | Reservation Deadline: September 20, 2018; Materials due: September 24, 2018 |

### Departments:

- Executive Director’s Corner
- President’s Post
- Letters from the SPF Community
- Foam Business News
- SPFA Today
- Ask the Expert
- Air Barrier Association of America (ABAA) News
- Safety First
- Speaking Sensibly
- Legislative Update
- Project Spotlight (National Industry Excellence Award Winners)
- Behind the Foam
- Technology’s Turn/Business Sense

*Editorial Plan is tentative and subject to change*

## Net Advertising Rates

All rates include an ad link in the digital edition of the directory.
**Full-Color** | Position | Rate | |---------------------------------|--------| | Double Page Spread | $3,379.50 | | Outside Back Cover | $3,549.50 | | Inside Front or Inside Back Cover | $3,199.50 | | Full Page | $2,699.50 | | 2/3 Page | $2,309.50 | | 1/2 Page | $1,599.50 | | 1/3 Page | $1,049.50 | | 1/4 Page | $659.50 | | 1/6 Page | $449.50 | | 1/8 Page | $379.50 | **Black-and-White** | Position | Rate | |---------------------------------|--------| | Full Page | $1,699.50 | | 2/3 Page | $1,319.50 | | 1/2 Page | $1,089.50 | | 1/3 Page | $809.50 | | 1/4 Page | $489.50 | | 1/6 Page | $419.50 | | 1/8 Page | $329.50 | Naylor charges a $50 artwork surcharge for artwork creation or changes. This additional fee will appear on your final invoice if the artwork submitted is not publishing ready. **Digital Edition Branding Opportunities** - **Skyscraper** | $825 - **Sponsorship Max** | $720 - **Sponsorship** | $515 - **Toolbar** | $360 Online Specifications - For more information, visit: www.naylor.com/clientSupport-onlineGuidelines.asp Advertiser indemnifies Naylor, LLC and the Association against losses or liabilities arising from this advertising. Naylor, LLC assumes no liability whatsoever, except to the extent of a one-time paid advertisement of the same specification, in the next or similar publication, if any proven or admitted errors or omissions have occurred. Payment is due upon receipt of the invoice. Interest shall be charged at 2% per month compounded to yield 26.82% per year on overdue accounts. Revisions to previously submitted ad copy are subject to additional charges. A charge of $50.00 will be levied for returned checks. In the event of a contract cancellation, the advertiser or agency agrees to repay Naylor, LLC any discounts granted for multiple insertions less any discount applicable for the number of insertions completed in the contract. All cancellations must be received in writing by Naylor, LLC prior to the advertising sale deadline. All premium positions are non-cancelable. Prices are net of agency commission. Ads may also appear in another version of the publication(s). (Rates as of July 2017) ## Net Advertising Rates All rates include an ad link in the digital edition of the magazine or directory. 
### Full-Color Rates | | 1x | 2-3x | 4x | Directory | |----------------------|--------|--------|--------|-----------| | **Double Page Spread** | $3,379.50 | $3,209.50 | $3,039.50 | $3,379.50 | | **Outside Back Cover** | $3,549.50 | $3,409.50 | $3,279.50 | $3,549.50 | | **Inside Front or Inside Back Cover** | $3,199.50 | $3,059.50 | $2,929.50 | $3,199.50 | | **Full Page** | $2,699.50 | $2,559.50 | $2,429.50 | $2,699.50 | | **2/3 Page** | $2,309.50 | $2,189.50 | $2,079.50 | $2,309.50 | | **1/2 Page** | $1,599.50 | $1,519.50 | $1,439.50 | $1,599.50 | | **1/3 Page** | $1,049.50 | $999.50 | $939.50 | $1,049.50 | | **1/4 Page** | $659.50 | $629.50 | $589.50 | $659.50 | | **1/6 Page** | $449.50 | $429.50 | $399.50 | $449.50 | | **1/8 Page** | $379.50 | $359.50 | $339.50 | $379.50 | ### Black-and-White Rates | | 1x | 2-3x | 4x | Directory | |----------------------|--------|--------|--------|-----------| | **Full Page** | $1,699.50 | $1,609.50 | $1,529.50 | $1,699.50 | | **2/3 Page** | $1,319.50 | $1,249.50 | $1,189.50 | $1,319.50 | | **1/2 Page** | $1,089.50 | $1,039.50 | $979.50 | $1,089.50 | | **1/3 Page** | $809.50 | $769.50 | $729.50 | $809.50 | | **1/4 Page** | $489.50 | $469.50 | $439.50 | $489.50 | | **1/6 Page** | $419.50 | $399.50 | $379.50 | $419.50 | | **1/8 Page** | $329.50 | $309.50 | $299.50 | $329.50 | Naylor charges a $50 artwork surcharge for artwork creation or changes. This additional fee will appear on your final invoice if the artwork submitted is not publishing ready. ### Digital Edition Branding Opportunities | | Price | |------------------------|--------| | **Skyscraper** | $825 | | **Sponsorship Max** | $720 | | **Sponsorship** | $515 | | **Leaderboard (Magazine Only)** | $565 | | **Toolbar** | $360 | Online Specifications - For more information, visit: www.naylor.com/clientSupport-onlineGuidelines.asp *Please ask your Naylor sales representative for information regarding additional advertising opportunities within the construction industry.* EXTEND YOUR PRINT ADVERTISING INVESTMENT WITH THE UNIQUE BENEFITS OF DIGITAL MEDIA *SPRAYFOAM Professional* is also available to members as a fully interactive digital magazine. Our digital magazine is mobile responsive and HTML5 optimized, providing readers with an exceptional user experience across all devices. THE DIGITAL MAGAZINE LETS YOU: - Include ads on an HTML5 and mobile responsive platform - Link your ad to the landing page of your choice - Interact with viewers to facilitate the buying process - Generate an immediate response from customers - Members and readers receive each issue via email, and each new issue is posted on SPFA’s website. A full archive of past issues is available, ensuring longevity for your online presence Mobile & Desktop Responsive HTML Reading View In-Magazine Digital Options (HTML reading view) These standalone ad options are placed between article pages on the HTML reading view of the digital magazine and are visible on all device types. **HTML5 Ad | $1,100** This mobile responsive ad option gives you the freedom to include text, images, hyperlinks and video across a variety of devices. Full design must be provided by the advertiser at this time. **Digital Video Sponsorship | $300** The video sponsorship option displays a video, 50–70 words of summary content and a hyperlink to deliver your message to target audiences. **Digital Inserts** Your message appears as an image-based insert, either in between key articles, or placed at the back of the digital magazine. 
- Half-Page Insert | $500
- 2/3 Page Outsert | $650

---

Naylor charges a $50 artwork surcharge for artwork creation or changes. This additional fee will appear on your final invoice if the artwork submitted is not publishing ready. For the latest online specs, please visit [www.naylor.com/onlinespecs](http://www.naylor.com/onlinespecs)

Print Advertising Specifications

Directory/Magazine (Trim Size: 8.375" x 10.875")
- Double Page Spread Bleed: 17" x 11.125"
- Full Page Bleed: 8.625" x 11.125"
- Full Page No Bleed: 7" x 9.5"
- 2/3 Page Horizontal: 7" x 6.333"
- 2/3 Page Vertical: 4.583" x 9.5"
- 1/2 Page Horizontal: 7" x 4.583"
- 1/2 Page Long Vertical: 3.333" x 9.5"
- 1/2 Page Vertical: 4.583" x 7"
- 1/3 Page Square: 4.583" x 4.583"
- 1/3 Page Horizontal: 7" x 3"
- 1/3 Page Vertical: 2.166" x 9.5"
- 1/4 Page Horizontal: 4.583" x 3.333"
- 1/4 Page Vertical: 3.333" x 4.583"
- 1/6 Page Horizontal: 4.583" x 2.166"
- 1/6 Page Vertical: 2.166" x 4.583"
- 1/8 Page Horizontal: 3.333" x 2.166"
- 1/8 Page Vertical: 2.166" x 3.333"

Roster (Trim Size: 5.75" x 8.5")
- Double Page Spread Bleed: 11.75" x 8.75"
- Full Page Bleed: 6" x 8.75"
- Full Page No Bleed: 5" x 7.5"
- 2/3 Page Horizontal: 5" x 4.916"
- 1/2 Page Horizontal: 5" x 3.666"
- 1/2 Page Vertical: 2.333" x 7.5"
- 1/3 Page Horizontal: 5" x 2.333"
- 1/3 Page Vertical: 2.333" x 4.916"
- 1/4 Page Horizontal: 5" x 1.666"
- 1/4 Page Vertical: 2.333" x 3.666"
- 1/6 Page Horizontal: 2.333" x 2.333"
- 1/8 Page Horizontal: 2.333" x 1.666"

Specs for Outserts/Inserts

Directory/Magazine
- 1 Pg / 1 Surface: 8.375" x 10.875"
- 1 Pg / 2 Surface: 8.375" x 10.875"
- 2 Pg / 4 Surface: 8.375" x 10.875"
- Heavy Card Stock Insert: 8.25" x 10.75"
- Postcards: 6" x 4.25"
- Postal flyer sheets: 8.5" x 11"

Roster
- 1 Pg / 2 Surface: 5.75" x 8.5"
- 3 Pg / 6 Surface: 5.75" x 8.5"
- Postal flyer sheets: 5.75" x 8.5"
- Heavy Card Stock Insert: 5.25" x 8.25"

Artwork Requirements

All digital color and greyscale artwork must be supplied at 300 dpi. Line art must be supplied at 600 dpi. High-res PDF, EPS, TIFF and JPEG files are accepted. Images from the Web are not suitable for printing. All color artwork must be in CMYK mode; black-and-white artwork must be in either greyscale or bitmap mode. RGB mode artwork is not accepted and if supplied will be converted to CMYK mode, which will result in a color shift. All screen and printer fonts as well as linked images must be supplied if not embedded in the file.

Ad Material Upload: Go to the Naylor website at www.naylor.com

Proofs and Revisions: Naylor charges a $50 artwork surcharge for artwork creation or changes. This additional fee will appear on your final invoice if the artwork submitted is not publishing ready.

Note: Text placed outside the live area within any full-page or DPS ads may be cut off. Please keep text within the live area at all times.
- Directory/Magazine: DPS Live Area: 15.417" x 9.5"; Full-Page Live Area: 7" x 9.5"
- Roster: DPS Live Area: 10.75" x 7.5"; Full Page Live Area: 5" x 7.5"

Digital Edition - For more information, visit: www.naylor.com/clientSupport-onlineGuidelines.asp

Digital Edition - www.naylornetwork.com/spf-directory

In addition to print, the *Membership Directory & Buyers’ Guide* is available in a digital version. Viewers can flip through the pages, forward articles to colleagues and click ads to be redirected to advertisers’ websites. *The directory is emailed to readers as well as posted on SPFA’s website. 
An archive of directories is available, securing your ad a lasting online presence.* **Readers can:** - Bookmark pages and insert notes - Keyword search the entire directory - Navigate and magnify pages with one click - Read online or download and print for later - View instantly from most smartphones and tablets - View archives and find a list of sections for one-click access **Extend your advertising investment with digital media:** - Link your ad to the landing page of your choice - Increase website traffic - Interact with viewers to help the buying process - Generate an immediate response from customers The Digital Edition of our 2016-2017 *Membership Directory & Buyers’ Guide* averaged nearly 45,000 pageviews! **Ad Positions** **Digital Sponsorship Max | $720** **Digital Sponsorship | $515** Your message will be prominently displayed directly across from the cover of the directory. Animation and video capabilities are available. Video capabilities not available for Sponsorship Max. **Digital Toolbar | $360** Your company name is a button on the toolbar, found in the top-right corner of every page next to frequently used navigational icons. When viewers click the button, a box containing text about your company and a link to your website appears. **Digital Skyscraper | $825** The skyscraper ad is displayed the entire time the digital edition is open, giving your message constant and lasting exposure. Online Specifications - For more information, visit: www.naylor.com/clientSupport-onlineGuidelines.asp Naylor charges a $50 artwork surcharge for artwork creation or changes. This additional fee will appear on your final invoice if the artwork submitted is not publishing ready. Past Advertisers Our communications program is made possible solely through advertiser support. We appreciate the investment that our advertisers make with the Spray Polyurethane Foam Alliance and strongly encourage our members to do business with vendors that support our association. Members know they can confidently select the quality products and services featured within the official resources of SPFA. Accella Polyurethane Systems, LLC BASF Corporation Bayer Material Science Building Performance Institute Building Professionals Bullard Burtin Polymer Laboratories, Inc. / Foametix C.J. Spray, Inc. Certainteed Corporation CertainTeed Machine Works Chemours Company (Dupont) Christian Fabrication Chromaflo Technologies Corp Coating & Foam Solutions, LLC Coating Holdings Ltd. Convenience Products Covestro Demilec (USA), LLC Diamond Liners, Inc. Dow Building Solutions Dr. Energy Saver Exova Fi-Foil Company, Inc. Foam Material and Equipment Foam Supplies, Inc. Gaco Western Inc. Global Specialty Products - USA, Inc. Graco Honeywell Performance Materials and Technologies Huntsman Icynene, Inc. IDI Distributors, Inc. of Minnesota Inside Out Maintenance, Inc. International Fireproof Technology, Inc. International Pump Manufacturing, Inc. JobProTechnology Johns Manville Insulation KARNAK Corporation LaPolla Industries, Inc. Light Engineering, Inc./ Next Generation Power Engineering Lucas Products MCC Equipment & Service Center NCFI Polyurethanes Oak Ridge Foam & Coating Systems, Inc. Polyurethane Machinery Corporation (PMC) Premium Spray Products, Inc. Quadrant Urethane Technologies R&D Services, Inc. R.K. Hydro-Vac, Inc. RHH Foam Systems Inc. Rhino Linings Corporation Schmidt & Dirks Design Inc. SES Foam Shanghai Dongda Polyurethane Co., Ltd. Sharemy Sales & Service Smart Choice Insulation & Roofing, Inc. 
Specialty Products, Inc. (SPI) Spray Foam Distributors Spray Foam Equipment & Manufacturing Spray Foam Gear Spray Foam Nation/Spray Polyurethane Par SprayWorks Equipment Group, LLC Therma-Stor, LLC Thermo Foam Systems Tiger Foam Insulation Ultra-Aire Vitaflex Yutzy Enterprises About the eNewsletter Now more than ever, professionals consume information on the go. Our *SPRAYFOAM PRO Newswire* allows members to stay informed about timely industry topics and association news whether they are in the office or on the road. Enjoy the benefits of a targeted eNewsletter: - Delivers your message directly to the inbox of sprayfoam industry decision-makers every other Tuesday - In addition to being delivered to SPFA members, opt-in subscription means that professionals in the market for your products and services see your message - Frequently forwarded to others for additional exposure - Cross-promoted in other SPFA publications and communications pieces - Directs visitors to the landing page of your choice to facilitate the purchasing process - Archives are accessible for unlimited online viewing - Limited available ad space makes each position exclusive - Change artwork monthly at no additional cost to promote time-sensitive offers and events - Reach SPFA members who represent $400 million in annual buying power - The *SPRAYFOAM PRO Newswire* gets 3,000 impressions per month --- **Horizontal Banner (468 x 60 pixels)** 12 Months | $4,650 - Eight spots available – NO ROTATION - Located between popular sections of the eNewsletter --- **Distributed Biweekly** Sections include: - Hot Topics - News Briefs - Upcoming Events - Industry News - Member News --- **Online Specifications** **Horizontal Banner** - 468 x 60 pixels - JPG only (no animation) - Max file size 100 KB --- Naylor charges a $50 artwork surcharge for artwork creation or changes. This additional fee will appear on your final invoice if the artwork submitted is not publishing ready.
Is RSA Really Secure? Using Repunits We Prove Otherwise

Mehran Davoudi
Ayaz Isazadeh
Department of Computer Science
Tabriz University, Tabriz, IRAN
firstname.lastname@example.org
email@example.com

Abstract

Prime numbers play a critical role in encryption algorithms. The security of RSA relies on an assumption: we hope that there is no algorithm capable of factorizing a big number in polynomial time. An RSA public key is composed of a few large prime factors for security. We use this fact as a constraint to simplify the problem of factorizing an RSA public key: "In an RSA public key, some factors are nearly all factors." In this paper we identify repunit numbers\(^1\) as an interesting set of numbers. We show that the set of all repunits contains every prime number as a factor. We provide some theorems that show the relation between repunits and prime numbers. As an advantage of using the abbreviated form of repunits, we note their low space complexity. We then introduce our method to factorize big numbers based on repunits and their abbreviated form.

1. Introduction

Primes, primes, primes. Why are they so important to us? Why is factorizing numbers into primes so interesting? We live in a digital world and we need to feel secure. We want to emphasize that we owe this security to certain features of prime numbers which have not even been proved yet! We feel secure even though, mathematically, there is no proof denying the existence of a polynomial-time factorization algorithm.

1.1 Problem

In RSA we trust that there is no fast algorithm to factorize a big number into primes in polynomial time. Factorization algorithms are designed to factorize numbers into primes. But in RSA, we are satisfied even if we find only some factors of the big number\(^2\). In fact, in RSA public keys some factors are nearly all factors, because the factors are very large and there are very few of them (we can take their number to be 2). With this idea in place, we state the problem as: how can we factorize a big number into some of its factors?

This paper claims a new method (the Repunit Method) for factorization algorithms. We show the relation between repunits and prime numbers. Having introduced this relation, we propose some methods to factorize the big numbers that are typically used as RSA public keys.

Few papers have been published in the domain of repunit numbers. Most of the discussion around repunits focuses on prime repunits; in fact, the race to find ever bigger prime repunits absorbs most of the energy of repunit researchers! [1][5][6] Chris K. Caldwell and Harvey Dubner [1] used repunits to find unique period primes. S. Yates [5, 6] has also done a great deal of work on unique period primes. A. Slinko [3] used repunits to find absolute primes; he also presented some useful properties of repunits and some theorems that are useful for testing the primality of a repunit. Unfortunately, no paper shows the strong relation between repunits and primes. This is the topic we investigate in this paper.

1.2 Terminology

In this section we introduce the elementary definitions and abbreviations used in the paper.

\(^1\)Numbers like $1, 11, \ldots$ or formally $R_n = (10^n - 1)/9$ for $n \geq 1$
\(^2\)A big number is considered as a big composite number with few big factors.

• **Repunit number.** In recreational mathematics, a repunit is a number like 11, 111, or 1111 that contains only the digit 1. The term stands for repeated unit and was coined in 1966 by A. H. Beiler. 
The repunits are defined mathematically as:
\[ R(n) = \frac{10^n - 1}{9} \quad \text{for } n \geq 1 \]

• **Big number.** A big number is considered as a big composite number with few big factors. In this paper we assume our big numbers have no factors of 2, 3, or 5. Finding these factors is easy, so we can remove them first.

## 2 Previous Works

This chapter contains the basic algorithms and theorems on which the idea of this paper is based. First we review some of the most important available *prime factorization* algorithms. Then we state a few familiar theorems with their proofs so we can cite them in the next sections.

### 2.1 Simple Factorization Algorithm

The simplest algorithm to factorize a number is a *brute force* algorithm. Algorithm 2.1 checks whether the number is divisible by any of the numbers below it.

```
SimpleFactorizing(n:integer):Array
  AnswerList = ∅
  i = 2
  Do
    If n mod i = 0 Then
      AnswerList = AnswerList ∪ {i}
    End If
    i = i + 1   // Get next divisor to test.
  Loop While (i < n)
  Return AnswerList
```

**Algorithm 2.1** The famous trial-division algorithm, going back to Euclid, is very simple and forms the basis of our methods.

**Proposition 2.1** Consider the optimizations we can apply to Algorithm 2.1 to improve its running time. First, we can decrease the upper bound on the candidate divisors from $n$ to $\sqrt{n}$. A better optimization is to select more appropriate divisors: there is no need to test every number below $n$ to find its factors. If we test only the primes below $n$ (or even below $\sqrt{n}$), the algorithm remains correct and nothing is lost. But then we need a ready-made list of primes, and we are in the middle of finding primes; how could we already have such a list? This is the problem that forces us to test all divisors below $n$. The Repunit Method tries to use an *implicit* list of primes while finding primes.

### 2.2 Finding GCD Algorithms

Calculating the GCD of two numbers is important for us. We use it as a primitive operation in the next sections, so we need to know its computational complexity and have an appropriate algorithm for it.

**Theorem 2.2** The complexity of finding $GCD(n,m)$ for $n \leq m$ is $O(\log n)$.

Here is the famous *Euclid's algorithm* to calculate $GCD(m,n)$:

```
EuclidGCD(m, n)
  If n = 0 Then
    Return m
  Else
    Return EuclidGCD(n, m mod n)
```

**Algorithm 2.2** Euclid's algorithm for calculating the GCD.

**Theorem 2.3** The complexity of calculating $GCD(n,m)$ using Euclid's algorithm, for $n \leq m$, is $O(\log n)$:
\[ O(GCD(m,n)) = O(\log n) \]

**Proof.** It can be proved in various ways; all of the proofs are quite direct and interesting. A good proof using Fibonacci numbers appears in [4].

The *binary GCD algorithm* is described by Knuth [2] as a practically fast algorithm: “The binary GCD algorithm is an algorithm which computes the greatest common divisor of two nonnegative integers. It gains a measure of efficiency over the ancient Euclidean algorithm by replacing divisions and multiplications with shifts, which are cheaper when operating on the binary representation used by modern computers. This is particularly critical on embedded platforms that have no direct processor support for division…”
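For readers who want to experiment, the two routines above translate directly into a few lines of Python. The function names are ours, not the paper's, and the trial-division variant already incorporates the $\sqrt{n}$ bound from Proposition 2.1; this is a minimal sketch, not an optimized implementation.

```python
from math import isqrt

def simple_factorizing(n: int) -> list[int]:
    """Trial division (Algorithm 2.1) with the sqrt(n) bound
    of Proposition 2.1; returns the prime factors of n."""
    factors = []
    i = 2
    while i <= isqrt(n):
        while n % i == 0:      # divide out each prime completely
            factors.append(i)
            n //= i
        i += 1
    if n > 1:                  # the remaining cofactor is prime
        factors.append(n)
    return factors

def euclid_gcd(m: int, n: int) -> int:
    """Euclid's algorithm (Algorithm 2.2), written iteratively."""
    while n != 0:
        m, n = n, m % n
    return m

assert simple_factorizing(84) == [2, 2, 3, 7]
assert euclid_gcd(111111, 1111) == 11
```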
2.3 Period of a Number

The reciprocal of every prime \( p \) (other than two and five) has a period; that is, the decimal expansion of \( 1/p \) repeats in blocks of some set length [1]. This is the *period of \( p \)*; for example:
\[ \frac{1}{11} = 0.\overline{09} \quad \frac{1}{7} = 0.\overline{142857} \quad \frac{1}{13} = 0.\overline{076923} \]

The basic algorithm to find the period of a number is the usual long-division method from high school: once a remainder repeats, the quotient digits produced in between form the period. But how long must we stay in the loop; is an endless loop even possible? Theorem 3.3 shows that every such number has its own finite period.

3 The Repunit Method

In this section we introduce our factorization method based on repunit numbers. We then present some related algorithms that make it practical to work with big repunit numbers.

3.1 Usages of Repunits

This section starts by proposing Theorem 3.1, which shows the relation between repunits and prime numbers. We then define a new notion, the *admissible repunit* of a number. Finally we introduce the *Repunit Method*, once the necessary definitions are in place.

3.1.1 Repunits and Primes Relationship

**Theorem 3.1** For a given number \( p \) whose prime factors exclude 2, 3, and 5, there is at least one repunit number \( R_m \) with \( p \mid R_m \).

*Proof.* Let us define \( x \) as follows:
\[ x = \frac{1}{p} \]
As \( x \in \mathbb{Q} \), we can write \( x \) in decimal form with a preperiod of length \( n \) and a period of length \( m \):
\[ x = 0.b_1 b_2 \ldots b_n \overline{a_1 a_2 \ldots a_m} \]
Theorem 3.3 shows that \( m \) exists and is finite. Now we recover \( p \) from the decimal form \( x \) by the simple high-school method: multiplying by \( 10^{n+m} \) and by \( 10^n \) and subtracting,
\[ (10^{n+m} - 10^n)\,x = N, \]
where \( N \) is the difference between the integer with digits \( b_1 \ldots b_n a_1 \ldots a_m \) and the integer with digits \( b_1 \ldots b_n \). Hence
\[ x = \frac{N}{10^n (10^m - 1)}. \]
We know that \( x = 1/p \), so
\[ 10^n (10^m - 1) = Np, \qquad \text{i.e.,} \qquad 9 \cdot 10^n R_m = Np. \]
By assumption \( p \) has no factor 2 or 5, so \( p \) is coprime to \( 10^n \); and since 3 is not a divisor of \( p \), \( p \) is coprime to 9 as well. Therefore \( p \) must divide \( R_m \); in other words \( R_m = kp \) for some natural number \( k \). So we have constructed a repunit \( R_m \) with \( p \mid R_m \). (In fact, since \( \gcd(p, 10) = 1 \), the expansion is purely periodic, so one may take \( n = 0 \).)

We use the notation **RelatedRepunit(n)** for a function that returns the related repunit of a given number \( n \). Algorithm 3.2 shows how to compute **RelatedRepunit(n)**.

**Corollary 3.2** Reading Theorem 3.1 the other way around: since there is a repunit for each prime number, the set of all repunits contains every prime number as a factor. As a result of Theorem 3.1, the following set (the **Repuniset**) covers all prime numbers in the world:
\[ Repuniset = \{11, 111, 1111, \ldots\} \]
and each prime \( p \) (other than 2, 3, and 5) satisfies
\[ p \mid \prod_{i=1}^{\infty} R_i \]
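Theorem 3.1 also gives a cheap way to compute the related repunit without long division over huge numbers: \( n \mid R_m \) exactly when \( 10^m \equiv 1 \pmod{9n} \). The algorithms of the next section formalize the long-division view; here is an equivalent modular-arithmetic sketch in Python (the function names are ours, not the paper's):

```python
def related_repunit_index(n: int) -> int:
    """Smallest m with n | R_m, i.e. 9n | 10^m - 1.
    Assumes gcd(n, 10) = 1, as in the paper."""
    m, r = 1, 10 % (9 * n)
    while r != 1:
        r = (r * 10) % (9 * n)   # next power of 10 modulo 9n
        m += 1
    return m

def related_repunit(n: int) -> int:
    """Algorithm 3.2: construct the related repunit itself."""
    return (10 ** related_repunit_index(n) - 1) // 9

assert related_repunit_index(11) == 2    # R_2 = 11
assert related_repunit(7) == 111111      # 7 | 111111
```

For \( n = 7 \) this returns \( m = 6 \), matching \( 1/7 = 0.\overline{142857} \) with period 6.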
3.1.2 Related Repunit Algorithms

In this section we examine the algorithms that find the related repunit of a number. First, Algorithm 3.1 shows how to find the number of digits of the related repunit; having the **RelatedRepunitIndex(n)** function makes it easy to construct the related repunit itself, as Algorithm 3.2 shows.

The bottleneck of Algorithm 3.1 lies in these lines:
\[ If(Remainder \in RemaindersList) \]
\[ RemaindersList = RemaindersList \cup \{ Remainder \} \]
The complexity of checking whether \( Remainder \) is already a member of \( RemaindersList \) depends on whether the list is sorted. If it is kept sorted, the check takes \( O(\log n) \) with binary search; otherwise it is \( O(n) \). Whether \( RemaindersList \) stays sorted depends on the second line, i.e., on how the implementation adds \( Remainder \) to \( RemaindersList \). If we want to keep it sorted, we can perform an *insertion sort* step each time, at a worst-case cost of \( O(n) \) per step (we hope for much better in practice); with simple appending the cost is \( O(1) \).

**Algorithm 3.1** returns the number of digits of RelatedRepunit(n), using the long-division remainder loop described above.

```
RelatedRepunit(n:integer):integer
  Index = RelatedRepunitIndex(n)
  Return (10^Index - 1)/9
```

**Algorithm 3.2** This algorithm returns RelatedRepunit(n).

Since Theorem 3.3 bounds the number of distinct remainders, we can use a faster approach: keep a bit map of the remainders seen so far. Then each step costs \(O(1)\), for a total computational complexity of \(O(n)\). Reaching a time complexity of \(O(n)\) forces a space cost as well: we need an array of \(n\) bits to check whether a remainder has been seen before, so the space complexity of Algorithm 3.1 is \(O(n)\).

### 3.1.3 Repunit Method

Before introducing the Repunit Method we define the **admissible repunit**. For a composite number \(n = n_1 n_2 \ldots n_k\), an admissible repunit of \(n\) is a repunit \(R_i\) which is the related repunit of \(n_t\) for some \(1 \leq t \leq k\). Theorem 3.5 shows an important property of admissible repunits.

**Theorem 3.5** Suppose \(R_i\) is an admissible repunit of \(n\). Then \(GCD(R_i, n)\) is a nontrivial factor of \(n\).

**Proof.** Consider \(n = n_1 n_2 \ldots n_k\). By the definition of admissible repunits, there exists an index \(t\) for which \(n_t \mid R_i\). Hence \(GCD(R_i, n)\) is divisible by \(n_t\), and it divides \(n\); so it is a factor of \(n\) greater than 1.

Let us look back at the set introduced in Corollary 3.2. The Repuniset has several useful properties, which we categorize as follows:

- **The set contains all prime numbers within it as factors; there is no need to generate them first.** As we mentioned in Proposition 2.1, it would be better to have an *implicit list of primes* to improve the factorization algorithm. Here it is! Instead of creating a list of primes, which is limited and time-consuming to build, we have a set that covers all primes and needs no initialization or generation.

Algorithm 3.3 shows a method to factorize a number using the benefits of repunits; a Python sketch follows at the end of this section. In this algorithm we use the \(GCD\) instead of division. In fact the \(GCD\) is a great tool here: when \(GCD(n, R_i) = 1\) for some \(i\), we learn that \(n\) shares no factor with \(R_i\). In this way one \(GCD\) replaces a large number of divisions (for big numbers, of course). Algorithm 3.3 uses the concept of **admissible repunits** implicitly: it looks for the lowest admissible repunit of \(n\). But we can drop one limitation of this algorithm to gain more performance. Why look for the *lowest* admissible repunit?

**Corollary 3.6** One important feature of this algorithm is that we can start it from \(k\) instead of 2 without losing anything. This is one benefit of using the Repuniset. For example, we know that RSA key designers do not use small primes as divisors, so why should we start checking from 2? We can start checking from \(R_i\), where \(i\) is selected appropriately for the problem at hand.

**Algorithm 3.3** Factorizing numbers using the Repuniset as an implicit list of primes. The algorithm returns when the first admissible repunit of the number is found.
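Here is the promised sketch of Algorithm 3.3 in Python. Python's arbitrary-precision integers let us form \(R_i\) directly for moderate \(i\); Section 3.2 removes even that need. The scan bound, the `start` offset of Corollary 3.6, and the names are our illustrative choices.

```python
from math import gcd

def repunit_method(n: int, start: int = 2, limit: int = 10_000):
    """Algorithm 3.3: scan for the first admissible repunit of n.
    Returns (i, d) with d = gcd(n, R_i) > 1, or None if none is
    found up to R_limit."""
    r = (10 ** start - 1) // 9        # R_start
    for i in range(start, limit + 1):
        d = gcd(n, r)
        if d > 1:
            return i, d               # R_i is admissible; d divides n
        r = 10 * r + 1                # R_{i+1}
    return None

# 1517 = 37 * 41, with 37 | R_3 and 41 | R_5
print(repunit_method(1517))           # -> (3, 37)
```

By Theorem 3.5, the returned `d` is guaranteed to be a nontrivial factor of `n` whenever an admissible repunit is reached.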
Now we concentrate on a specific kind of number: the big numbers used in ciphering algorithms like RSA. There is an important fact about these numbers: they contain two big primes, for more security. The reason is that the security of the cipher depends on the difficulty of factorization, which is governed by the lowest prime used in the number. So cipher developers use big primes of nearly the same size. This leads us to expect the prime factors to be near
$$\sqrt[n]{\text{BigNumber}},$$
where $n$ is the number of primes from which the big number is constructed; in this example $n = 2$. We present Algorithm 3.4 based on this idea.

**Algorithm 3.4** This algorithm works better when the RSA assumptions about $n$ hold. In that case we can restrict the boundaries within which we look for an admissible repunit of $n$.

- **Storing the Repuniset consumes considerably little memory.** Suppose you want to store all prime factors of a repunit. There is no need to store the whole repunit number: to store $R_n$ you need only store its length $n$. For example, instead of storing

11111: 41, 271.
111111: 3, 7, 11, 13, 37.
1111111: 239, 4649.

we can use the abbreviated form:

5: 41, 271.
6: 3, 7, 11, 13, 37.
7: 239, 4649.

In fact, writing out the digits of $R_n$ takes $O(n)$ space, while the abbreviated form stores only the index $n$, in $O(\log n)$ space.

3.2 Special Algorithms for Repunits

We have seen how the abbreviated form maximizes memory performance. But it is useless in real calculations unless we have appropriate algorithms. Consider the GCD calculation presented in Algorithm 2.2: if one parameter is a big repunit in abbreviated form, we would have to construct it first and only then start dividing. That sounds really bad for big repunits. Here we present some useful algorithms for repunits that use the abbreviated form for calculations without the need to construct the numbers.

Remember Algorithm 2.2 for calculating the GCD; it is good and fast. But consider finding the GCD of $R_n$ and a number $m$. If we use Euclid's algorithm directly, we cannot use the abbreviated form of repunits. Observe that the only problem is the first division, i.e., finding the first remainder: after that, the algorithm proceeds in the usual manner. But for the first division we would need to calculate $R_n$ and use it to find the remainder, which can be impossible due to memory limitations. Algorithm 3.5 calculates this remainder using a divide-and-conquer method.

**Theorem 3.7** The complexity of Algorithm 3.5 is $O(\log n)$.

**Proof.** Let us compute $R_n$ modulo $m$, considering two cases. First assume $n = 2k$:
$$R_n = R_{n/2} \times 10^{n/2} + R_{n/2} = R_{n/2} \times \big(10^{n/2} + 1\big)$$
$$R_n \bmod m = \Big[ (R_{n/2} \bmod m) \times \big( (10^{n/2} \bmod m) + 1 \big) \Big] \bmod m$$
This is what happens in the first branch of the RepunitRemainder function. Now consider $n = 2k + 1$. In this case we reduce the problem to the former case in a recursive manner: $R_{n-1}$ satisfies the former case's condition, and
$$R_n = 10 \times R_{n-1} + 1$$
$$R_n \bmod m = \big( 10 \times (R_{n-1} \bmod m) + 1 \big) \bmod m$$
This is what happens in the latter branch of the RepunitRemainder function; the proof for ExpRemainder is analogous.

**Algorithm 3.5** A useful algorithm to perform the first step of Euclid's algorithm.

The complexity of this algorithm is $O(\log n)$ because it is a divide-and-conquer method: each step halves $n$ while computing $R_{n/2} \bmod m$ and $10^{n/2} \bmod m$ together in the same recursion.
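The two recurrences in the proof of Theorem 3.7 translate into a short Python routine. Here `pow(10, k, m)` is Python's built-in fast modular exponentiation (playing the role of ExpRemainder), and `functools.lru_cache` supplies the memoization suggested in Proposition 3.8 below. Names are ours; this is a sketch, not the paper's code.

```python
from functools import lru_cache
from math import gcd

@lru_cache(maxsize=None)
def repunit_mod(n: int, m: int) -> int:
    """R_n mod m without constructing R_n (Algorithm 3.5)."""
    if n == 1:
        return 1 % m
    if n % 2 == 0:                 # R_n = R_{n/2} * (10^{n/2} + 1)
        half = n // 2
        return (repunit_mod(half, m) * (pow(10, half, m) + 1)) % m
    return (10 * repunit_mod(n - 1, m) + 1) % m   # R_n = 10*R_{n-1} + 1

def repunit_gcd(n_big: int, i: int) -> int:
    """gcd(n_big, R_i) using only the abbreviated form of R_i,
    since gcd(a, b) = gcd(a, b mod a)."""
    return gcd(n_big, repunit_mod(i, n_big))

assert repunit_mod(6, 7) == 0      # 7 | R_6 = 111111
assert repunit_gcd(1517, 5) == 41  # 41 | R_5 = 11111
```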
**Proposition 3.8** We can apply memoization [4] to Algorithm 3.5 to obtain an algorithm based on dynamic programming, which can run faster in practice.

4 Conclusion

We close the paper by reviewing what has been achieved; a comparison of our work with others helps convey its progress. We end by suggesting some topics for future research.

4.1 Achievements

In this paper we discussed repunits and their important relation with prime numbers; this relationship is captured by Theorem 3.1. After introducing it, we defined a very useful set called the Repuniset. We established that this set contains all primes as factors, so we used it as an implicit prime list for factorizing numbers. We also noted two important properties of this set:

1. We can store it on a computer using its abbreviated form, which decreases the space complexity.
2. There is no need to initialize the set; it is already filled.

We then continued by introducing the related repunit and the admissible repunit. We found the notion of admissible repunit very capable in factorization algorithms: the Repunit Method uses admissible repunits to find factors of big numbers. Moreover, we presented an algorithm that factorizes numbers in such a way that each step (each GCD) throws out many possible prime factors at once. This gives us the freedom to start the algorithm with a big offset without losing anything. The Repunit Method does not work efficiently on small numbers; in fact, it was developed specifically for big numbers of the kind used as RSA public keys.

4.2 Future Topics of Research

To end our paper, we want to make it endless! Here are some topics which you may take up and work on after reading this paper. We will be happy to hear from you if you do so.

- Algorithms for factorizing repunits themselves.
- Implementation of the algorithm using powerful hardware and optimized code.
- Making good guesses about RelatedRepunit(n), and bounds for admissible repunits, can improve the performance of the factorization algorithm, as we did briefly in Algorithm 3.4.

References

[1] C. K. Caldwell and H. Dubner. Unique-period primes. *Journal of Recreational Mathematics*, 29(1):43–48, 1998.
[2] D. E. Knuth. *The Art of Computer Programming*, volume 2: *Seminumerical Algorithms*. Addison-Wesley, 3rd edition, 1997.
[3] A. Slinko. Absolute primes.
[4] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. *Introduction to Algorithms*. McGraw-Hill, 2nd edition, 2001.
[5] S. Yates. Periods of unique primes. *Mathematics Magazine*, 53(5):314, 1980.
[6] S. Yates. *Repunits and Repetends*. Star Publishing Co., Inc., Boynton Beach, Florida, 1982.
Photostability of electro-optic polymers possessing chromophores with efficient amino donors and cyano-containing acceptors

A. Galvan-Gonzalez and G. I. Stegeman
School of Optics and Center for Research and Education in Optics and Lasers, University of Central Florida, Orlando, Florida 32826

A. K-Y. Jen and X. Wu
Dept. of Materials Science & Engineering, University of Washington, Box 352120, Seattle, Washington 98195-2120

M. Canva
Laboratoire Charles Fabry de l’Institut d’Optique, Institut d’Optique Théorique et Appliquée, Centre National de la Recherche Scientifique, Unité Mixte de Recherche 8501, Université d’Orsay-Paris XI, 91403 Orsay Cedex, France

A. C. Kowalczyk, X. Q. Zhang, and H. S. Lackritz*
Gemfire Corporation, Palo Alto, California 94303

S. Marder, S. Thayumanavan, and G. Levina
Department of Chemistry, University of Arizona, Tucson, Arizona 85721

Journal of the Optical Society of America B 18(12), 1846–1853 (2001). Received January 9, 2001; revised manuscript received May 23, 2001.

The photostability of various electro-optic active guest–host polymers, doped with chromophores that possess very efficient cyano-containing acceptors and dialkylamino- or diarylamino-benzenes, and also their extended thiophene analogs as bridging structures, has been investigated over a broad wavelength range in the near infrared and the visible. A variation of over 2 orders of magnitude was found in the probability that an absorbed photon will lead to a photodegraded chromophore. The most photostable chromophore contained a tricyanovinyl acceptor and a diarylaminobenzene bridge unit.

OCIS codes: 160.4330, 160.2100, 190.4400, 260.5130.

1. INTRODUCTION

Polymers have been shown to have a promising future in photonics.\textsuperscript{1–5} For example, in electro-optics applications modulation with $>100$ GHz bandwidth has been demonstrated, and modulation with half-wave voltages approaching 1 V has been reported.\textsuperscript{1,5} The origin of the large, fast nonlinearities used is the tailored and designed chromophores that are dispersed and oriented in a polymer host. Usually these chromophores are chemically bonded to the polymer backbone for enhanced concentration and orientational stability.\textsuperscript{6,7} The chromophore structures consist of an electron-donor and an electron-acceptor group at opposite ends of the molecule, separated by an electron-transporting bridge structure that facilitates electron delocalization. This structure typically results in the formation of a strong charge-transfer state characterized by a large permanent molecular dipole moment, a large transition moment for excitation by incident light to the first excited state, and a large first hyperpolarizability. Usually, the stronger the charge-transfer state, the larger the shift of the charge-transfer peak toward the infrared, and usually the larger the second-order nonlinearity one can expect after efficiently orienting the chromophores. 
The design of such molecules to optimize various important parameters has been an important task for the last decade.\textsuperscript{6,7} For poled polymers the initial emphasis has been on large dipole moments and first-order hyperpolarizability, because both were needed for making media with large macroscopic second-order nonlinearities.\textsuperscript{6,7} Other important criteria have been the stability of the molecular alignment against temperature, chemical stability, etc.\textsuperscript{8–10} This has led to the development of state-of-the-art polymers with both large nonlinearities and impressive thermal and chemical stability. Some of the most promising polymers developed to date contain cyano groups and aniline bridge structures.\textsuperscript{11–13} Recently, yet another important parameter has been added to the requirement list, namely photochemical stability.\textsuperscript{14–17} The absorption of photons by molecules under illumination for long periods of time leads to changes of their chemical structure that cause them to lose their nonlinearity. This effect is referred to as photodegradation.

| Symbol | Chromophore | $\lambda_{\text{max}}$ (nm) | $B$ (543 nm, 2.28 eV) | $B$ (633 nm, 1.96 eV) |
|--------|-------------|-----------------|-----------------|-----------------|
| 1 ◊ | ![Chromophore](image1.png) | 575 | $3 \times 10^6$ | $8 \times 10^6$ |
| 2 ∇ | ![Chromophore](image2.png) | 595 | $5 \times 10^6$ | $8 \times 10^6$ |
| 3 Δ | ![Chromophore](image3.png) | 600 | $1 \times 10^6$ | $2 \times 10^6$ |
| 4 × | ![Chromophore](image4.png) | 520 | $1 \times 10^7$ | $1 \times 10^8$ |
| 5 ○ | ![Chromophore](image5.png) | 525 | $2 \times 10^7$ | $1 \times 10^8$ |
| 6 □ | ![Chromophore](image6.png) | 635 | $4 \times 10^6$ | $5 \times 10^6$ |
| 7 + | ![Chromophore](image7.png) | 680 | $5 \times 10^6$ | $7 \times 10^6$ |
| 8 ● | ![Chromophore](image8.png) | 510 | $6 \times 10^6$ | $2 \times 10^6$ |

*Table continues*

Table 1. (Continued)

| Symbol\(^a\) | Chromophore | \(\lambda_{\text{max}}\) (nm) | \(B\) (543 nm, 2.28 eV) | \(B\) (633 nm, 1.96 eV) |
|--------------|-------------|-----------------|-----------------|-----------------|
| 9 ▲ | ![Chromophore](image) | 505 | \(6 \times 10^8\) | \(1 \times 10^8\) |

\(^a\)Symbols and numbers used in figures.

The photodegradation of various chromophore families has been reported recently, including stilbenes and azobenzenes.\(^{14–24}\) For example, it has been shown that stilbene-based chromophores degrade very quickly because of the attack by oxygen on the central stilbene bridge carbon bond in the presence of light absorption in the main absorption band.\(^{19,21,22}\) Azobenzenes are more stable, especially when antioxidant groups are incorporated into the chromophore structure.\(^{20,22–24}\) The systematic wavelength dependence of photodegradation has been identified, and the range of stability for a large spectrum of bridge structures and donor and acceptor groups has been probed. In this paper a hitherto unexplored class of chromophores based on dialkylaminobenzenes or diarylaminobenzenes, and also their extended thiophene analogs as bridging structures, is examined for photostability. 
Because of their key role in producing strong charge-transfer states, the current studies are focused on electron acceptors based on cyano-containing groups, such as the dicyanovinyl, tricyanovinyl, and tetracyanobutadienyl groups.\(^{11–13}\) In fact, some of the compounds studied here have been determined to have not only very large nonlinearities but also high thermal stability. For nine such chromophores the figure of merit (FOM) for photostability was measured as a function of wavelength in the near infrared, up to 1.3 \(\mu\)m.

2. EXPERIMENTAL DETAILS

The synthesis and characterization of the chromophores investigated here have been discussed in the literature.\(^{11–13}\) The nine chromophores investigated are listed in Table 1 along with the location of their absorption maxima. (The individual absorption spectra are shown in Figs. 2, 4, and 6.) All spectra were recorded with a two-beam apparatus, using an identical substrate and an undoped polymer cover-layer film of similar thickness as a reference. These chromophores were incorporated as guest molecules in a poly(methyl methacrylate) polymer host matrix with a typical 5% weight loading. The polymers were dissolved in cyclopentanone and then spin coated onto the substrate (fused silica) with a standard photoresist spinner. The films were a few micrometers thick. Note that, to provide a uniform and reproducible illumination on the polymer film, the opposite side of each substrate was coated with a thin aluminum film into which round holes 50–250 \(\mu\)m in diameter were fabricated photolithographically, prior to the polymer spin-coating procedure. (Illumination was then always performed onto the aluminum side: through the hole, the substrate, and then the doped polymer.)

The experiments are based on the following model for the photodegradation process.\(^{25}\) Assuming a single dominant excited charge-transfer state with a well-defined absorption spectrum, the absorption of a photon raises the molecule from the ground state to this excited state. Normally the deexcitation returns the molecule to its ground state. However, a small fraction of the excited molecules undergoes a geometrical or chemical change and returns to a different ground state, one in which the electro-optical activity is greatly reduced or even zero. As a result the concentration of the electro-optic active molecules is decreased, and the magnitude of the corresponding charge-transfer absorption line is also decreased. By measuring the absorption in the tail of this spectral line, where it is assumed that the photoproduct absorption may be neglected, the rate of loss of the electro-optic active species can be measured.
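Under this single-state model the surviving chromophore concentration decays to first order as \(N(t) = N_0 \exp(-\sigma n t / B)\), so the initial slope of a log-transmission trace yields \(B/\sigma\) directly. The following Python sketch illustrates the extraction on synthetic data; all parameter values are illustrative assumptions, not measurements from this work.

```python
import numpy as np

# Illustrative parameters (assumptions, not data from the paper)
sigma = 3e-17      # molecular absorption cross-section, cm^2
B = 1e6            # absorbed photons per degradation event
flux = 1e18        # photon flux n, photons / (cm^2 s)

tau = B / (sigma * flux)                   # effective lifetime, s
t = np.linspace(0.0, 3.0 * tau, 200)
survival = np.exp(-sigma * flux * t / B)   # N(t) / N_0

# The initial slope of ln(survival) equals -n / (B/sigma),
# so B/sigma is recovered from the first few points of the trace.
slope = np.polyfit(t[:20], np.log(survival[:20]), 1)[0]
print(f"tau = {tau:.3g} s, recovered B/sigma = {-flux/slope:.3g} cm^-2")
```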
Whether a single excited state dominates the process, and whether there is a single photodegradation channel, i.e., a single (dominant) final state, can be tested by monitoring the absorption spectrum as a function of illumination time. Previous studies on stilbenes and azobenzenes have shown multiple contributing excited states and multiple decay channels.\(^{19–21}\) However, in the present collection of chromophores, there are several in which the charge-transfer absorption band is well separated spectrally from other absorption features (which are usually located in the blue to near-UV region). One example is the chromophore 3, \(N,N\)-di(4-butylamino/phenyl-thiophene-5[1,3]1,4-bis-dicyano-3-phenyl]butyldiene] in Table 1. Its absorption spectrum (Fig. 1) was measured at different time intervals while the film was illuminated with red light at \(\lambda = 633\) nm in air. Note that initially all of the spectra pass through essentially a single point, called an isosbestic point. This usually implies that there is a transfer of oscillator strength from a single line in the visible to one or more absorption lines deep in the UV. Since no additional peaks are observed at early times, the photodegradation products are likely to be short molecules whose excitation spectrum lies below 400 nm. This implies that only one charge-transfer state participates in the degradation process, similar to the case found previously for azobenzenes under a nitrogen atmosphere.\(^{20,23}\) This is contrary to the previous cases in which no isosbestic point was observed and there were multiple decay pathways with different decay times. One should then conclude that in the present case there is only one pathway for significant degradation. Therefore, the photodegradation of chromophores with charge-transfer bands well separated from the UV region, like many of those studied here, could involve only a single excited state in the early stages. However, a new spectral peak does begin to grow at $\sim 475$ nm that eventually shifts the isosbestic point to shorter wavelengths for times in excess of 90 min. This 475-nm spectral line takes only a small fraction of the initial oscillator strength associated with the charge-transfer state.

The wavelength-dependent photostability measurements are relatively straightforward and have been described in detail elsewhere.\textsuperscript{20,23} Radiation of a specific wavelength, ranging from 450 to 1064 nm (and in some cases to 1320 nm), irradiates a well-defined region of a thin film of the polymer–chromophore sample. (We used a number of different lasers for that task, including He–Ne at 633 and 544 nm, pulsed Nd:YAG at 1060 and 1320 nm, Ti–sapphire in the 750–900-nm range, and Ar and He–Cd for the shorter wavelengths from 530 down to 450 nm. Data at each wavelength were taken in the linear regime, in conditions where the observed changes were linear with respect to the photon flux used.) The change in the transmission in the tail of the dominant charge-transfer band was measured as a function of time. The initial slope of the film transmission versus time (i.e., versus the integrated photon flux) directly gives the photostability FOM $B/\sigma$. Here $B$ is the number of absorption events needed, on average, to photodegrade a single chromophore molecule, and $\sigma$ is the molecular absorptivity at the wavelength $\lambda$ of the incident radiation. The effective lifetime of the chromophores is then $\tau = B/(\sigma n)$, where $n$ is the photon flux.

3. WAVELENGTH DEPENDENCE OF THE FIGURE OF MERIT

We choose to separate the chromophores that we have studied into three groups. Chromophores 1–3 each have a tetracyanobutadienyl group in the acceptor and a benzenethiophene bridge between the donor and acceptor. Chromophores 4–7 each have a tricyanovinyl group as the acceptor and a benzene as part of the bridging structure. Finally, the two miscellaneous chromophores 8 and 9 both have a dicyanovinyl group at the acceptor end and either a polyene or a fused polyene as the bridge. In each case the order was chosen for increasing wavelength of the charge-transfer state, which is usually an indicator of increasing electro-optic activity.
A. Chromophores with the Tetracyanobutadienyl Acceptor Group

For the three chromophores containing the tetracyanobutadienyl group at the acceptor end, the linear absorption spectra, shown in Fig. 2, are similar: they all peak between 570 and 600 nm. The spectrum of compound 1, with a diphenylamino group as the donor, is marginally broader than those of 2 and 3, and the absorption tail at long wavelengths is also more pronounced for 1.

The plots of the measured FOM $B/\sigma$ are shown in Fig. 3. The variation with wavelength is typical of that found previously for other electro-optic chromophores.\textsuperscript{20–24} The FOM rises from its minimum value at the peak of the absorption toward both the infrared and UV regions of the spectrum. It has been shown for azobenzenes and stilbenes that this increase in $B/\sigma$ toward the near IR is due to the variation in $\sigma$ with wavelength and that $B$ is, in comparison, approximately constant.\textsuperscript{22,23} This leads to the minimum in $B/\sigma$ at the absorption peak and the dramatic increase with increasing wavelength. At wavelengths in the vicinity of 1300 nm this trend is no longer observed, and $B/\sigma$ levels off and in some cases has been found to decrease relative to its 1064-nm value.\textsuperscript{22,23} This behavior, which has been attributed to the generation of singlet oxygen, was also found in this study.

The value of the molecular absorptivity was estimated from the measured absorption spectra and from the concentration of guest chromophores. The values of $B$ deduced from $B/\sigma$ and $\sigma$ are listed in Table 1 at two neighboring wavelengths: 544 nm (2.28 eV) and 633 nm (1.96 eV). The estimated uncertainty is a factor of two. The compound with diethylamino as the donor group consistently exhibits the highest FOM. Note, however, from Table 1 that chromophore 2 has the same value of $B$ as 1, despite having a smaller $B/\sigma$: this is a consequence of the smaller absorptivity per molecule of compound 2, as indicated by the relative absorption spectra in Fig. 2. This reinforces the conclusion that both $B$ and $\sigma$ are important for long lifetimes against photodegradation. The third compound, with $N,N$-dibutylphenylamino donor groups, is the least photostable. This result was unexpected and shows the complexity of the photostability problem. The instability may be due to the benzyl hydrogen atoms on the donor group, which are sensitive to illumination and tend to generate reactive H radicals that degrade the nonlinear optical chromophores.

It is interesting to compare the values of $B$ obtained at 544 and 633 nm. Despite the factor-of-two uncertainty in this parameter, there is a consistent trend that the value of $B$ closer to the UV (544 nm) is always smaller than that at 633 nm, which indicates that there may be additional contributions from degradation processes at shorter wavelengths (higher photon energies). Note that for compound 3, for which the absorption spectrum was measured as a function of illumination time, the two values of $B$ obtained at 544 and 633 nm agree to within the experimental uncertainty, in contrast with the values obtained for 1 and 2.

**B. Chromophores with the Tricyanovinyl Acceptor Group**

The chromophores containing the tricyanovinyl group are separated into two groups, the first containing just an aminobenzene bridge (4 and 5), and the others with a longer benzenethiophene stilbene bridge (6 and 7). 
The $N,N$-diphenylaminobenzene and $N,N$-dibutylaminobenzene 4-tricyanovinyl compounds 4 and 5 are among the most stable studied to date. (The only exception found previously was the azobenzene Disperse Red 1 with a methacrylate group grafted on, which acts as an antioxidant.\textsuperscript{24}) Both compounds have almost identical absorption spectra (Fig. 4) and a similar variation in the FOM versus wavelength (Fig. 5). The long-wavelength value of $B$ is more than an order of magnitude larger than for any of the other chromophores reported here. Note that the value of $B$ in both cases increased dramatically from 544 to 633 nm, which indicates that multiple excited states, presumably located at higher energies, are involved in the process. The large increase in $B$ at 633 nm implies that the strong charge-transfer excited states at 520 nm are the more stable.\textsuperscript{19,21} This, coupled with the narrow spectral width of the absorption spectra (Fig. 4), is a good prognosis for excellent photostability in the communications bands, if no additional effects occur at still longer wavelengths.

The rest of the tricyanovinyl group consists of compounds 6 and 7. They have more extensive conjugated electron pathways between the donor and acceptor groups, which move the absorption peaks toward the IR relative to compounds 4 and 5; their absorption peaks are at 620 and 680 nm, respectively (see Fig. 4). As shown in Fig. 5, their FOM are among the smallest discussed here. Note that both bridge structures, i.e., for 6 and 7, contain a stilbene-like carbon double bond, which was found in the classical stilbene chromophore DANS to be the primary cause of the lack of photostability in the presence of oxygen.\textsuperscript{19,21,22} In fact the $B$ values are similar for 6 and 7, and comparable to the values observed in other stilbene-based chromophores.

C. Chromophores with Dicyanovinyl Acceptor Groups

For the last two cases, 8 and 9, the electron acceptor consists of a dicyanovinyl group. However, that is the only thing these two chromophores have in common. Nevertheless, although their bridge structures and their donors are quite different, many of their optical properties are similar, due to similar donor–acceptor and effective conjugation lengths. For example, their absorption spectra are quite similar (Fig. 6). This is also the case for their photodegradation FOM (Fig. 7) and their value of $B$. This is perhaps surprising because compound 9 has multiple carbon double bonds along its backbone, whereas 8 has a much more rigid structure, and 9 would have been expected to be much less photostable.

4. DISCUSSION

It has recently been shown that $\sigma$ in the long-wavelength tail of the absorption line for many chromophores varies as $\exp[-(E_{\text{phot}} - hc/\lambda_{\text{max}})/E_0]$, where $\lambda_{\text{max}}$ is the wavelength at which the absorption due to the charge-transfer state peaks.\textsuperscript{22,23,26} Here $E_0$ is a constant that varies with both the host polymer and the chromophore; the smaller $E_0$, the spectrally narrower the tail of the absorption spectrum. That is, $E_0$ is a broadening term that governs how far out into the IR the problem of degradation reaches; it typically reflects the degree of inhomogeneous broadening of the system. Therefore, the FOM has the form
$$\frac{B}{\sigma} = \frac{B}{\sigma_0} \exp\!\big[(E_{\text{phot}} - hc/\lambda_{\text{max}})/E_0\big] = D_0 \exp(E_{\text{phot}}/E_0).$$
Here $D_0$ is the lifetime of the chromophore per unit of photon flux, extrapolated to zero photon energy. It is believed to depend primarily on the details of the molecular structure, in contrast to $E_0$, which is related to the inhomogeneous broadening of the absorption line and which depends on the interaction of the chromophore with the polymer host. Under these assumptions the parameters $D_0$ and $E_0$ should be independent of the wavelength, and it is convenient to characterize the wavelength dependence of $B/\sigma$ in the near IR by these two parameters. The variation of $B/\sigma$ for a representative sampling (4, 6, and 9) of the guest–host polymers studied here in their long-wavelength tail is shown in Fig. 8. Clearly this equation is well satisfied in these cases, and in fact it is a useful approximation in all of the cases studied. The results are summarized in a plot of $D_0$ versus $E_0$ in Fig. 9. As was noted before for azobenzenes, there is a clustering of $E_0$ around 0.1 eV ($E_0 \approx 0.1 \pm 0.01$ eV), which implies that the inhomogeneous broadening is comparable in all of these polymers in poly(methyl methacrylate).

5. SUMMARY

The photostability of these chromophores in a poly(methyl methacrylate) polymer matrix was investigated in the visible and near-IR regions of the spectrum. The focus here was on compounds that contained benzene bridging structures and cyano-containing electron-acceptor groups. The general wavelength dependence of the FOM $B/\sigma$ established previously for other electro-optic active polymers was also observed here: namely, $B/\sigma$ is a minimum at the peak of the charge-transfer absorption band and then increases exponentially with decreasing photon energy in the near IR. As observed previously in other polymers, at 1320 nm this trend no longer holds, and $B/\sigma$ is reduced, probably because of the generation of singlet oxygen. Clearly data at longer wavelengths, in the main telecommunication optical band around 1550 nm, are needed, but the need for longer laser illumination times has prohibited experimental access to such data until now. The most photostable polymers are characterized by benzene bridges and tricyanovinyl electron-acceptor groups. For these cases, only about ten out of a billion photoexcitations of the charge-transfer state lead to degradation of the electro-optic active chromophore. More complex bridges between the electron-donor and -acceptor groups in general resulted in reduced photostability. The number of cyano moieties making up the electron acceptor did not appear to affect the photostability significantly.

ACKNOWLEDGMENTS

This research was supported at the Center for Research and Education in Optics and Lasers by a Grant Opportunities for Academic Liaison with Industry program of the National Science Foundation, by the Ballistic Missile Defense Organization at Gemfire, and by the U.S. Air Force Office of Scientific Research (F49620-97-1-0240) at the University of Washington. The Center for Research and Education in Optics and Lasers and the Institut d'Optique Théorique et Appliquée also acknowledge bilateral French/U.S. Centre National de la Recherche Scientifique/National Science Foundation collaboration and support.

*Present address, Aclara BioSciences, Inc., 1288 Pear Avenue, Mountain View, Calif. 94043.

REFERENCES

1. R. A. Hill, S. Dreher, A. Knoesen, and D. R. Yankelevich, “Reversible optical storage utilizing pulsed, photoinduced, electric-field-assisted reorientation of azobenzenes,” Appl. Phys. Lett. 66, 2156–2158 (1995).
2. D. Chen, H. R. Fetterman, A. Chen, W. H. Steier, L. R. Dalton, W. Wang, and Y. 
Shi, “Demonstration of 110 GHz electro-optic polymer modulators,” Appl. Phys. Lett. 70, 3335–3337 (1997).
3. For example, A. Grunnet-Jepsen, C. L. Thompson, R. J. Twieg, and W. E. Moerner, “High performance photorefractive polymer with improved stability,” Appl. Phys. Lett. 70, 1515–1517 (1997).
4. For example, G. Gu, D. Z. Garbuzov, P. E. Burrows, S. Venkatesh, S. R. Forrest, and M. E. Thompson, “High-external-quantum-efficiency organic light-emitting devices,” Opt. Lett. 22, 396–399 (1997).
5. Y. Shi, C. Zhang, H. Zhang, J. Bechtel, L. Dalton, B. Robinson, and W. Steier, “Low (sub 1 volt) halfwave voltage polymeric electro-optic modulator achieved by controlling chromophore shape,” Science 288, 119–122 (2000).
6. I. Ledoux, J. Zyss, E. Barni, C. Barolo, N. Diulgheroff, P. Quagliotto, and G. Viscardi, “Properties of novel azodyes containing powerful acceptor groups and thiophene moiety,” Synth. Met. 115, 213–217 (2000).
7. A. K-Y. Jen, Y. Liu, L. Zheng, S. Liu, K. J. Drost, Y. Zhang, and L. Dalton, “Synthesis and characterization of highly efficient, chemically and thermally stable chromophores with chromone-containing electron acceptors for NLO applications,” Adv. Mater. 11, 452–455 (1999).
8. For example, D. H. Choi, J. H. Park, N. Kim, and S.-D. Lee, “Improved temporal stability of the second-order nonlinear optical effect in a sol-gel matrix bearing an active chromophore,” Chem. Mater. 10, 705–709 (1998).
9. M. Stahelin, C. A. Walsh, D. M. Burland, R. D. Miller, R. J. Twieg, and W. Volksen, “Orientational decay in poled second-order nonlinear optical guest-host polymers: temperature dependence and effects of poling geometry,” J. Appl. Phys. 73, 8471–8479 (1993).
10. For example, R. J. Twieg, D. M. Burland, J. L. Hedrick, V. Y. Lee, R. D. Miller, C. R. Moylan, W. Volksen, and C. A. Walsh, “Progress on nonlinear optical chromophores and polymers with useful nonlinearity and thermal stability,” Mater. Res. Soc. Symp. Proc. 328, 421–431 (1994).
11. Y. M. Cai and A. K-Y. Jen, “Thermally stable poled polyquinoline thin film with very large electro-optic response,” Appl. Phys. Lett. 67, 299–301 (1995).
12. H. Ma, J. Y. Wu, P. Herguth, B. Q. Chen, and A. K.-Y. Jen, “A novel class of high-performance perfluorocyclobutane-containing polymers for second-order nonlinear optics,” Chem. Mater. 12, 1187–1189 (2000).
13. X. M. Wu, J. Y. Wu, Y. Q. Liu, and A. K-Y. Jen, “Facile approach to nonlinear optical side-chain aromatic polyimides with large second-order nonlinearity and thermal stability,” J. Am. Chem. Soc. 121, 472–473 (1999).
14. M. A. Mortazavi, H. N. Yoon, and C. C. Teng, “Optical power handling properties of polymeric nonlinear optical waveguides,” J. Appl. Phys. 74, 4871–4873 (1993).
15. M. Mortazavi, K. Song, H. Yoon, and McCulloh, “Optical power handling of nonlinear polymers,” Polym. Prepr. 35, 198–199 (1994).
16. R. A. Norwood, D. R. Holcomb, and F. F. So, “Polymers for nonlinear optics: absorption, two photon absorption,” Nonlinear Opt. 6, 193–204 (1993).
17. M. Cha, W. E. Torruellas, G. I. Stegeman, W. H. G. Horsthuis, G. R. Mohlmann, and J. Meth, “Two photon absorption of DANS (Di-alkyl-amino-nitro-stilbene) side chain polymer,” Appl. Phys. Lett. 65, 2648–2650 (1994).
18. Ph. Pretre, E. Sidick, A. Knoesen, D. J. Dyer, and R. J. Twieg, “Optical dispersion properties of tricyanovinylaniline polymer films for ultrashort optical pulse diagnostics,” ACS Symp. Ser. 695, 328–341 (1996).
19. Q. Zhang, M. Canva, and G. 
Stegeman, "Wavelength dependence of 4-dimethylamino-4′-nitrostilbene polymer thin film photodegradation," Appl. Phys. Lett. 73, 912–914 (1998).
20. A. Galvan-Gonzalez, M. Canva, G. Stegeman, R. Twieg, T. Kowalczyk, and H. Lackritz, "Effect of temperature and atmospheric environment on the photodegradation of some Disperse Red 1-type polymers," Opt. Lett. 24, 1741–1743 (1999).
21. A. Galvan-Gonzalez, M. Canva, and G. Stegeman, "Local and external factors affecting the photodegradation of 4-N,N-dimethylamino-4′-nitrostilbene polymer films," Appl. Phys. Lett. 75, 3306–3308 (1999).
22. A. Galvan-Gonzalez, M. Canva, G. Stegeman, R. Twieg, K. Chan, T. Kowalczyk, X. Zhang, H. Lackritz, S. Marder, and S. Thayumanavan, "Systematic behavior of electro-optic chromophore photostability," Opt. Lett. 25, 332–334 (2000).
23. A. Galvan-Gonzalez, M. Canva, G. I. Stegeman, L. Sukhomlinova, R. J. Twieg, K.-P. Chan, T. C. Kowalczyk, and H. S. Lackritz, "Photodegradation of azobenzene nonlinear optical chromophores: the influence of structure and environment," J. Opt. Soc. Am. B 17, 1992–2000 (2000).
24. A. Galvan-Gonzalez, K. D. Belfield, G. I. Stegeman, M. Canva, K.-P. Chan, K. Park, L. Sukhomlinova, and R. J. Twieg, "Photostability enhancement of an azobenzene photonic polymer," Appl. Phys. Lett. 77, 2083–2085 (2000).
25. A. Dubois, M. Canva, A. Brun, F. Chaput, and J.-P. Boilot, "Photostability of dye molecules trapped in solid matrices," Appl. Opt. 35, 3193 (1996).
26. A. C. Le Duff, V. Ricci, T. Pliska, M. Canva, G. Stegeman, K. Chan, and R. Twieg, "Importance of chromophore environment on the near-infrared absorption of polymeric waveguides," Appl. Opt. 39, 947–953 (2000).
The crystal structure of eakerite, a calcium–tin silicate

Anthony A. Kossiakoff\(^1\)
Department of Chemistry, California Institute of Technology, Pasadena, California 91109

and Peter B. Leavens
Department of Geology, University of Delaware, Newark, Delaware 19711

Abstract

Eakerite, $\text{Ca}_2\text{SnAl}_2\text{Si}_6\text{O}_{18}(\text{OH})_2 \cdot 2\text{H}_2\text{O}$, contains crankshaft-like chains, similar to those in feldspars, of composition $\text{AlSi}_3\text{O}_9(\text{OH})$, which are cross-linked to form a kinked sheet. Al is ordered, and the OH is bonded to it. Ca and Sn ions lie in sheets between the kinked aluminosilicate networks. The Ca ions are coordinated by 4 O, 2 OH, and 2 H$_2$O in a square antiprism; these antiprisms are edge-linked into chains which run across the aluminosilicate chains and which are cross-linked by Sn octahedra. The OH and H$_2$O are bonded to Ca and, by hydrogen bonds, to other O; this strong bonding prevented their being distinguished by thermogravimetric analysis.

Introduction

Eakerite (Leavens et al., 1970) is a rare tin silicate found in hydrothermal fissures in spodumene-bearing pegmatite at Kings Mountain, North Carolina. The formula was given as $\text{Ca}_2\text{Al}_2\text{SnSi}_4\text{O}_{18}(\text{OH})_n$ on the basis of wet-chemical analysis of an 11 mg sample and of thermogravimetric analysis, which showed that all water is tightly bound. The structural analysis described in this paper shows that the chemical analysis is correct but that the formula should be written $\text{Ca}_2\text{SnAl}_2\text{Si}_6\text{O}_{18}(\text{OH})_2 \cdot 2\text{H}_2\text{O}$.

Experimental

Eakerite is monoclinic and crystallizes in space group $P2_1/a$; the cell parameters are $a = 15.829(7)$ Å, $b = 7.721(3)$ Å, $c = 7.438(3)$ Å, and $\beta = 101.34(3)^\circ$; the cell volume is 891.35 Å$^3$. The calculated density is 2.65 with $Z = 2$ formula units per unit cell (Leavens et al., 1970). Three-dimensional intensity data were collected using a four-circle Picker card-automated diffractometer with filtered (0.002" Zr foil) MoK$\alpha$ radiation. The crystal was ground to a 0.25 mm diameter sphere and mounted with the $b^*$ axis along the rotation axis of the instrument. The linear absorption coefficient ($\mu$, MoK$\alpha$) is 15.5 cm$^{-1}$, giving a transmission factor of 0.69 for a 0.25 mm sphere. The moving-crystal, moving-counter measurement technique ($\theta - 2\theta$ coupling) was used. Integrated intensities were measured over a scan range taken 0.9° on both sides of the $K\alpha_1$–$K\alpha_2$ splitting, at a rate of 2°/min. Individual background intensities were determined by 30-second stationary background counts taken on both sides of the peak. Three standard reflections were measured every 60 reflections to monitor crystal alignment and instrument stability. In all, 1791 independent reflections were measured, of which 1687 were considered statistically observable using the criterion $F_o \geq 3\sigma(F)$; $\sigma(F)$ was calculated from counting statistics and an instrumental instability constant of 2 percent. The raw intensity data of each reflection were corrected for background, Lorentz, and polarization effects. Absorption corrections were not necessary, owing to the spherical shape of the crystal and its low linear absorption coefficient. Corrections for the effects of secondary extinction and anomalous dispersion were calculated to be small and were ignored.
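As a consistency check on the cell data above, the monoclinic cell volume follows from V = a·b·c·sin β. The short sketch below (illustrative only, not part of the original paper) reproduces the quoted volume of 891.35 Å³ with a cell edge a = 15.829 Å, which is why that value of a is used in the Experimental section:

```python
import math

# Monoclinic cell volume: V = a * b * c * sin(beta).
a, b, c = 15.829, 7.721, 7.438      # cell edges in Angstroms
beta = math.radians(101.34)         # monoclinic angle

V = a * b * c * math.sin(beta)
print(f"V = {V:.2f} A^3")           # -> V = 891.29 A^3 (quoted: 891.35)
```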
Solution of the structure

The crystal structure of eakerite was determined using heavy-atom and vector superposition techniques. The two Sn atoms in the unit cell are necessarily located in special positions at centers of symmetry, chosen as 0, 0, 0 and \( \frac{1}{2}, \frac{1}{2}, 0 \). The vector peaks between Sn and the other atoms in the structure would, therefore, appear in two sets: (1) the actual locations where the atoms would be found in the real cell, and (2) a set displaced by \( \frac{1}{2}, \frac{1}{2}, 0 \). A three-dimensional sharpened Patterson map, restricting \( \sin\theta/\lambda \) to 0.35, was calculated (all crystallographic programs used are from the XRAY '70 computing package of James Stewart). The largest Patterson peaks were situated along the y axis on sections having 1/6 unit-cell separations. A survey of possible Sn–X (Ca, Si, Al) vectors was made, and the vector coordinates were used to calculate the Patterson positions of the \( X-X' \) vectors and their transformations. Unfortunately, the only set of prominent vectors which this procedure could clearly identify were the Ca–Ca vectors. Other vectors due to \( X-X' \) [Si–Si (Al, O)] interactions could not be identified unambiguously.

The major problem faced in the early stages of refinement was that using phasing information from the Sn atoms located in the centered special position introduced a false mirror plane, perpendicular to the y axis through the origin, in all Fourier maps. The Ca atom was placed in the phasing model at the coordinate positions calculated from the Patterson map. In theory, the addition to the phasing model of atoms located in general positions, and of consistent orientation, will break the image ambiguity and lead to the correct structure. The phasing contribution of the Ca, however, was not large enough to discriminate clearly between images. Three of the four possible (Si, Al) atoms were chosen from consistent Patterson and Fourier information and were added to the phasing model. Even at this stage, the image problem was not completely resolved, which led to doubt that a consistent set of atom positions had been chosen. Further refinement of this particular model using the heavy-atom approach was therefore halted. The position of the Sn atom at the origin and its exaggerated influence on the initial phasing models made the iterative process of improving the phases by stepwise addition of correct atoms to the structure less powerful than is normally observed in heavy-atom problems of this type. On the other hand, this specific symmetry of the Sn atom makes the structure a prime candidate for solution by vector superposition. The vector superposition was accomplished by overlaying two identical three-dimensional Patterson maps translated \( \left( \frac{1}{2}, \frac{1}{2}, 0 \right) \) from each other. Vector peak overlaps clearly identified the positions of all atoms in the structure, with the exception of two oxygens. The positions of all the atoms which had previously been placed in the phasing model were also shown to be correct. A cycle of least-squares refinement was run varying positional parameters, keeping temperature factors constant, giving a residual of \( R = 0.32 \). A difference map clearly showed the two remaining oxygen positions. With all the non-hydrogen atoms located, the \( R \) factor was still 0.30.
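The residuals R and wR quoted during the refinement follow the conventional crystallographic definitions; the sketch below (with made-up placeholder reflections, not the paper's data) shows how they are evaluated under the 1/σ(F)² weighting scheme mentioned in the next paragraph:

```python
import numpy as np

# Conventional residuals:
#   R  = sum(||Fo| - |Fc||) / sum(|Fo|)
#   wR = sqrt( sum(w (|Fo| - |Fc|)^2) / sum(w Fo^2) ),  with w = 1/sigma(F)^2
def residuals(Fo, Fc, sigma):
    Fo, Fc, sigma = map(np.asarray, (Fo, Fc, sigma))
    w = 1.0 / sigma**2
    R = np.sum(np.abs(np.abs(Fo) - np.abs(Fc))) / np.sum(np.abs(Fo))
    wR = np.sqrt(np.sum(w * (np.abs(Fo) - np.abs(Fc))**2) / np.sum(w * Fo**2))
    return R, wR

# Placeholder values, for illustration only:
print(residuals(Fo=[100.0, 52.0, 31.0], Fc=[96.0, 55.0, 30.0], sigma=[2.0, 1.5, 1.2]))
```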
Two cycles of least-squares refinement varying the positional parameters and the isotropic temperature factors of all the atoms led to a precipitous drop in \( R \) to 0.061 and \( wR = 0.072 \). These cycles were run using the weighting scheme \( 1/\sigma(F)^2 \), omitting reflections for which both \( F_o < 3\sigma(F) \) and \( F_c < F_o \). A cycle of least squares was then run, changing from isotropic to anisotropic temperature factors, resulting in a fit with residuals \( R = 0.044 \) and \( wR = 0.059 \). This refinement showed that the atoms have very little anisotropic motion, as is to be expected for this type of compound.

It was originally thought that there might be Si–Al disorder. Such disorder would be accompanied by average Si–O, Al–O bond distances around the disordered sites. These were not found. Al is tetrahedrally coordinated to four oxygens (O3, O5, O7, O9), with a mean Al–O bond distance of 1.75 Å. All the Si, on the other hand, have Si–O bond lengths of 1.60–1.64 Å, as expected.

A difference map was made, and the three hydrogen atom locations were found. One of the hydrogens is bonded to oxygen O9, forming a hydroxyl, while the other two are bonded to O8, giving a water molecule. This configuration is consistent with charge considerations, since oxygen atoms bonded to silicon are not expected to have attached hydrogen atoms, and oxygens 8 and 9 are the only two oxygens in the structure which could meet this criterion. Theoretical hydrogen positions were calculated from the bonding geometry and placed 1.0 Å from the oxygens O8 and O9. These positions were extremely close to those observed in the difference map. A final cycle of refinement, varying positional and anisotropic temperature factor parameters but holding the hydrogen parameters invariant, gave residuals of \( R = 0.042 \) and \( wR = 0.055 \). A difference map showed no unidentified peaks of over one electron per Å\(^3\). Positional and anisotropic thermal parameters of the atoms, with their standard deviations, are given in Table 1, and the observed \((F_o)\) and calculated \((F_c)\) structure factors are given in Table 2.\(^2\)

**Description of the structure**

The eakerite structure is illustrated as a stereopair in Figure 1; Figures 2 and 3 show the structure projected perpendicular to the \(b\) and to the \(c\) axis. Bond lengths and angles are given in Tables 3 and 4. Eakerite is composed of irregular, kinked sheets of composition \(\text{AlSi}_3\text{O}_9(\text{OH})\) which are parallel to the \(a\)–\(b\) plane. The sheets are bonded together by interlayer Sn in 6-fold coordination and Ca in 8-fold coordination. There are four \(\text{H}_2\text{O}\) molecules per unit cell, each bonded to 2 Ca. The correct structural formula is \(\text{Ca}_2\text{SnAl}_2\text{Si}_6\text{O}_{18}(\text{OH})_2 \cdot 2\text{H}_2\text{O}\), with \(Z = 2\). The Al is fully ordered and is bonded to three O and one OH. Hydroxyl ions are rarely bonded into the tetrahedral network in silicates. Besides eakerite, julgoldite and other members of the pumpellyite group are examples of such bonding (Allmann and Donnay, 1973). In all of these, OH is bonded to Al rather than Si, and Al is in an ordered position in the network.

**The aluminosilicate sheet**

It is convenient to think of the aluminosilicate sheet as composed of crankshaft-like chains roughly parallel to \(a\).
These chains, which can be seen particularly well in Figure 2, are very much like those in the feldspars, for example sanidine (Taylor, 1933; Deer et al., 1963). As in the feldspars, the chains in eakerite are cross-linked to form a series of roughly square rings. Here the resemblance ends. In the feldspars, pairs of chains form a continuous, discrete, kinked band complexly bonded to four other bands around it. In eakerite each chain is bonded to the chains on either side, with four successive bonds to alternate sides. This alternate linking results in a pattern, between any two adjacent chains, of three four-membered tetrahedral rings alternating with a ring of twelve tetrahedra. In eakerite each chain has an eight-tetrahedron repeat. The chains zig-zag back and forth, with segments of four tetrahedra alternately parallel to [110] and [1̄10] (Fig. 3). The Al tetrahedra are at the ends of these segments. Each segment of four tetrahedra is about 8.5 Å long, about the same length as the corresponding unit in the feldspars, but because of the zig-zag, the two-segment, eight-tetrahedron repeat distance (and \(a\) axis) is only 15.83 Å.

**The cation sheet**

The Sn and Ca atoms lie in sheets almost exactly in the (001) plane, between the aluminosilicate sheets. In Figure 3 they seem to be in large holes formed by the 12-membered rings, but the kinking of the sheets makes these holes less coherent than they appear in that projection. Each of the cations does lie between two chains and is bonded to oxygens in those two chains only (and to H$_2$O molecules in the case of Ca). Sn is bonded to six unlinked oxygens in a nearly regular octahedron. Ca is in an irregular square antiprism composed of two OH (O9), two H$_2$O (O8), two unlinked O (O4, O6), and two O linking Al and Si (O3, O5). The Ca–O bond lengths vary by about 0.2 Å, and on average the bond lengths to OH and H$_2$O are slightly longer than those to the oxygens. Figure 4 shows the polyhedral Ca–Sn sheet of eakerite; it is composed of chains of edge-sharing Ca antiprisms parallel to $b$, cross-linked by the Sn octahedra. The atoms comprising the shared edges of the antiprisms are OH or H$_2$O. The sheet contains large holes, surrounded by 6 Ca polyhedra and 2 Sn polyhedra; the axes of these holes are parallel to [210] and [2̄10]. The chains of Ca polyhedra are almost identical to those in herderite, CaBePO$_4$(OH,F) (Lager and Gibbs, 1974, Fig. 1b). In herderite the chains also are parallel to $b$, and the $b$ dimension of the two minerals is similar: eakerite 7.72 Å, herderite 7.66 Å.

\(^2\) For a copy of the structure factor data, Table 2, order Document AM-76-024 from the Business Office, Mineralogical Society of America, 1909 K Street, N.W., Washington, D.C. 20006. Please remit $1.00 in advance for the microfiche.

Fig. 2. The structure of eakerite, viewed down $b$. Numbers give the atomic coordinates, in percent, along $b$. Bonds within the tetrahedral network are indicated by solid lines; other bonds, including hydrogen bonds, by dotted lines. Hydrogen bonds to hydroxyl are not included because of drafting difficulties. Broken lines indicate bonds between atoms in adjacent unit cells.

Fig. 3. The structure of eakerite, viewed down $c$. Numbers give atomic coordinates, in percent, along $c$. Bonds within the tetrahedral network are indicated by solid lines; other bonds, including hydrogen bonds, by dotted lines.

In herderite the chains are alternately cross-linked to each other
to form a sheet; this sheet can be produced by removing the Sn octahedra from the eakerite sheet and linking the Ca polyhedra directly to each other. Because of the intervening Sn octahedra, the holes in the eakerite sheet are larger than those in the herderite sheet. Both the folding of the tetrahedral sheet in eakerite and the open linking of the polyhedral sheet can be thought of as consequences of the high charge of the Sn ion.

**Hydrogen bonds**

Both the H$_2$O (O8) and the OH (O9) are clearly oversaturated. The method of Donnay and Allman (1970) gives H$_2$O (O8) an excess of 0.46 charge and OH (O9) an excess of 0.21 charge (Table 5). This oversaturation indicates that hydrogen bonds are present. The distance between H$_2$O (O8) and O7, which links Si and Al, is 2.79 Å, and that between H$_2$O (O8) and an oxygen linking two Si is 2.76 Å (Figs. 2, 3); both distances are much shorter than the normal minimum O–O distance in inorganic structures of 3.3 Å, and in the typical range for hydrogen bonds. Lippincott and Schroeder (1955) calculated the fractional bond valence of asymmetric, linear hydrogen bonds as a function of the separation of the oxygen ions (cited in Donnay and Allman, 1970). The assumption that the bond is linear pro-

**Table 3. Bond lengths of eakerite**

| Atom | Distance (Å) | Atom | Distance (Å) |
|------|--------------|------|--------------|
| Sn–O(1) | 2.014(4) | Al–O(7) | 1.723(4) |
| Sn–O(4) | 2.023(4) | Al–O(9) | 1.747(5) |
| Sn–O(6) | 2.061(4) | Si(1)–O(1) | 1.599(4) |
| Ca–O(3) | 2.421(5) | Si(1)–O(2) | 1.631(4) |
| Ca–O(4) | 2.412(4) | Si(1)–O(5) | 1.614(5) |
| Ca–O(5) | 2.511(4) | Si(1)–O(10) | 1.636(4) |
| Ca–O(6) | 2.401(4) | Si(2)–O(3) | 1.620(5) |
| Ca–O(8) | 2.623(4) | Si(2)–O(4) | 1.605(4) |
| Ca–O(9) | 2.458(4) | Si(2)–O(7) | 1.603(4) |
| Ca–O(8)' | 2.502(4) | Si(2)–O(11) | 1.634(6) |
| Ca–O(9)' | 2.622(4) | Si(3)–O(2) | 1.634(6) |
| Al–O(3) | 1.755(4) | Si(3)–O(6) | 1.606(4) |
| Al–O(5) | 1.746(4) | Si(3)–O(10) | 1.610(5) |
| | | Si(3)–O(11) | 1.627(5) |

The standard deviation in the least significant figure(s) is given in parentheses.

Fig. 4. The polyhedral Ca–Sn sheet in eakerite. Spots indicate Sn at unit cell corners.

Table 4.
Bond angles of eakerite

| Atoms | Angle | Atoms | Angle |
|----------------|---------|----------------|---------|
| O(1)–Sn–O(4) | 89.3(1) | O(5)–Al–O(9) | 99.4(3) |
| O(1)–Sn–O(6) | 89.7(2) | O(5)–Al–O(7) | 114.4(2)|
| O(4)–Sn–O(6) | 84.4(2) | O(5)–Al–O(3) | 179.9(2)|
| O(3)–Ca–O(1) | 108.7(2)| O(9)–Al–O(7) | 117.0(2)|
| O(3)–Ca–O(6) | 105.3(1)| O(9)–Al–O(3) | 101.4(3)|
| O(3)–Ca–O(3) | 73.4(2) | O(7)–Al–O(3) | 113.1(3)|
| O(3)–Ca–O(4) | 136.2(2)| O(10)–Si(1)–O(2) | 107.3(3)|
| O(3)–Ca–O(3)' | 81.7(2) | O(10)–Si(1)–O(5) | 108.1(3)|
| O(3)–Ca–O(3)" | 64.9(1) | O(10)–Si(1)–O(1) | 110.7(3)|
| O(3)–Ca–O(3)"' | 146.8(2)| O(2)–Si(1)–O(1) | 105.6(2)|
| O(3)–Ca–O(6) | 85.2(3) | O(3)–Si(1)–O(1) | 115.7(2)|
| O(3)–Ca–O(3) | 78.8(2) | O(3)–Si(2)–O(4) | 110.8(2)|
| O(3)–Ca–O(9) | 64.9(1) | O(3)–Si(2)–O(7) | 108.1(3)|
| O(3)–Ca–O(4) | 145.1(1)| O(3)–Si(2)–O(1) | 110.7(3)|
| O(3)–Ca–O(3)' | 116.3(1)| O(4)–Si(2)–O(1) | 105.6(3)|
| O(3)–Ca–O(3)" | 146.8(2)| O(4)–Si(2)–O(7) | 109.9(3)|
| O(3)–Ca–O(3)"' | 78.8(2) | O(4)–Si(2)–O(1) | 110.7(2)|
| O(3)–Ca–O(9) | 64.9(1) | O(7)–Si(2)–O(1) | 110.8(2)|
| O(3)–Ca–O(4) | 145.1(1)| O(3)–Si(2)–O(3) | 108.1(3)|
| O(3)–Ca–O(3)' | 116.3(1)| O(4)–Si(2)–O(3) | 110.7(3)|
| O(3)–Ca–O(3)" | 146.8(2)| O(4)–Si(2)–O(7) | 109.9(3)|
| O(3)–Ca–O(3)"' | 78.8(2) | O(7)–Si(2)–O(3) | 110.7(2)|
| O(3)–Ca–O(9) | 64.9(1) | O(6)–Si(3)–O(10) | 109.9(2)|
| O(3)–Ca–O(4) | 141.4(3)| O(6)–Si(3)–O(2) | 109.9(2)|
| O(3)–Ca–O(3)' | 148.3(2)| O(6)–Si(3)–O(11) | 112.2(3)|
| O(3)–Ca–O(3)" | 125.1(7)| O(10)–Si(3)–O(2) | 108.2(3)|
| O(3)–Ca–O(3)"' | 121.4(1)| O(10)–Si(3)–O(11) | 109.4(2)|
| O(9)–Ca–O(4) | 140.6(1)| O(2)–Si(3)–O(11) | 107.9(2)|
| O(9)–Ca–O(3)' | 77.8(2) | O(6)–Si(3)–Si(2) | 91.8(2) |
| O(9)–Ca–O(3)" | 72.4(1) | O(2)–Si(3)–Si(2) | 104.5(2)|
| O(9)–Ca–O(3)"' | 140.6(1)| O(6)–Si(3)–Si(2) | 91.8(2) |

Likewise, OH (O9) is 3.02 Å from O1, which is bonded to Si and Sn and is markedly undersaturated (Table 5). However, the large O9–O1 separation permits a transfer of only 0.096 charge, leaving OH (O9) oversaturated by 0.118 charge and O1 undersaturated by 0.180 charge. The average compensated valence on the oxygen ions in the eakerite structure, as calculated by the method of Donnay and Allman (1970), is 2.001, in good agreement with the required 2, suggesting that the local residual imbalances on O1, O8 (H$_2$O), and O9 (OH) are real. The strong bonding of the water molecules, both by bonds to Ca and by hydrogen bonds to other oxygens, explains why water is held to such high temperatures when eakerite is heated (Leavens et al., 1970) and why the water and hydroxyl were not distinguished on the thermogravimetric curve of eakerite.

Table 5. Valence bond strengths of the oxygen atoms of eakerite

| | Sn | Ca | Ca' | Al | Si$_1$ | Si$_2$ | Si$_3$ | Σ V$_i$ |
|---|-----|-----|-----|-----|------|------|------|-------|
| 1 | .69 | | | | 1.04 | | | 1.73 |
| 2 | | | | | .97 | | .97 | 1.94 |
| 3 | | .28 | | .73 | | 1.00 | | 2.01 |
| 4 | .68 | .28 | | | | 1.03 | | 1.99 |
| 5 | | .24 | | .75 | 1.00 | | | 1.99 |
| 6 | .64 | .28 | | | | | 1.02 | 1.94 |
| 7 | | | | .78 | | 1.03 | | 1.81 |
| 8 (H$_2$O) | | .21 | .25 | | | | | .46 |
| 9 (OH) | | .21 | .26 | .74 | | | | 1.21 |
| 10 | | | | | .97 | | 1.01 | 1.98 |
| 11 | | | | | | .98 | .97 | 1.95 |

The standard deviation in the least significant figure(s) appears in parentheses.
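The column sums of Table 5 can be spot-checked directly from the individual cation contributions; the sketch below (values transcribed from Table 5; the cell assignments follow the connectivity of Table 3) verifies a few of the oxygens, including the residuals on O1, O8, and O9 discussed above:

```python
# Bond-strength sums for selected oxygens of Table 5.
contributions = {
    "O1": {"Sn": 0.69, "Si1": 1.04},                  # -> 1.73, undersaturated
    "O3": {"Ca": 0.28, "Al": 0.73, "Si2": 1.00},      # -> 2.01
    "O7": {"Al": 0.78, "Si2": 1.03},                  # -> 1.81
    "O8": {"Ca": 0.21, "Ca'": 0.25},                  # H2O -> 0.46 excess
    "O9": {"Ca": 0.21, "Ca'": 0.26, "Al": 0.74},      # OH  -> 1.21
}
for atom, bonds in contributions.items():
    print(f"{atom}: sum V_i = {sum(bonds.values()):.2f}")
```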
Leavens et al. (1970) noted that the relative abundance of external forms on crystals of eakerite does not conform to the law of Bravais as extended by Donnay and Harker (1937), since the forms {210} and {410} were more common and prominent than expected, and the forms {100} and {110} less so. Although the structure of eakerite contains two silicate chains per cell in the b axis direction, it has two such doubled elements in the a axis direction: the 4-tetrahedron sub-repeat in the silicate chains, and the two chains of Ca polyhedra per cell in the polyhedral sheets. These two features, along with the large holes oriented along [210] in the polyhedral sheets, seem adequate to account for the morphological anomalies of eakerite crystals, which require a pseudo-halving of the a axis.

Acknowledgments

We would like to thank Robert H. Wood and Daniel Appleman for their encouragement and advice.

References

Allman, Rudolf and Gabrielle Donnay (1973) The crystal structure of julgoldite. Mineral. Mag. 39, 271–281.

Deer, W. A., R. A. Howie and J. Zussman (1963) Rock-Forming Minerals. Vol. 4, Framework Silicates. Longman, London. 435 p.

Donnay, Gabrielle and Rudolf Allman (1970) How to recognize O²⁻, OH⁻, and H₂O in crystal structures determined by X-rays. Am. Mineral. 55, 1003–1015.

Donnay, J. D. H. and David Harker (1937) A new law of crystal morphology extending the Law of Bravais. Am. Mineral. 22, 446–467.

Lager, George A. and G. V. Gibbs (1974) A refinement of the crystal structure of herderite. Am. Mineral. 59, 919–925.

Leavens, Peter B., John S. White, Jr. and Max H. Hey (1970) Eakerite, a new tin silicate. Mineral. Rec. 1, 92–96.

Lippincott, E. R. and R. Schroeder (1955) One-dimensional model of the hydrogen bond. J. Chem. Phys. 23, 1099–1106.

Manuscript received, October 6, 1974; accepted for publication, April 30, 1976.
Influence of AHP Methodology and Human Behaviour on e-Scouting Process

Lucio Compagno, Diego D'Urso, Antonio Latora, Natalia Trapani

To cite this version:
Lucio Compagno, Diego D'Urso, Antonio Latora, Natalia Trapani. Influence of AHP Methodology and Human Behaviour on e-Scouting Process. Jan Frick; Bjørge Timenes Laugen. International Conference on Advances in Production Management Systems (APMS), Sep 2011, Stavanger, Norway. Springer, IFIP Advances in Information and Communication Technology, AICT-384, pp. 514–525, 2012, Advances in Production Management Systems. Value Networks: Innovation, Technologies, and Management. <10.1007/978-3-642-33980-6_56>. <hal-01524190>

Influence of AHP Methodology and Human Behaviour on e-Scouting Process

Compagno Lucio, D'Urso Diego, Latora Antonio G., Trapani Natalia
Department of Industrial Engineering, University of Catania
email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org

Abstract. The e-scouting process, i.e. the search for and selection of products whose characteristics are known from catalogues, is often inefficient and ineffective: the overload of information available on the Web and the human limitations in processing it are the main causes. Experiments simulating the e-scouting process for a leverage item [1], performed by a set of student buyers, were carried out, and results on the effectiveness and efficiency of e-scouting strategies and methods were collected and analysed. Referring to the strategic evaluation of the e-scouting process, the results show that a Decision Support System (DSS) based on the Analytic Hierarchy Process (AHP) methodology [2] supports the buyer, the Human Decision Maker (HDM), in interpreting coherently the strategic guidelines previously set by the high-level management. Regarding the method evaluation of the e-scouting process, it was observed that if the quantitative product features are known and limited to a range of variation, the Human Decision Maker's evaluation substantially coincides with that of a Virtual Decision Maker (VDM) based on the Analytic Hierarchy Process. On the contrary, the difference between HDM and VDM is considerable when the quantitative product characteristics are unknown or unbounded. The work carried out has shown that a DSS based on AHP is always useful to improve the efficiency and effectiveness of e-scouting strategies; however, the efficiency and effectiveness of the e-scouting method can be improved by an AHP-based DSS only if the human evaluation of product features is bounded.

Keywords: AHP; e-procurement; supplier selection; managerial human behaviour.

Purpose of the paper

The contribution of I&CT has enabled the establishment of a global market characterized by:
1. a proliferation of products, services and suppliers;
2. competition among products, services and suppliers;
3. a large amount of information shared about products, services and suppliers.

In this scenario, the e-scouting process to search for and select products whose characteristics are known in catalogues presents both strengths and weaknesses, owing to the great amount of information available on the Web and the human limitations in processing it. Kraljic, who transformed primary purchasing into supply management, suggested strategies related to the supply market complexity and the purchasing importance.
A direct consequence of the Kraljic matrix is the attention paid to products with high market complexity and high importance of supply, at the expense of other products, consumer or business, that are easy to find on the market and/or not important in terms of value. In buying decisions on non-strategic items, personal issues can play a role in a B2C environment, while in B2B the buyer is expected to make a rational decision, obtained by optimizing a multi-objective function that summarizes all the features listed in the web catalogues. However, encoding and optimizing a multi-objective function with qualitative and quantitative variables is hard for a buyer (Human Decision Maker), and personal aspects can intrude even in B2B, so the procurement process can be inefficient and ineffective. The present study aims to obtain assessments of efficiency and effectiveness for e-scouting, and more particularly:
1. a strategic evaluation of the e-scouting process using an AHP-based Decision Support System;
2. a method evaluation of the e-scouting process comparing the evaluations of a Human Decision Maker and of an AHP-based Virtual Decision Maker.

**Methodology**

The e-scouting process for a product or service requires the analysis of an information content determined by the product or service features; in Table 1 we propose a breakdown of product features by type and mode of perception.

**Table 1.** Classification of the nature of information.

| Mode of Perception (MOP) | Type: Quantitative | Type: Qualitative |
|--------------------------|--------------------|-------------------|
| Objective | Measurement | - |
| Subjective | - | Judgment |

In order to evaluate how human behaviour can influence the e-scouting process, a campaign of experiments was designed: the results of the search and selection process for a given item, performed by 51 buyers (Human Decision Makers), were compared with those obtained by a Virtual Decision Maker. The selection of a car, in order to renew the fleet of a car rental company, is the object of the e-scouting process; the item was identified taking into account the following aspects:
1. it is a finished good with a catalogue of known features;
2. it is designed for mass consumption and has features understood in a universal sense;
3. it has significant quantitative and qualitative features;
4. it belongs to the B2B supply scenario, so the contribution of the qualitative features is limited and does not exceed that of the quantitative ones.

The criteria and sub-criteria to be taken into account to solve the e-scouting problem can be set according to the main features of the desired item (Table 2); we assume that the strategic guidelines were previously set by the high-level management. We assume also that the preliminary high-level analysis performs the problem recognition, defines the minimum requirements and finds the product specification [3], [4], [5]. The importance assigned by a buyer (decision maker) to each criterion and sub-criterion can be defined as the e-scouting strategy.
**Table 2.** Criteria, sub-criteria, types, mode of perception, measurement and judgment

| Criteria | Sub-Criteria | Type | MOP | Measurement | Judgment |
|------------|--------------|------------|---------|------------------------------------|----------|
| Safety | Strength | quantitative | objective | EU NCAP code | - |
| | Accessibility | quantitative | objective | Number of doors | - |
| Environment| Air pollution | quantitative | objective | Gas specific emission [g CO2/km] | - |
| Economy | Overall cost | quantitative | objective | Specific cost [€/km] | - |
| Performance| Dynamic | quantitative | objective | Acceleration 0 to 100 km/h [s] | - |
| | Utility | quantitative | objective | Luggage capacity [l] | - |
| Aesthetic | Design | qualitative | subjective | - | Semantic scale |
| | Image | qualitative | subjective | - | Semantic scale |

**Table 3.** Qualitative assessment of criteria

| Criteria | Sub-criteria | Weights |
|------------|--------------|------------------|
| Safety | Strength, Accessibility | Absolutely important |
| Environment| Air pollution | Very important |
| Economy | Overall cost | Important |
| Performance| Dynamic, Utility | Almost important |
| Aesthetic | Design, Image | Less important |

Design of experiments

Referring to the strategic evaluation of the e-scouting process, a Decision Support System based on the Analytic Hierarchy Process methodology was created to support the buyer (Human Decision Maker) in interpreting coherently the strategic guidelines previously set by the high-level management (Table 3). Regarding the method evaluation of the e-scouting process, the Human Decision Maker's evaluation was compared with that of a Virtual Decision Maker based on the Analytic Hierarchy Process. Two types of experiments were designed and administered after the buyers had attended a short course (2 hours) introducing the basic concepts of AHP:

- **E1.** The objective of the e-scouting process was submitted to the evaluations of a first group of 26 buyers; the strategy that leads the e-scouting process, in terms of the criteria's weights, was set out in a qualitative manner (Table 3); the limits on the variation of the product features were defined in a qualitative manner, even though they could be measured quantitatively (Table 4); the alternatives that could be assessed belong to the whole set on the market (about 6,000 models, counting the variants of each model); each buyer was provided with an Excel® spreadsheet application that contains an empty schema for the pairwise comparisons among the focused criteria and sub-criteria (Table 6) and a routine that computes the general product ranking once the buyer, surfing the web, has defined and collected the specifications of all the alternatives, using only a semantic scale whose intensity belongs to [1, 2..9].
**Table 4.** Qualitative assessment of product features limits

| Product features | Assessment phrase for limit |
|------------------|----------------------------|
| Dimensions | "...compact" |
| Environment | "low emission" |
| Economy | "low consumption" |
| Performance | "enjoyable to drive" & "comfortable for people and things" |

- **E2.** The same objective of the e-scouting process was submitted to the evaluations of a second group of 25 buyers; the criteria's weights were set out in a qualitative manner, still according to Table 3; the limits on the variation of the product features were defined in a strictly quantitative manner wherever they could be measured (Table 5); each buyer was provided with an Excel® spreadsheet application that contains an empty schema for the pairwise comparisons among the focused criteria (Table 6); during the experiment, on the basis of the information acquired via the web, the buyer could give an opinion on each sub-criterion of choice (safety, performance, economy of operation, aesthetic perception) by using the semantic scale, whose intensity belongs to [1, 2..9], supplied with the software application; thus the assessment of a product is based on comparison with an indefinite and unbounded number of alternatives; compared with experiment E1, the buyers now know the sub-criteria and the bounds of their variation more precisely.

**Table 5.** Quantitative assessment of product features limits

| Product features | Limit |
|-----------------------------------|------------------------------|
| Length | ≤ 400 cm |
| Acceleration 0 to 100 km/h | ∈ [8..11] s |
| Number of doors | 5 |
| Number of seats | 5 |
| Luggage capacity | ∈ [200..400] dm³ |
| Average annual mileage¹ | 20,000 km |

**Table 6.** Pairwise criteria comparisons

| Criteria | Aesthetics | Performance | Economy | Environment | Safety | Weights |
|------------|------------|-------------|---------|-------------|--------|---------|
| Aesthetics | 1 | 1/3 | 1/5 | 1/7 | 1/9 | 0.033 |
| Performance| 3 | 1 | 1/3 | 1/5 | 1/7 | 0.063 |
| Economy | 5 | 3 | 1 | 1/3 | 1/5 | 0.129 |
| Environment| 7 | 5 | 3 | 1 | 1/3 | 0.262 |
| Safety | 9 | 7 | 5 | 3 | 1 | 0.513 |

**Table 7.** Quantitative car features, ranges of variation and semantic attribute

| Semantic attribute | Strength: EU NCAP [-] | Accessibility: number of doors [-] | Dynamic: acceleration 0–100 km/h [s] | Utility: luggage capacity [dm³] | Air pollution: specific emission [g CO2/km] | Overall cost: specific cost [€/km] | Design [-] | Image [-] |
|---|---|---|---|---|---|---|---|---|
| HH | 5 | 5 | <8 | >900 | <90 | <1 | 9 | 9 |
| HM | 4.5 | | [8, 8.5] | [900, 800] | [90, 100] | [1.1, 1.2] | 8 | 8 |
| HB | 4 | | [8.5, 9] | [800, 700] | [100, 110] | [1.2, 1.4] | 7 | 7 |
| MH | 3.5 | | [9, 9.5] | [700, 600] | [110, 120] | [1.4, 1.6] | 6 | 6 |
| MM | 3 | | [9.5, 10] | [600, 500] | [120, 130] | [1.6, 1.8] | 5 | 5 |
| MB | 2.5 | | [10, 10.5] | [500, 400] | [130, 140] | [1.8, 2.0] | 4 | 4 |
| BH | 2 | | [10.5, 11] | [400, 300] | [140, 150] | [2.0, 2.2] | 3 | 3 |
| BM | 1.5 | | [11, 11.5] | [300, 200] | [150, 160] | [2.2, 2.4] | 2 | 2 |
| BB | 1 | 3 | >11.5 | <200 | >160 | >2.4 | 1 | 1 |

¹ An equation enables the buyer to evaluate the overall annual cost, which takes into account: the property tax, which depends on car power; the price of fuel for 20,000 km per year; the environmental emission class; the price of the car; and the interest rate.

The buyers used for the experiments are students of the Master Degree in Engineering Management. To motivate the student-buyers, it was declared that a premium in terms of didactic credits would be assigned to the solution that best interprets the strategic vision, obtained in the shortest time [6], [7]. During each experiment, the research environment was represented by the web database and the search engine provided by an important Italian car magazine; the database contains all the features of all the products that the market offers; the search engine was used during all the experiments with the same capabilities: it allows buyers to query the database using the defined product features as a reading key. Each buyer was given a maximum time of one hour to perform the task.

**The virtual decision maker.** The virtual decision maker is based on the AHP methodology; it translates in a holistic manner the strategy that inspires the selection criteria and the weights of the criteria. The virtual model of the e-scouting process includes the following basic steps: (a) modelling of the strategy that inspires the supply by means of a semantic scale: definition of the criteria and their weights, and of the sub-criteria and their weights, according to the AHP equation $A \cdot W = \lambda_{\text{max}} \cdot W$, where $A$ is the pairwise criteria (or sub-criteria) comparison matrix, $W$ is the normalized eigenvector of the matrix $A$ containing the local criteria (or sub-criteria) weights, and $\lambda_{\text{max}}$ is the maximum eigenvalue of the matrix $A$; (b) recording of the conditions in which the product will work (in some cases this allows the calculation of a variable that depends on some product features and can be used during the selection process, such as the annual overall cost); (c) registration of the supply product feature limits; (d) e-scouting for alternatives and collection of their information content; (e) recording of the value of each quantitative feature; the qualitative features were evaluated with equal weight in the pairwise comparisons; (f) definition of the direct feature comparisons for each chosen alternative under each sub-criterion; (g) calculation of the general ranking according to $R_i = \sum_j I_{ij} W_j$, where $R_i$ is the general rating of the i-th alternative, $I_{ij}$ is the intensity value chosen under the j-th sub-criterion, and $W_j$ is the weight of that sub-criterion [8].

**Findings**

Figure 1 shows the criteria's average weights derived from the interpretation of the supply strategy of Table 3 by the virtual decision maker (VDM) and by the human decision makers (HDM). As regards tests E1 and E2, the alternative chosen by each buyer was submitted to the AHP-based Virtual Decision Maker; the pairwise comparisons among the different alternatives, along each defined sub-criterion, were performed by using the direct ratios of the quantitative measures derived from the technical information resident on the web; in this way it was possible to create, for each k-scenario played by the k-th buyer, a ranking $R_{ik}$ with $i \in [1..n_k]$ and $k \in [1..N]$ (where $n_k$ is the number of alternatives chosen by each buyer and $N$ is the number of buyers). Finally, it was possible to verify the quality of the collected alternatives and whether each buyer was able to choose the best alternative among the ones she/he evaluated.
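As an illustration of step (a), the weights listed in Table 6 can be recovered from the pairwise comparison matrix by computing its principal eigenvector, e.g. by power iteration. The sketch below is illustrative only (it is not the software used in the experiments) and also evaluates Saaty's consistency ratio:

```python
import numpy as np

# Pairwise comparison matrix of Table 6 (rows/cols: Aesthetics,
# Performance, Economy, Environment, Safety).
A = np.array([
    [1,   1/3, 1/5, 1/7, 1/9],
    [3,   1,   1/3, 1/5, 1/7],
    [5,   3,   1,   1/3, 1/5],
    [7,   5,   3,   1,   1/3],
    [9,   7,   5,   3,   1  ],
])

w = np.ones(5) / 5
for _ in range(100):          # power iteration: w converges to the
    w = A @ w                 # principal eigenvector of A
    w /= w.sum()              # normalized so the weights sum to 1

lam_max = (A @ w / w).mean()  # estimate of the maximum eigenvalue
CI = (lam_max - 5) / (5 - 1)  # consistency index for n = 5
CR = CI / 1.12                # random index RI = 1.12 for n = 5

print(np.round(w, 3))         # close to Table 6: 0.033 0.063 0.129 0.262 0.513
print(f"lambda_max = {lam_max:.2f}, CR = {CR:.2f}")   # CR < 0.10: acceptable
```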
In order to assess the efficiency and effectiveness of the buyers' e-scouting process, the following performance indicators were defined (a minimal numerical sketch of these indicators is given after the Discussion below):

- **Absolute Frequency of Matching:**
\[ AFM = \frac{N^*}{N} \]
where $N^*$ is the number of buyers who chose, among the scouted alternatives, the same alternative as the VDM, and $N$ is the overall number of buyers;

- **Weighted Frequency of Matching:**
\[ WFM = \frac{\sum_{k=1}^{N} n_k M_k}{n_t}, \qquad n_t = \sum_{k=1}^{N} n_k, \qquad M_k = \begin{cases} 1 & \text{if the HDM choice matches the VDM choice} \\ 0 & \text{otherwise} \end{cases} \]

- **Efficiency**, as the average number of scouted alternatives per buyer:
\[ E = \frac{n_t}{N} \]

Figures 2 and 3 show the efficiency and effectiveness of each buyer in choosing the best alternative of each k-scenario.

**Fig. 2.** Absolute Frequency of Matching (AFM), Weighted Frequency of Matching (WFM), Efficiency (E) (Test E1).

**Fig. 3.** Absolute Frequency of Matching (AFM), Weighted Frequency of Matching (WFM), Efficiency (E) (Test E2).

Figure 4 shows the comparison between the rating of the i-th alternative, $R_i$ (see above), as evaluated by the AHP-based VDM, and the judgments declared by the human buyers by means of the semantic scale of Tables 6–7; this behaviour shows how the evaluation of the collected alternatives can be distorted both by human perception and by the limited human capacity to perform consistent pairwise comparisons among alternatives under each sub-criterion.

**Fig. 4.** The general ranking of all alternatives, $R_i$, performed by HDM and by VDM (test E1).

**Fig. 5.** The general ranking of all alternatives, $R_i$, performed by HDM and by VDM (test E2).

Figure 6 shows the rating achieved by each buyer according to the test that was played; Table 8 summarizes some statistics of the performed tests.

**Fig. 6.** Performance of buyers along the tests E1 and E2.

**Discussion**

The results of tests E1 and E2 provide the opportunity for the following considerations:

1. Test E1: the number of buyers is N = 26 and the overall number of scouted alternatives is $n_t = 184$; when the product features are declared qualitatively, the efficiency, or average number of scouted alternatives per buyer, is E = 7.08 alternatives/buyer; the absolute frequency of matching is AFM = 19.23% and the weighted frequency of matching is WFM = 13.59%;
2. Test E2: the number of buyers is N = 25 and the overall number of scouted alternatives is $n_t = 238$; when the product features are declared quantitatively, the efficiency is E = 9.52 alternatives/buyer; the absolute frequency of matching is AFM = 56.00% and the weighted frequency of matching is WFM = 62.61%;
3. when the product features are expressed quantitatively by a measure (Test E2), the effectiveness of the e-scouting process grows ($\Delta WFM = +361\%$, $\Delta AFM = +191\%$); the average number of analysed alternatives per buyer is also significantly higher (+34%) than the one obtained when the product features and sub-criteria are expressed only qualitatively (test E1);
4. an unbounded search process generally leads to alternatives with a worse rating (see Figure 6);
5. buyers follow the assigned strategic mission in a consistent way in both experiments (see Figure 2);
6. generally, each buyer expresses a personal aesthetic judgment even when the aesthetic master criterion is weighted as negligible.
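The indicator definitions above translate directly into code; the following minimal sketch uses made-up buyer data (not the data of tests E1/E2) purely to illustrate how AFM, WFM and E are computed:

```python
# M_k = 1 if buyer k chose the same alternative as the VDM; n_k = number
# of alternatives scouted by buyer k. Values below are hypothetical.
M = [1, 0, 1, 0, 0]
n = [6, 9, 7, 12, 8]

N   = len(M)                 # number of buyers
n_t = sum(n)                 # overall number of scouted alternatives

AFM = sum(M) / N
WFM = sum(nk * mk for nk, mk in zip(n, M)) / n_t
E   = n_t / N                # average scouted alternatives per buyer

print(f"AFM = {AFM:.2%}, WFM = {WFM:.2%}, E = {E:.2f} alternatives/buyer")
```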
In the context of purchasing a consumer product with many features that must be evaluated (sub-criteria), it can be concluded that the human contribution, even when assisted by decision support tools based on the AHP methodology, does not produce significant benefits, because of the difficulty of evaluating a product while taking into account all the analysed features and because of the limited number of alternatives that are taken into account. In this context, the growth of the number of alternatives to be evaluated increases at the same time the difficulty of the analysis and the potential effectiveness of the choice; it can therefore be concluded that for this class of supplies the contribution of electronic tools and of the semantic web is absolutely remarkable. Further experiments are being carried out in order to verify the methodological contribution of AHP to the evolution of the product selection process; in particular, further experiments have to be designed in order to repeat the supply experience involving skilled buyers. The aim is to determine the contribution of learning by doing in this area. The research path can also be extended to the configuration of systems whose characteristics are known from catalogues, for example in the design of industrial plants and civil facilities. The validity of the study is limited to the search for products with well-known features.

**Conclusions**

When the supply strategy is shared and the limits on the quantitative product features are defined effectively, once a consolidated know-how on the item e-scouting process is collected, human behaviour coincides with that of a virtual decision maker that incorporates the steps of the AHP, but only if the supply strategy and the selection criteria and sub-criteria are strictly defined. When this is not the case, human scouting can be inefficient and misleading. The result of the study confirms the necessity of automating the process of scouting and selection of products whose technical features are known from catalogues and expressed according to a common semantics. The overall performance improvement of the procurement process is therefore achievable through: (a) the validation of information; (b) the semantic homologation of the product features; (c) the strengthening of informatics tools by the integration of holistic decision-making methodologies (AHP). The deepening of the analysis of product scouting and selection can lead the development of supply management systems to obtain the following benefits: (a) an informative flow handled by artificial intelligence (e.g. improving the potential of the Semantic Web); (b) the automated implementation of the basic steps of the AHP methodology where, due to limited resources, this is otherwise not possible (Small and Medium Enterprises); (c) a decrease in subjective judgments, devoting the human contribution to cases where it represents an opportunity rather than a limit.

References

[1] Kraljic, P. (1983) Purchasing must become supply management. Harvard Business Review 61 (5), 109–117.
[2] Saaty, T.L. (1990) How to make a decision: The analytic hierarchy process. European Journal of Operational Research 48 (1), 9–26.
[3] Robinson, P.J., Faris, C.W. and Wind, Y. (1967) Buying Behaviour and Creative Marketing. Boston, Allyn & Bacon.
[4] De Boer, L., Van der Wegen, L. and Telgen, J. (1998) Outranking methods in support of supplier selection. Eur. J. Pur. Supp. Manag. 4, pp. 109–118.
[5] Weber, C.A., Current, J.R. and Benton, W.
C. (1991) Vendor selection criteria and methods. European Journal of Operational Research 50 (1), 2–18.
[6] De Boer, L., Labro, E. and Morlacchi, P. (2001) A review of methods supporting supplier selection. Eur. J. Pur. Supp. Manag. 7, pp. 75–89.
[7] Johnston, W.J. and Lewin, J.E. (1996) Organizational buying behavior: toward an integrative framework. J. Busi. Res. 35, pp. 1–15.
[8] Saaty, T.L. and Vargas, L.G. (2001) Models, Methods, Concepts & Applications of the Analytic Hierarchy Process. Boston, Kluwer.
Designing RC Snubber Networks

Snubbers are any of several simple energy-absorbing circuits used to eliminate the voltage spikes caused by circuit inductance when a switch, either mechanical or semiconductor, opens. The object of the snubber is to eliminate the voltage transient and ringing that occur when the switch opens, by providing an alternate path for the current flowing through the circuit's intrinsic leakage inductance.

Snubbers in switchmode power supplies provide one or more of these three valuable functions:

- Shape the load line of a bipolar switching transistor to keep it in its safe operating area.
- Remove energy from a switching transistor and dissipate the energy in a resistor to reduce junction temperature.
- Reduce ringing to limit the peak voltage on a switching transistor or rectifying diode and to reduce EMI by reducing emissions and lowering their frequency.

The most popular snubber circuit is a capacitor and a series resistor connected across a switch. Here's how to design that ubiquitous RC snubber:

**Component Selection:** Choose a resistor that's noninductive. A good choice is a carbon composition resistor. A carbon film resistor is satisfactory unless it's trimmed to value with a spiral abrasion pattern. Avoid wirewound resistors because they are inductive. Choose a capacitor that can withstand the stratospherically high peak currents in snubbers. For capacitance values up to 0.01 μF, look first at dipped mica capacitors. For higher capacitance values, look at the Type DPP radial-leaded polypropylene film/foil capacitors. The axial-leaded Type WPP is as good, except for the higher inductance intrinsic to axial-leaded devices. The highest Type DPP rated voltage is 630 Vdc and the highest Type WPP voltage is 1000 Vdc. For higher voltages and capacitances, stay with polypropylene film/foil capacitors, choosing the case size you prefer from the Type DPFF and DPPS selections. For the smallest case size, choose Type DPPM or DPMF, but realize that these types include floating metallized film as common foils to achieve their small size. The use of metallized film reduces the peak current capability to between a third and a fifth of that of the other high-voltage choices.

The selection process is easy in this catalog: peak current and rms current capabilities are provided with the capacitance ratings. The peak current capability is the dV/dt capability times the nominal capacitance. The rms current capability is the lower of the current which causes the capacitor to heat up 15°C and the current which causes the capacitor to reach its rated AC voltage. We've included dV/dt capability tables to allow you to compare CDE snubber capacitors to other brands. Dipped mica capacitors can withstand dV/dts of more than 100,000 V/μs for all ratings, and Type DPPs can withstand more than 2000 V/μs. For high-voltage snubbers, Types DPFF and DPPS handle more than 3000 V/μs, and Types DPMF and DPPM more than 1000 V/μs. See the table for values according to case length.
Assuming that the source impedance is negligible (the worst-case assumption), the peak current for your RC snubber is:

\[ I_{pk} = \frac{V_0}{R_s} \]

where \( V_0 \) is the open-circuit voltage, \( R_s \) the snubber resistance, and \( C_s \) the snubber capacitance. The peak dV/dt is:

\[ \left(\frac{dV}{dt}\right)_{pk} = \frac{V_0}{R_s C_s} \]

For a sinewave excitation voltage, the rms current in amps is the familiar:

\[ I_{rms} = 2\pi f C V_{rms} \times 10^{-6} \]

where \( f \) is the frequency in Hz, \( C \) the capacitance in μF, and \( V_{rms} \) the voltage in Vrms. For a squarewave you can approximate the rms and peak currents as:

\[ I_{rms} = \frac{C V_{pp}}{0.64\sqrt{tT}} \qquad I_{pk} = \frac{C V_{pp}}{0.64\, t} \]

where \( V_{pp} \) is the peak-to-peak voltage in volts, \( t \) the pulse width in μs, and \( T \) the pulse period in μs.

**Other Capacitor Types:** Here's a last word on capacitor choice to help you venture out on your own into the uncharted territory of capacitors that are not specified for use in snubbers and are not in this section. Realize that metallized film types and high-K ceramic types have limited peak-current and transient-withstanding capability, on the order of 50 to 200 V/μs. Polyester has 15 times the loss of polypropylene and is fit only for low rms currents or duty cycles. And be sure to take voltage and temperature coefficients into account. While a mica's or a Type DPP's capacitance is nearly independent of voltage and temperature, a high-K ceramic dielectric like Y5V can, by comparison, lose ¼ of its capacitance from room temperature to 50°C (122°F) and lose another ¼ from zero volts to 50% of rated voltage.

**Quick Snubber Design:** Where power dissipation is not critical, there is a quick way to design a snubber. Plan on using a 2-watt carbon composition resistor. Choose the resistor value so that the same current can continue to flow, without voltage overshoot, after the switch opens and the current is diverted into the snubber. Measure or calculate the voltage across the switch after it opens and the current through it at the instant before the switch opens. For the current to flow through the resistor without requiring a voltage overshoot, Ohm's law says the resistance must be:

\[ R \leq \frac{V_o}{I} \]

where \( V_o \) is the off voltage and \( I \) the on current. The resistor's power dissipation is independent of the resistance \( R \), because the resistor dissipates the energy stored in the snubber capacitor, \( \frac{1}{2} C_s V_o^2 \), on each voltage transition regardless of the resistance. Choose the capacitance to cause the 2-watt resistor to dissipate half of its power rating, one watt. For \( 2 f_s \) transitions per second, the resistor will dissipate one watt when:

\[ 1 = \left( \frac{1}{2} C_s V_o^2 \right) (2 f_s) \qquad \Rightarrow \qquad C_s = \frac{1}{V_o^2 f_s} \]

where \( f_s \) is the switching frequency. As an illustration, suppose that you have designed a switchmode converter and you want to snub one of the transistor switches. The switching frequency is 50 kHz and the open-switch voltage is 160 Vdc with a maximum switch current of 5 A. The resistor value must be:

\[ R \leq \frac{160}{5} = 32\ \Omega \]

and the capacitance value is:

\[ C_s = \frac{1}{(160)^2 (50 \times 10^3)} = 780 \text{ pF} \]
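As a cross-check of the "Quick" example above, the two design formulas are easy to evaluate; the sketch below simply reproduces the 32 Ω / 780 pF result (values taken from the example, rounded as in the text):

```python
# Quick RC snubber sizing: R <= Vo/I, Cs = 1/(Vo^2 * fs).
Vo = 160.0       # open-switch voltage, V
I  = 5.0         # switch current just before turn-off, A
fs = 50e3        # switching frequency, Hz

R_max = Vo / I                      # -> 32 ohms
Cs    = 1.0 / (Vo**2 * fs)          # -> ~780 pF
P     = 0.5 * Cs * Vo**2 * 2 * fs   # dissipation check -> ~1 W

print(f"R <= {R_max:.0f} ohm, Cs = {Cs*1e12:.0f} pF, P = {P:.2f} W")
```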
**Optimum Snubber Design:** For optimum snubber design using the AC characteristics of your circuit, first determine the circuit's intrinsic capacitance and inductance. Suppose you were designing a snubber for the same transistor switch as in the "Quick" example. On a scope, note the ringing frequency of the voltage transient when the transistor turns off. Next, starting with a 100 pF mica capacitor, increase the capacitance across the transistor in steps until the ringing frequency is half of the starting frequency. The capacitance you have added in parallel with the transistor's intrinsic capacitance has now increased the total capacitance by a factor of four, as the ringing frequency is inversely proportional to the square root of the circuit's inductance-capacitance product:

\[ f_o = \frac{1}{2\pi \sqrt{LC}} \]

So the transistor's intrinsic capacitance, \( C_i \), is \( \frac{1}{3} \) of the added capacitance, and the circuit inductance, from the above equation, is:

\[ L_i = \frac{1}{C_i (2\pi f_i)^2} \]

where \( f_i \) is the initial ringing frequency, \( C_i \) the intrinsic capacitance (added capacitance / 3), and \( L_i \) the intrinsic inductance.

When the transistor switch opens, the snubber capacitor looks like a short to the voltage change, and only the snubber resistor is in the circuit. Choose a resistor value no larger than the characteristic impedance of the circuit, so that the inductive current to be snubbed can continue unchanged, without a voltage transient, when the switch opens:

\[ R = \sqrt{\frac{L_i}{C_i}} \]

You may need to choose an even smaller resistance to reduce voltage overshoot. The right resistance can be as little as half the characteristic impedance, for better damping of the intrinsic LC circuit. The power dissipated in the resistor is the energy in the capacitance, \( \frac{1}{2} C_s V_o^2 \), times the switching frequency, \( f_s \), times the number of voltage transitions per cycle. For example, if your circuit is a half-bridge converter, there are two voltage transitions per cycle and the power in the resistor is:

\[ P_r = C_s V_o^2 f_s \]

where \( C_s \) is the snubber capacitance, \( V_o \) the off voltage, and \( f_s \) the switching frequency.

Choose a snubber capacitance value which meets two requirements: 1) it can provide a final energy storage greater than the energy in the circuit inductance,

\[ \frac{1}{2} C_s V_o^2 > \frac{1}{2} L_i I^2 \qquad \Rightarrow \qquad C_s > \frac{L_i I^2}{V_o^2} \]

where \( I \) is the on-state (closed-switch) current; and 2) it produces a time constant with the snubber resistor that is small compared to the shortest expected on-time of the transistor switch:

\[ R C_s < t_{on}/10 \qquad \Rightarrow \qquad C_s < \frac{t_{on}}{10 R} \]

Choosing a capacitance near the low end of the range reduces the power dissipated in the resistor, while choosing a capacitance 8 to 10 times the intrinsic capacitance, \( C_i \), almost completely suppresses the voltage overshoot at switch turn-off. Try a capacitance at the low end of the range as the initial value and increase it later if needed.

Now revisit the "Quick" example with the added data permitting "Optimum" design. You've taken some more measurements on your switchmode converter: the ringing frequency of the voltage transient when the transistor switch opens is 44 MHz, and an added parallel capacitance of 200 pF reduces the ringing frequency to 22 MHz. The switching frequency is 50 kHz with a 10% minimum duty cycle, and the open-switch voltage is 160 Vdc with a maximum switch current of 5 A.
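Before walking through the hand calculation below, the same numbers can be reproduced with a few lines of code; this sketch is only a cross-check of the worked example, not part of the original note:

```python
import math

# "Optimum" snubber sizing from the measured data above.
f_r   = 44e6          # initial ringing frequency, Hz
C_add = 200e-12       # added capacitance that halved the ringing, F
f_s   = 50e3          # switching frequency, Hz
t_on  = 0.1 / f_s     # minimum on-time (10% duty cycle) -> 2 us
Vo, I = 160.0, 5.0    # off voltage, V; on current, A

C_i = C_add / 3                        # intrinsic capacitance -> ~67 pF
L_i = 1 / (C_i * (2*math.pi*f_r)**2)   # intrinsic inductance  -> ~0.196 uH
R   = math.sqrt(L_i / C_i)             # characteristic impedance -> ~54 ohm

Cs_min = L_i * I**2 / Vo**2            # energy requirement  -> ~192 pF
Cs_max = t_on / (10 * R)               # time-constant limit -> ~3700 pF
Cs     = 220e-12                       # standard value near the low end

P_r = Cs * Vo**2 * f_s                 # two transitions per cycle -> ~0.28 W
print(f"R = {R:.0f} ohm, {Cs_min*1e12:.0f} pF < Cs < {Cs_max*1e12:.0f} pF, "
      f"P_r = {P_r:.2f} W")
```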
So you know the following:

$$f_r = 44 \text{ MHz}$$
$$C_i = \frac{200}{3} = 67 \text{ pF}$$
$$f_s = 50 \text{ kHz}$$
$$t_{on} = 0.1/(50 \times 10^3) = 2\ \mu s$$
$$V_o = 160 \text{ Vdc}$$
$$I = 5 \text{ A}$$

And calculate the circuit inductance:

$$L_i = \frac{1}{(67 \times 10^{-12})(2\pi \cdot 44 \times 10^6)^2} = 0.196\ \mu H$$

And the snubber resistor value:

$$R = \sqrt{\frac{0.196 \times 10^{-6}}{67 \times 10^{-12}}} \approx 54\ \Omega$$

Before you can calculate the resistor power dissipation, you must first choose the snubber capacitance:

$$\frac{L_i I^2}{V_o^2} < C_s < \frac{t_{on}}{10 R}$$

$$\frac{(0.196 \times 10^{-6})(5)^2}{(160)^2} < C_s < \frac{2 \times 10^{-6}}{(10)(54)}$$

$$192 \text{ pF} < C_s < 3700 \text{ pF}$$

Since the power dissipation in the resistor is proportional to the capacitance, choose a standard capacitance value near the low end of the above range. For a 220 pF capacitor and two transitions per cycle, the power dissipation in the resistor is:

$$P_r = (220 \times 10^{-12})(160)^2(50 \times 10^3) = 0.28 \text{ W}$$

Comparing the "Quick" design to the "Optimum" design, you see that for the same converter switch the required snubber resistor's power capability was reduced by a factor of about 3.5, from 1 W to 0.28 W, and the snubber capacitance was likewise reduced by a factor of 3.5, from 780 pF to 220 pF. This was possible because the additional circuit measurements revealed that the characteristic impedance was actually 54 Ω rather than 32 Ω, and that the circuit inductance permitted a smaller capacitance to swallow the circuit's energy. Usually the "Quick" method is completely adequate for final design. Start with the "Quick" approach to prove your circuit breadboard, and go on to the "Optimum" approach only if power efficiency and size constraints dictate the need for optimum design.

**NOTE:** For more on RC snubber design, for RCD snubber design, and for snubber design using IGBT snubber modules, get the application note, "Design of Snubbers for Power Circuits," at www.cde.com

Notice and Disclaimer: All product drawings, descriptions, specifications, statements, information and data (collectively, the "Information") in this datasheet or other publication are subject to change. The customer is responsible for checking, confirming and verifying the extent to which the Information contained in this datasheet or other publication is applicable to an order at the time the order is placed. All Information given herein is believed to be accurate and reliable, but it is presented without any guarantee, warranty, representation or responsibility of any kind, expressed or implied. Statements of suitability for certain applications are based on the knowledge that the Cornell Dubilier company providing such statements ("Cornell Dubilier") has of operating conditions that such Cornell Dubilier company regards as typical for such applications, but are not intended to constitute any guarantee, warranty or representation regarding any such matter – and Cornell Dubilier specifically and expressly disclaims any guarantee, warranty or representation concerning the suitability for a specific customer application, use, storage, transportation, or operating environment. The Information is intended for use only by customers who have the requisite experience and capability to determine the correct products for their application.
European Commission Seventh Framework Programme
Theme ICT-1-1.4 (Secure, dependable and trusted infrastructures)

ICT-216026-WOMBAT
Worldwide Observatory of Malicious Behaviors and Attack Threats

Requirements Analysis and Specification

| Workpackage: | WP2 |
|--------------|-----|
| Deliverable: | D05 (D2.3) |
| Date of delivery: | 30/06/2008 |
| Version: | Final |
| Responsible: | NASK |
| Authors: | NASK with contribution from: FORTH, TUV, VUA, POLIMI, FT, EURECOM, HISPASEC, SYMANTEC |
| Data included from: | FORTH, TUV, VUA, POLIMI, FT, EURECOM, HISPASEC, SYMANTEC |
| Contact: | email@example.com |
| | firstname.lastname@example.org |

Executive Summary

This document outlines the requirements for early warning systems built on technology provided by the WOMBAT project, setting out both functional and non-functional requirements. The collected requirements reflect the identified user needs and the key directions to be followed within the research and development Work-packages (WP3-Data Collection and Distribution, WP4-Data Enrichment and Characterization, WP5-Threat Intelligence). The document starts from an assessment of user requirements gathered from potential users, including external participants in the Closed Workshop and the WOMBAT development group. This part covers the expectations of distinct classes of data users, such as security vendors, malware researchers, ISPs, CERT teams, government, financial institutions and home users. It details the requirements for the system architecture, data and system functions, and specifies the performance, availability and security features needed to provide sufficient functionality. It also defines user interface, testing and configuration management requirements.

# TABLE OF CONTENTS

1 INTRODUCTION
1.1 Scope
1.2 Requirements Taxonomy
1.3 Requirements Prioritization
1.4 Document Overview
2 GENERAL INFORMATION
2.1 Users Characteristics
2.2 Input Systems
2.3 Assumptions, Dependencies and Constraints
3 DATA CONSUMERS REQUIREMENTS
3.1 Security Vendors and Malware Researchers
3.2 Internet Service Providers
3.3 CERTs
3.4 Banks
3.5 Government
3.6 Business Users (Network and Systems Managers) / Administrators
3.7 General Public
4 FUNCTIONAL AND DATA REQUIREMENTS
4.1 Data Collection and Distribution
4.1.1 Architecture of the Infrastructure
4.1.2 Data Sensors Design and Deployment
4.1.3 Input Data and Information
4.1.4 Data Repository
4.2 Data Enrichment and Characterization
4.3 Threats Intelligence
4.4 Data Output
5 NON-FUNCTIONAL REQUIREMENTS
5.1 System Environment
5.2 Integration with Other Systems
5.3 System Performance
5.4 Reliability and Availability
5.5 Security and Privacy
5.6 Usability
5.7 Scalability
6 USER INTERFACE
6.1 API Design
6.2 Data Displaying and Graphical Visualisation
7 TESTING AND EVALUATION
8 CONFIGURATION MANAGEMENT
APPENDIX A
REFERENCES

1 INTRODUCTION

1.1 Scope

This is the requirements specification document for the WOMBAT system. Its purpose is to provide a collection of statements that form the research directions for the WOMBAT project.
The requirements specified here are based on several inputs: the Description of Work (DoW) document [5], the outcome of the Closed Workshop (April 21-22, Amsterdam), input from several informal discussions among the project consortium, as well as opinions and expectations of potential WOMBAT users (i.e. ISPs, CERTs, antivirus companies, security researchers, security-conscious organizations and home users). This document covers user, functional, data and non-functional requirements, as well as testing and configuration management requirements. At the same time, this document is intended to specify a kind of “road map” that gives descriptive research directions for the project. Therefore, this document is not intended to supersede the DoW, but to serve as a reminder of potential consumer expectations. It will help us to focus on providing solutions that best address the ideal functionality expected of such a system.

1.2 Requirements Taxonomy

1.3 Requirements Prioritization

For the purpose of requirements prioritization, within the entire document we distinguish the following classes of requirements:
- ESSENTIAL
- DESIRABLE
- OPTIONAL

This document specifies research solution requirements as a combination of previously agreed requirements specified in the DoW document (as deliverables) and a collection of expectations of potential WOMBAT users, including comments received from the participants of the Closed Workshop in Amsterdam (April 21-22). We will interpret the above classes of requirements as follows. The “ESSENTIAL” requirements are critical points that must be accomplished by an operational system based on WOMBAT-like components. In this spirit, requirements specified according to the DoW are obligatory. However, for the remaining “ESSENTIAL” requirements, given the time and cost constraints of such a research project, we envision that some of the components developed or the overall integration may not meet these requirements, particularly where stability and performance are concerned. Any requirement classified as “DESIRABLE” would enhance the system, but is not essential for the project. An “OPTIONAL” requirement is facultative and may be addressed if resources permit.

Note: Requirements taken directly from the DoW document are shaded.

1.4 Document Overview

This section provides an overview of the entire document. This document describes data, functional and behavioral requirements for the system to be developed within the WOMBAT project. This document is structured as follows. Chapter 1 gives brief information about the scope of this document; it also provides the taxonomy of requirements and the requirements prioritization method. Chapter 2 provides characteristics of the main end users of the new system; this chapter also lists existing systems that will provide the main data input for the system, and specifies assumptions and some constraints on the system development. Chapter 3 characterizes the targeted audience and their expectations of the new system, formalized as user requirements. Chapter 4 defines data requirements as well as operational and functional requirements for the activity of the system, including: the system architecture, requirements for the design and deployment of its sensors, the types of information that have to be acquired from different kinds of sensors, requirements for the data repository, and requirements for the results of the data enrichment and threat intelligence processes. Chapter 5 specifies non-functional requirements, i.e.
constraints on the system design and implementation. Chapter 6 specifies requirements for the API of the WOMBAT system. Chapter 7 proposes requirements for system testing and evaluation. Chapter 8 describes requirements for WOMBAT configuration management.

2 GENERAL INFORMATION

2.1 Users Characteristics

The main end users of the WOMBAT system will include:
- **Security vendors**: provide auditing and consulting services, provide and develop anti-malware and other computer security solutions and tools (products), perform malware and vulnerability analysis, collect malware; (represented by Symantec, Hispasec Sistemas)
- **ISPs**: provide consumers or businesses with access to the Internet and related services; provide web hosting, domain name registration, collocation and Internet transit; also provide security services to their customers; (represented by France Telecom, NASK)
- **CERT teams**: respond to security incidents occurring on the Internet, cooperate with other CERTs and ISPs, provide a secure contact point to report an incident, analyze the state of Internet security, provide incident reaction and prevention, and provide security information and warnings as well as education and training; (represented by NASK/CERT Polska)
- **Banks**: provide financial services to their customers via the Internet: banking, investment, brokering, etc.; (represented by Clearstream, EAB)
- **Government**: regulates telecommunications and the Internet, provides public information and government office services to citizens via web pages;
- **Researchers**: research and teaching (education), development of ideas related to a broad range of theoretical and practical aspects of computer security and privacy issues (Internet threat analysis, intrusion detection, cryptography, etc.); (represented by France Telecom R&D, Institut Eurécom, NASK, FORTH, Politecnico di Milano, Technical University Vienna and Vrije Universiteit Amsterdam)
- **General public**: prospective customers of most of the above users (especially security vendors, ISPs, banks and government), typical website viewers and Internet surfers.

2.2 Input Systems

**Basic sources** of information about threats and malicious events used in the WOMBAT project will include:
- **DeepSight, Symantec** The Symantec DeepSight Threat Management System and Symantec Managed Security Services [4] consist of more than 40,000 sensors monitoring network activity in more than 180 countries and comprehensively track attack activity across the entire Internet. Additionally, Symantec gathers malicious code data along with spyware and adware reports from over 120 million client, server, and gateway systems that have deployed Symantec’s antivirus products and opted into sharing such reports within the agreed upon terms of privacy and anonymization. Some information delivered by DeepSight will be shared within the WOMBAT project through an XML-based proprietary API.
- **Leurré.com (SGNET), Institut Eurécom** The Leurré.com project [8] operated by Institut Eurécom is based on a broad network of honeypots covering more than 30 countries. The architecture consists of a distributed network of low-interaction honeypots (based on honeyd), medium-interaction honeypots (based on ScriptGen technology, used to enrich the network conversations with the attackers) and a central server. Each partner’s honeypots monitor three unused IP addresses. All traces captured on each platform are uploaded on a daily basis into a central relational database.
Some data delivered by the system will be provided to WOMBAT.
- **Argos, VUA** Argos [3] is a full, secure system emulator designed for use in honeypots. It is based on Qemu, an open source emulator that uses dynamic translation to achieve a fairly good emulation speed. Argos extends Qemu to enable it to detect remote attempts to compromise the emulated guest operating system. Using dynamic taint analysis, it tracks network data throughout execution and detects any attempt to use it in an illegal way. When an attack is detected, the memory footprint of the attack is logged. Activities in Argos sensors can be captured and analyzed in the context of WOMBAT.
- **Honey@Home, FORTH** Honey@Home [6] is a honeypot-based system for gathering and analyzing information on cyber-attacks that uses home users’ hosts as sensors. It is designed to be easy to manage and lightweight in its use of system resources. Honey@Home forwards traffic destined for unused IP addresses or ports of the home user’s host to a honeypot farm and forwards the replies back to the attacker. It runs in the background of a home user’s computer.
- **Anubis (Analyzing Unknown Binaries), International Secure Systems Lab (Vienna University of Technology, Eurecom France, UC Santa Barbara)** Anubis [1] is a tool for analyzing the behavior of Windows executables with a special focus on the analysis of malware. To this end, the binary executable is run in an emulated environment and its (security-relevant) actions are monitored. The generated report includes detailed data about modifications made to the Windows registry or the file system and about interactions with the Windows Service Manager or other processes, and it logs all generated network traffic. In the context of WOMBAT, this tool will help to characterize malware and will be useful in the process of Threat Intelligence.
- **ARAKIS, NASK / CERT Polska** ARAKIS [2] is a nationwide early warning system built by CERT Polska that collects and correlates data from a wide variety of sources including low-interaction honeypots, firewalls, antivirus systems and darknets. The system is oriented towards detection and characterization of new threats based on the automated analysis of captured honeypot payloads and supporting data from other sources. Activity observed by ARAKIS’s sensors and analysis performed by the system can be partially used by WOMBAT via an API.
- **VirusTotal, Hispasec** The VirusTotal project [9] offers a free service for scanning suspicious files using several antivirus engines. Companies, institutions, organizations and individuals can submit malware samples that are scanned by the VirusTotal service using more than 30 different antivirus products. The number of malware samples received by the service exceeds twenty thousand per day. This malware collection will be very useful for WOMBAT.

2.3 Assumptions, Dependencies and Constraints

There are a number of critical issues which must either be considered during the early stages of the project development or addressed at a later phase to make reaching the goals of the project feasible. They include:
- **Data collection, access to data, requests for data** Data collection is the starting point for further analysis. One potential major obstacle may be the lack of completeness of the collected data. This may be the result of (a) legal issues that force original sources to anonymize parts of the data offered, (b) weaknesses of the original data collection systems or (c) problems related to coverage.
One concern is where to put the collection points for good coverage. Also, collection points may be detected by attackers and avoided. Moreover, using sensors that operate only on honeypots (regardless of whether they are server-side or client-side) may result in a failure to detect and collect attacks, as honeypot configuration is usually different from that of production systems. User behavior may also be a critical factor in making an attack feasible, something that is not possible to reproduce on a honeypot system. Although aggregated and anonymized data would be valuable and easier to share, there is a strong need for raw data to enable meaningful research, especially for the purpose of scientific and multi-perspective analysis. Sharing data is the core problem. For global collaboration within the project, there is a need to establish both formal and semi-formal agreements between institutions (or individuals) that will cover privacy and legal issues concerning exchanging data and using it. In this case, any limitations on the use of the data concerning privacy issues should be identified and documented.
- **System architecture** The basic structure and type of the WOMBAT architecture proposed in Section 4.1 will evolve, with changes depending on the data shared in the system. In the data collection infrastructure some data will be held centrally (under a central agreement), while other data will be held by individual organizations under separate agreements. Centralization or standardization of formats would be difficult to introduce. A common API requires a compromise between safety and flexibility. For the system to scale in the future, standardization is necessary; however, a ‘start small, think big’ approach is more practical. Scalability is in fact a critical issue at all levels of the WOMBAT system, as the wealth of collected information will be huge, making analysis time-consuming and resource-intensive.
- **Technology and tools** To obtain meaningful research results, there is a need for adequate analysis techniques. The lack of advanced technology and appropriate analysis tools would limit the scope of research and the ability to obtain meaningful results. Code analysis is a hard problem. Also, the true actors behind malicious activity may hide behind layers of indirection that make it difficult to get to them. A major challenge is to provide a translation mechanism to interpret the data collected in WP3 in the model of WP4. It will be important to address the completeness and soundness of the translation. Other questions must also be taken into account, including whether the translation can be fully automated or requires some configuration. One of the major assumptions for the WOMBAT system is that most possible behavior patterns can be reliably extracted from malware, so that subsequent algorithms get to operate on meaningful data. Problems here may include anti-analysis methods employed in malware, such as the detection of an emulator. Resilience to such methods should be taken into account at an early stage of system development. At the data collection level, a specific system configuration or user interaction may make malware function in a different manner, making the extraction of such behaviors difficult in an artificial environment.
- **Testing** Testing methods and tools should be created for independent assessment of security software.
Most tests, in particular those of AV products, are not really independent, which makes the results less trustworthy and quite variable. Moreover, current test procedures simply confront the products with a huge pool of known malware and deliver percentages, thereby solely assessing signature-based detection. A more complete assessment should consider additional aspects, such as resilience to unknown malware; this also applies to the results of the project [7].

3 DATA CONSUMERS REQUIREMENTS

This chapter characterizes the targeted audience and their expectations of the WOMBAT system. Among the project’s targeted audience there are several distinct groups (such as security vendors, malware researchers, ISPs, CERTs, banks, governments and others) with different expectations of the system. The following requirements, which are specified separately for each distinct group of data users, are based on the presumed usage of the new system. They are also the basis for the specification of most of the functional (Chapter 4) and partially also of the non-functional requirements (Chapter 5). Among the functional requirements, Section 4.1 defines the various kinds of data collected from different sources; these correspond to the data requirements explicitly listed in this chapter and/or provide the basis for further analysis. Section 4.2 specifies the new system functionality which satisfies many of the requirements listed below that are not satisfied by raw data and basic statistics. Section 4.3 refers to the most advanced requirements defined below. Requirements included in this chapter which do not request specific KID (Knowledge, Information, Data) concern presentation and address data output (Section 4.4) as well as some of the non-functional requirements. Requirements included in the following tables reflect the presumed expectations of potential users of the WOMBAT system. These requirements are the result of the WOMBAT consortium members’ research, informal interviews with representatives of some of the user groups and input from the closed WOMBAT workshop.

3.1 Security Vendors and Malware Researchers

Table U1. Requirements for the WOMBAT system from the point of view of security vendors and malware researchers

| NO. | USER REQUIREMENTS DESCRIPTION | PRIORITY |
|-----|-------------------------------|----------|
| [U1-1] | Providing access to malware samples (for selected users only) and enabling sharing of such samples based on agreed upon access procedures. | ESSENTIAL |
| [U1-2] | For a given malware sample, providing any available metadata, including descriptions, geographical statistics, time, etc., as well as, when available, analysis logs with recorded system calls, their arguments and the collection point (for malware analysis) | ESSENTIAL |
| [U1-3] | Allowing for automated signature generation methods | OPTIONAL |
| [U1-4] | Acting as part of the user's threat collection infrastructure | OPTIONAL |
| [U1-5] | Allowing for the possibility of searching for information about a given malware sample based on basic characteristics (MD5/SHA, file length, simple behavioral characteristics like port numbers, etc.) | ESSENTIAL |
| [U1-6] | Allowing for threat intelligence analysis enabling identification of root causes of attacks and prediction of attack vector changes | DESIRABLE |
| [U1-7] | Providing feedback about any identified false positives to the original information source | OPTIONAL |

### 3.2 Internet Service Providers
**Table U2. Requirements for the WOMBAT system from the point of view of ISPs**

| NO. | USER REQUIREMENTS DESCRIPTION | PRIORITY |
|-----|-------------------------------|----------|
| [U2-1] | Providing information about current threats useful from the point of view of client support (for example customer call centers), particularly results of semi-automatic malware analysis, including but not limited to infection symptoms, known malware removal procedures, information about patches blocking the vulnerabilities used by the threat, and workarounds if patches are not available | ESSENTIAL |
| [U2-2] | Providing threat signatures to enable filtering of known malicious traffic | DESIRABLE |
| [U2-3] | Enabling users to place sensors within their own networks to observe statistics of attacks that threaten their own customers and to gain additional knowledge about those threats from the threat analysis and intelligence carried out by the system | OPTIONAL |
| [U2-4] | Providing port activity and other statistics, if collected (netflow records, ...) | ESSENTIAL |
| [U2-5] | Providing pro-active protection measures and self-care cleaning support for ISP customers | DESIRABLE |
| [U2-6] | Measuring and assessing the impact of the threat on real-time traffic such as VoIP and IPTV | DESIRABLE |
| [U2-7] | Measuring and assessing the impact of the threat on the networking infrastructure (routers, switches, firewalls, DSLAMs, BRAS, SBCs, ...) and associated services (RADIUS, DNS, DHCP, ...) | DESIRABLE |
| [U2-8] | Measuring and assessing the impact of the threat on boxes (e.g. ADSL home routers) and terminals (e.g. phones) | DESIRABLE |

### 3.3 CERTs

**Table U3. Requirements for the WOMBAT system from the point of view of CERTs**

| NO. | USER REQUIREMENTS DESCRIPTION | PRIORITY |
|-----|-------------------------------|----------|
| [U3-1] | Providing information about attacks and malware originating from or targeting the IP range of the CERT constituency | ESSENTIAL |
| [U3-2] | Providing threat signatures accessible in an automated way (API) | OPTIONAL |
| [U3-3] | Providing threat statistics, including port activity | ESSENTIAL |
| [U3-4] | Providing information about any significant correlations identifying groups operating in the user's area | DESIRABLE |
| [U3-5] | Providing early warning about newly identified threats, both malware and exploits | DESIRABLE |
| [U3-6] | Enabling tracking of the activity of malicious groups using the threat intelligence capability and enhancing digital forensics results with correlated results from other analyses | OPTIONAL |
| [U3-7] | Supporting information exchange with peers (other CERTs, FIRST, ...) | DESIRABLE |

### 3.4 Banks

**Table U4. Requirements for the WOMBAT system from the point of view of Financial Institutions**

| NO. | USER REQUIREMENTS DESCRIPTION | PRIORITY |
|-----|-------------------------------|----------|
| [U4-1] | Providing information about any malware specifically targeting the user or his clients, including results of semi-automated analysis. This information will be based on profiles that would have to be supplied. Profiles could be, for example, IP ranges or domain names. | ESSENTIAL |
| [U4-2] | Providing information about any known phishing attempts targeting the user’s clients | DESIRABLE |
| [U4-3] | Providing information about any correlations between different malicious activities identifying groups engaging in phishing targeting banks and/or credit card number theft, and warning the user about any newly identified activity of such groups | OPTIONAL |
| [U4-4] | Providing threat signatures for the user’s network devices, most importantly including signatures of threats against HTTP servers used in e-banking | OPTIONAL |
| [U4-5] | Enabling checking for malware-infected or otherwise suspicious pages on the user’s site; active alerting is preferred | DESIRABLE |
| [U4-6] | Enabling finding other banks targeted by the same group to allow joint action against the attack | OPTIONAL |
| [U4-7] | Providing information about new vectors of attack | DESIRABLE |

### 3.5 Government

**Table U5. Requirements for the WOMBAT system from the point of view of Government**

| NO. | USER REQUIREMENTS DESCRIPTION | PRIORITY |
|-----|-------------------------------|----------|
| [U5-1] | Providing information about malicious behavior targeting the national security of any country (cyber-terrorism) | DESIRABLE |
| [U5-2] | Providing proper authentication of data sources | ESSENTIAL |
| [U5-3] | Providing available information about groups behind the malware, including probable locations (using information such as IP address statistics, languages, etc.) | DESIRABLE |
| [U5-4] | Enabling checking for any known phishing attempts or malware infections on governmental sites | DESIRABLE |
| [U5-5] | Providing information about malware and attack attempts against government sites. Government institutions interested in getting notifications should provide the project with patterns to be detected. | DESIRABLE |
| [U5-6] | Allowing reports about general trust in e-commerce, general levels of phishing attempts, ID theft, etc. | DESIRABLE |
| [U5-7] | Providing information about new vectors of attack | DESIRABLE |

4 FUNCTIONAL AND DATA REQUIREMENTS

In this chapter, we define data requirements as well as operational and functional requirements for the activity of the WOMBAT system. The first three sections of this chapter correspond directly to Workpackages WP3, WP4 and WP5 of the WOMBAT DoW document. The last section of this chapter refers to the output of the WOMBAT system, i.e. the information that is expected to be generated as a result of the research carried out.

4.1 Data Collection and Distribution

This section refers to the WP3 Work-package and describes the type of the WOMBAT architecture, the requirements for the design and deployment of its sensors, and the kinds of data to be acquired from the sensors. The objective is to improve malware sample collection.

4.1.1 Architecture of the Infrastructure

The WOMBAT infrastructure will be based on the various existing sensors developed within previous projects, including honeypots (Leurre.com, VirusTotal, NoAH) and attack detection systems (DeepSight Threat Management System, ARAKIS) (see Section 2.2, basic sources), as well as new sensors implemented and deployed within the WOMBAT project. Existing sensors are, however, mostly passive ones, while new sensors will include mid-interaction honeypots, such as web crawlers that actively seek malware on the Internet, and ScriptGen/Argos solutions. New sensors will also include wireless and Bluetooth sensors.
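Data arriving from such a mix of passive and active sensors will be highly heterogeneous. As a purely illustrative sketch (Python; every field name, function and log format below is hypothetical and not part of any WOMBAT specification or DoW deliverable), one way the collection layer could normalize alerts from different sensor families into a common record before central storage is:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class NormalizedAlert:
    """Minimal common record for heterogeneous sensor alerts (illustrative only)."""
    sensor_id: str                    # which sensor produced the alert
    sensor_type: str                  # e.g. "honeypot", "firewall", "ids", "darknet"
    observed_at: datetime             # when the event was registered
    src_ip: Optional[str] = None      # attacker address, if known
    dst_ip: Optional[str] = None      # anonymized before central storage
    protocol: Optional[str] = None
    src_port: Optional[int] = None
    dst_port: Optional[int] = None
    reason: str = "unspecified"       # why collected: AV detection, honeypot, ...
    extra: dict = field(default_factory=dict)  # sensor-specific payload/metadata

def anonymize(ip: str) -> str:
    # Placeholder policy: zero the host part. A real deployment would need a
    # documented (and possibly reversible) anonymization scheme.
    return ".".join(ip.split(".")[:2] + ["0", "0"])

def from_firewall_log(line: str, sensor_id: str) -> NormalizedAlert:
    """Adapter for a hypothetical 'epoch proto src:sport dst:dport' log line."""
    ts, proto, src, dst = line.split()
    src_ip, src_port = src.rsplit(":", 1)
    dst_ip, dst_port = dst.rsplit(":", 1)
    return NormalizedAlert(
        sensor_id=sensor_id,
        sensor_type="firewall",
        observed_at=datetime.fromtimestamp(float(ts), tz=timezone.utc),
        src_ip=src_ip,
        dst_ip=anonymize(dst_ip),
        protocol=proto,
        src_port=int(src_port),
        dst_port=int(dst_port),
        reason="suspicious source",
    )

alert = from_firewall_log("1214870400 tcp 203.0.113.7:4444 10.11.12.13:445", "fw-example-01")
print(alert.dst_ip)  # -> "10.11.0.0"
```

In such a design, each sensor family would get its own small adapter like `from_firewall_log`, so that downstream enrichment, correlation and storage code deals with only one record type; this is one possible way to approach the extensibility concern raised for the database schema below.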
The basic WOMBAT infrastructure will consist of the following elements:

**Input interface**: the interface to which sensors will feed their alert data, which will in turn be made available to authorized entities that request them. In order to preserve the architecture and functionality of the currently deployed sensors, these sensors will be offered the choice of selecting how they interface with the presentation architecture. The possible choices include:
- Email - data can be sent via email to an email address dedicated to collecting alert notifications from monitoring sensors.
- FTP/SCP - data can be uploaded to a central repository.
- Web Service - data can be uploaded via a request to a web service developed for the purpose of gathering alert data.
- HTTP - data can be uploaded by simply using HTTP requests.

For any existing sensor that has some other (reasonable) way of distributing data, the client part of the interface could additionally be developed.

**Database management system** (e.g. MySQL): to store, in a centralized way, the alert data gathered from the existing sensors. The database will hold all the data and metadata collected. Because of the heterogeneity of the data collected by the existing sensors, extra caution must be taken during the design of the database schema so that it is easily extensible. In addition, the database schema will also take into consideration the fact that new forms of data may appear during the WOMBAT project.

**Communication channel with the client applications**: this outgoing channel, as opposed to the incoming ones, will have only one form (e.g. web service, HTTP). In addition to communication, this layer will also take care of the different user roles, as described in the draft, which are home users, security vendors, ISPs, etc.

**GUI for the WOMBAT database**: the formal GUI for the database will be in the form of a web interface. It will provide the users with many visual representations of the current threats on the Internet.

4.1.2 Data Sensors Design and Deployment

The following requirements concern the deployment of existing sensors as well as the design and deployment of new sensors that will provide data to the WOMBAT system.

Table R1. Requirements concerning WOMBAT sensors

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R1-1] | There will be a specification of the interface between WOMBAT and other input (basic and external) systems (see Section 2.2). | DESIRABLE |
| [R1-2] | The communication layer between WOMBAT and existing individual sensors will be based on each sensor's native data distribution protocol (if practical and not overly complicated). For example, methods for transferring data will include: email, ftp, scp, http, etc. | DESIRABLE |
| [R1-3] | If several protocols are available for a given sensor and at least one of them is also used by other sensors in WOMBAT, then that protocol will be used to minimize implementation effort. | DESIRABLE |
| [R1-4] | New sensors will have a unified way of distributing data (e.g. web services). | DESIRABLE |
| [R1-5] | If one of the protocols used by existing sensors meets the needs of new sensors, then that protocol will become the standard. | DESIRABLE |
| [R1-6] | Some protocols are common and will be supported by WOMBAT: | |
| | a. Email - data can be sent via email to an e-mail address dedicated to collecting alert notifications from monitoring sensors. | DESIRABLE |
| | b. FTP/SCP - data can be uploaded to a central repository. | DESIRABLE |
| | c. Web Service - data can be uploaded via a request to a web service developed for the purpose of gathering alert data. | DESIRABLE |
| | d. HTTP - data can be uploaded by simply using HTTP requests. | DESIRABLE |
| [R1-7] | Sensors using other protocols will be accepted and the client part of the interface will be provided by WOMBAT, subject to practicality constraints. | OPTIONAL |
| [R1-8] | The reason why data was collected will be clearly identified by WOMBAT [AV detection \| honeypot \| suspicious-behavior \| suspicious source \| ...] | DESIRABLE |
| [R1-9] | WOMBAT will include sensors that work on production systems. | OPTIONAL |
| [R1-10] | WOMBAT will have an assessment that determines whether something developed within the project (e.g. that provides data to the repository) is sufficiently mature to be released publicly. | DESIRABLE |
| [R1-11] | **Improving tandem crawler technology** (i.e. improving the quality and capabilities of client-side threat sample collection technologies of honey-crawlers): | |
| | a. Improving metrics for suspicion in behavioral deviations between infected and clean crawlers | ESSENTIAL |
| | b. Improving metrics for confidence in benign deviations among clean crawlers | ESSENTIAL |
| | c. Refining the management framework to improve consistency of behavior among clean crawlers | ESSENTIAL |
| | d. Leveraging machine learning techniques to more effectively prioritize potentially malicious deviations | ESSENTIAL |
| | e. Leveraging machine learning to more reliably and more accurately cluster similar deviations | ESSENTIAL |
| | f. Leveraging background and side-ground technologies and infrastructures to more effectively target the crawler toward frequently malicious sites or generally benign sites as needed | ESSENTIAL |
| | g. Re-architecting the software for efficiency to compress five physical machines into virtual machines that fit within reasonable desktop computing hardware, so that the technology is easily used by partners | ESSENTIAL |

4.1.3 Input Data and Information

The WOMBAT system is intended to collect a wide diversity of data in as much detail as possible to enable meaningful, multi-perspective analysis of different Internet threats. Thus, among the data and information to be gathered from WOMBAT sensors there will be: collections of malware, logs from firewalls and IDS sensors, honeypot-based information, darknet data, alerts from early warning systems, and also (considered as a future input) information from mobile devices and RFID. The following tables define the requirements characterizing the features to be provided for particular data collections.

**Table R2. Malware Collections**

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R2-1] | Information acquired from malware collections | ESSENTIAL |
| [R2-2] | Malware files sent to WOMBAT should be packed & protected (for example, in a ZIP file with the password 'infected', an unofficial industry standard) | ESSENTIAL |
| [R2-3] | Provide metadata: hashes of the original malware file [MD5, SHA1, SHA256] (not only MD5). This information may be useful even without actual samples if enough information is present to identify the sample (hashes, filesize). | ESSENTIAL |
| [R2-4] | Provide the timestamp of collection | DESIRABLE |
| [R2-5] | Provide a source id, which should include some degree of detail (keeping it anonymized) about the source of the sample, even in general groups like 'trusted malware researcher', 'CERT', 'honeypot', 'user', etc. | ESSENTIAL |
| [R2-6] | Provide basic metadata of the sample, including the original filename, if renamed | DESIRABLE |
| [R2-7] | Provide the source / infection vector [www \| e-mail \| p2p \| document \| exploit \| bluetooth ...] | DESIRABLE |
| [R2-8] | Provide the reason for being collected [AV detection \| honeypot \| suspicious-behavior \| suspicious source \| ...] | DESIRABLE |
| [R2-9] | Provide the associated information, if any (example: AV detection [engine-name, version, malware-name], www [URL]) | DESIRABLE |
| [R2-10] | Contain extra info given by certain tools and AV products (e.g. Norman Sandbox reports, PE structure info, etc.) | OPTIONAL |
| [R2-11] | Monitor submissions for frequent repetition of the same sample, so that frequent detections can be observed, opening the chance of detecting outbreaks | OPTIONAL |
| [R2-12] | Monitor the submission system for statistical anomalies | OPTIONAL |

**Table R3. Firewalls**

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R3-1] | Information acquired from firewalls | ESSENTIAL |
| [R3-2] | Include source name and location | ESSENTIAL |
| [R3-3] | Source IP of packet, protocol, src port, dst port, timestamp. Destination IPs may be anonymized. If anonymized, the system should indicate if there is a way to reverse the anonymization process and how. | ESSENTIAL |
| [R3-4] | Source IPs will have the following information attributed to them: | |
| | a. Country location | DESIRABLE |
| | b. ISP | DESIRABLE |
| | c. Autonomous system number | DESIRABLE |
| [R3-5] | All the information will be available in near real-time | DESIRABLE |
| [R3-6] | Search capability will be available, allowing for a search with IP or time information as key parameters | DESIRABLE |

### Table R4. IDS sensors

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R4-1] | Information acquired from IDS sensors | DESIRABLE |
| [R4-2] | Include source name and location | DESIRABLE |
| [R4-3] | If malware is found and reported, it will allow for the possibility of supplying the information as specified in the Malware Collections requirements, including: | |
| | a. IPs contacted, if any (possibly used for C&C) | DESIRABLE |
| | b. Exploit used, if identified | DESIRABLE |
| [R4-4] | All the information will be available in near real-time. | DESIRABLE |
| [R4-5] | If the sensor is network-based: | |
| | a. Source IP address of the logged event, protocol, src port, dst port, timestamp. | ESSENTIAL |
| | b. A passive or active fingerprint identifying the OS of the attacker will also be present. | DESIRABLE |
| | c. Destination IPs will be anonymized | DESIRABLE |
| | d. A description of the alert, in case of a misuse-based sensor | ESSENTIAL |
| | e. The description of an alert will be machine readable in some way, in case of a misuse-based sensor | DESIRABLE |
| | f. A threat level or threat probability, in case of an anomaly-based sensor | ESSENTIAL |
| | g. Packet content considered "unusual" will be supplied (potentially new exploits), in pcap format if possible, along with flow information, in case of an anomaly-based sensor | DESIRABLE |
| | h. These packets will be screened so that their packet content does not disclose any private information, in case of an anomaly-based sensor | DESIRABLE |
| | i. If this is not possible, then such information does not need to be supplied (but it will be supplied), in case of an anomaly-based sensor | OPTIONAL |
| | j. Source IPs will have the country location information attributed to them, in case of an anomaly-based sensor | DESIRABLE |
| [R4-6] | If the sensor is host-based: | |
| | a. A description of the logged event | ESSENTIAL |
| | b. Information related to the application or process which triggered it | DESIRABLE |
| | c. A description of the target host | ESSENTIAL |
| | d. A description of the alert, in case of a misuse-based sensor | ESSENTIAL |
| | e. The description of an alert will be machine readable in some way, in case of a misuse-based sensor | DESIRABLE |
| | f. A threat level or threat probability, in case of an anomaly-based sensor | ESSENTIAL |
| | g. Activity traces considered "unusual" will be supplied (potentially new exploits) | DESIRABLE |
| | h. These activity traces will be screened so that their content does not disclose any private information, in case of an anomaly-based sensor | DESIRABLE |
| | i. If this is not possible, then such information does not need to be supplied (but it will be supplied), in case of an anomaly-based sensor | OPTIONAL |
| | j. Source IPs, if any are obtained at this level, will have the country location information attributed to them, in case of an anomaly-based sensor | DESIRABLE |

### Table R5. Honeypots

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R5-1] | Information acquired from honeypot sensors | ESSENTIAL |
| [R5-2] | Source name and location | ESSENTIAL |
| [R5-3] | Source IP of the event (packet), protocol, src port, dst port, timestamp registered by the honeypot. | ESSENTIAL |
| [R5-4] | A fingerprint identifying the OS of the attacker will also be present | DESIRABLE |
| [R5-5] | Destination IPs will be anonymized | DESIRABLE |
| [R5-6] | Source IPs will have the following information attributed to them: | |
| | a. Country location | DESIRABLE |
| | b. ISP | DESIRABLE |
| | c. Autonomous system number | DESIRABLE |
| [R5-7] | Malware found information (if any) should comply with the Malware Collections requirements and additionally: | |
| | a. IPs contacted, if any (possibly used for C&C) | DESIRABLE |
| | b. Exploit used, if identified | DESIRABLE |
| | c. Optionally any traffic conversations recorded | DESIRABLE |
| [R5-8] | Packet content considered "unusual" will be supplied (potentially new exploits), in pcap format if possible, along with flow information | DESIRABLE |
| [R5-9] | These packets will be screened so that their packet content does not disclose any private information. If this is not possible, then such information does not need to be supplied | DESIRABLE |
| [R5-10] | All the information will be available in near real-time | DESIRABLE |
| [R5-11] | Search capability will be available, allowing for a search with IP information and timestamps as key parameters | DESIRABLE |
| [R5-12] | Any other types of data, if available, such as models used for detection of threats (e.g. ScriptGen), memory dumps, traces of exchanges, ... | DESIRABLE |

### Table R6. Honeyclients

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R6-1] | Information acquired from honeyclients | ESSENTIAL |
| [R6-2] | Source name and location of the honeyclient | ESSENTIAL |
| [R6-3] | URL considered malicious or suspicious, in case of Web-served malware | ESSENTIAL |
| [R6-4] | The method by which the URL was observed, such as web crawl with specific search parameters, spam URL, user submission, other | ESSENTIAL |
| [R6-5] | Any associated exploit information, if recognized (at least a name of the exploit, if possible) | DESIRABLE |
| [R6-6] | Alert information pertaining to whether this URL successfully exploited the latest patched versions of the OS | ESSENTIAL |
| [R6-7] | Malware found information will be in compliance with the Malware Collections requirements and will additionally provide: IPs contacted, if any (possibly used for C&C) | DESIRABLE |
| [R6-8] | Associated URL information will be present: | |
| | a. Country location | DESIRABLE |
| | b. ISP | DESIRABLE |
| | c. Autonomous system number | DESIRABLE |
| | d. List of IPs seen pointing to the site | DESIRABLE |
| | e. First seen and last seen timestamps | DESIRABLE |
| | f. Whois information associated with the URL | DESIRABLE |
| | g. Whether a suspected drive-by download or not | OPTIONAL |
| | h. Whether a phishing URL or not | DESIRABLE |
| [R6-9] | All the information will be available in near real time | DESIRABLE |
| [R6-10] | Search capabilities will be available regarding data stored by the honeyclients, such as memory dumps, interaction traces, ... | DESIRABLE |

### Table R7. Darknets

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R7-1] | Information acquired from darknets | DESIRABLE |
| [R7-2] | Source name and location | ESSENTIAL |
| [R7-3] | Source IPs of query, protocol, src port, dst port, timestamp. Destination IPs will be anonymized. This information will be available only "on demand", through a search option, as darknet datasets are very large | DESIRABLE |
| [R7-4] | Aggregated statistics will be available that show at least one of: the number of flows, packets or bytes in set periods for a darknet source | ESSENTIAL |
| [R7-5] | Associated IP information will be present: a. Country location b. ISP provider c. Autonomous system number | OPTIONAL |
| [R7-6] | All the information will be available in near real time | DESIRABLE |
| [R7-7] | Alerting information from a darknet source will be available, such as automated notifications about sharp increases in traffic, for example, or some specific alerts associated with a particular solution | OPTIONAL |

### Table R8. Mobile Devices

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R8-1] | Information acquired from mobile devices | DESIRABLE |
| [R8-2] | Source name and location | DESIRABLE |
| [R8-3] | For Bluetooth/WiFi/WiMAX-based sensors: a. The source address of the logged event (which should be anonymized, as it allows tracking of unique devices), service used, timestamp b. A passive or active blueprint of the attacking system will also be present c. The sensor will supply captured data in a suitable format d. If the service is a file-transfer service, the sensor will supply the transferred file | DESIRABLE |

### Table R9. RFID

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R9-1] | Information acquired from RFID | DESIRABLE |
| [R9-2] | Include data (tag ID, reader ID, tag data, timestamps) from large-scale RFID deployments | DESIRABLE |
| [R9-3] | Sharing of custom-written software modules | DESIRABLE |

### Table R10. Early Warning Systems

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R10-1] | Information acquired from Early Warning Systems | DESIRABLE |
| [R10-2] | Source name and location | DESIRABLE |
| [R10-3] | Early warning system information will be supplied to WOMBAT as soon as an EWS detects suspect activity | DESIRABLE |
| [R10-4] | Alerts produced by the system, along with relevant associated information: | |
| | a. Source IPs involved, protocol, src port, dst port, timestamp. Destination IPs will be anonymized | DESIRABLE |
| | b. Payload information in pcap format, if available | DESIRABLE |
| | c. Any associated information specific to the EWS (such as snort alerts, EWS operator comments, threat signatures, detection models, etc.) | DESIRABLE |
| | d. Clear criteria expressing why this alarm was generated and what it may mean | DESIRABLE |
| [R10-5] | Any IP information supplied will have the following associated with it: | |
| | a. Country location | DESIRABLE |
| | b. ISP provider | DESIRABLE |
| | c. Autonomous system number | DESIRABLE |
| [R10-6] | Warnings prioritisation and confidence | DESIRABLE |

4.1.4 Data Repository

The database of the WOMBAT system is intended to store all the relevant aggregated data as well as the various types of metadata that will come out as a result of the data enrichment and characterization process. Requirements for the data repository have to address the way of storing the diverse data collected, as well as security and privacy considerations concerning data storage and data access. The first issue is addressed by the functional requirements listed in the table below. The second issue requires non-functional requirements, which are specified in Section 5.5 and also in Section 6.1, which describes the API design.

**Table R11. Requirements for Data Repository**

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R11-1] | WOMBAT will be able to share data provided by sensors (both existing and added in the future) | ESSENTIAL |
| [R11-2] | Data gathered from the sensors will be partially stored in a centralized way (central Data Repository). The remaining data will be kept at partners' sites and will be accessible through a specially designed set of interfaces. | ESSENTIAL |
| [R11-3] | A database management system (e.g. MySQL) will be set up to hold all the data and metadata collected | ESSENTIAL |
| [R11-4] | The system will be able to classify and store the heterogeneous data from different sensors, which may vary greatly | ESSENTIAL |
| [R11-5] | The database schema will provide some extensibility to address both the heterogeneity of the data collected by the existing sensors and the fact that new forms of data may appear during the WOMBAT project | DESIRABLE |
| [R11-6] | The outgoing communication channel from the database to client applications or the GUI will have one particular form (e.g. Web Service, HTTP) | DESIRABLE |
4.2 Data Enrichment and Characterization

This section refers to the WP4 work-package of the WOMBAT project, particularly to WP4.1 (Code behavior), WP4.2 (Code structure) and WP4.3 (Code context). The objective of WP4 is to develop techniques to characterize the malicious code collected in WP3, deriving from it metadata that might reveal insights into the origin of the code and the intentions of those that created, released or used it. The two main types of information are: (i) information about the actions of the code and its structure, and (ii) information about the context in which the code sample was collected. The following table specifies a set of requirements for WP4.

**Table R12. Requirements for Data Enrichment and Characterization Process**

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R12-1] | WOMBAT will provide a specification language (meaningful properties) to describe the behavior of machine-executable code (D 4.1) | ESSENTIAL |
| [R12-2] | WOMBAT will characterize the behavior of malicious code that is collected in the database (D 4.2) | ESSENTIAL |
| [R12-3] | WOMBAT will provide characteristics of certain structures of malware code (i.e. PE structure, hashes of sections, entropy of those sections, CFG, etc.) (D 4.3 and D 4.4) | ESSENTIAL |
| [R12-4] | WOMBAT will provide ways to use the properties ([R12-1]), together with contextual information, to identify the miscreants behind malicious activity. The contextual information includes the country of origin of the attacks, timing, targets, etc., and the results of [R12-2] and [R12-3] (D 4.5 and D 4.6) | ESSENTIAL |
| [R12-5] | WOMBAT will provide integration and correlation of the different features used to describe malicious code, also with contextual information (D 4.7) | ESSENTIAL |
| [R12-6] | In the context of [R12-1], WOMBAT will provide new malware models (using e.g. grammars) to describe their behavior through their actions on the system. A model will be independent of the OS and the programming language used to create the malware. | DESIRABLE |
| [R12-7] | WOMBAT will allow static analysis of the malware's code (e.g. CFG analysis) with a controlled complexity (important, since this is a computationally intensive task) according to the available resources. | DESIRABLE |
| [R12-8] | WOMBAT will be able to identify and classify malware | ESSENTIAL |
| [R12-9] | WOMBAT will provide information from AV engines about suspicious or malware binary files (how malware samples are detected by a list of antivirus vendors) | DESIRABLE |
| [R12-10] | WOMBAT will be able to identify certain malware samples related directly to online fraud | DESIRABLE |
| [R12-11] | WOMBAT will be able to provide URLs related to malware or online fraud | DESIRABLE |
| [R12-12] | WOMBAT will provide metadata, including: attack signatures, code behavior, structure of the malicious code | DESIRABLE |
| [R12-13] | WOMBAT will cluster together code exhibiting similar behavior | DESIRABLE |
| [R12-14] | Because of [R12-4], WOMBAT will provide information on whether two code samples are related by origin | DESIRABLE |
| [R12-15] | WOMBAT will provide a phylogeny of code through static analysis of the binaries | DESIRABLE |
| [R12-16] | WOMBAT will create a model to infer behavior from code structure and phylogeny | DESIRABLE |
| [R12-17] | WOMBAT will provide automatic inference of specifications from binaries for the analysis of malware (e.g. by applying techniques of software engineering) | DESIRABLE |
| [R12-18] | WOMBAT will correlate and aggregate data from various sources and of different types (binaries, firewall logs, honeynet data, etc.); e.g. a malware executable that opens or connects to a characteristic port will be correlated with statistics about network traffic to/from this port, countries of source/destination connections, etc. | DESIRABLE |

4.3 Threats Intelligence

This section refers to the work-package WP5 of the WOMBAT project, which aims at understanding the root causes of the observed attacks in order to better predict upcoming threats. This knowledge will form the basis for the development of an early warning system. Requirements for the type of analysis, models and techniques required, as well as the expected processing and analysis results, are defined in the following table.

Table R13. Requirements for Threat Intelligence Process

| NO. | SYSTEM REQUIREMENT DESCRIPTION | PRIORITY |
|-----|--------------------------------|----------|
| [R13-1] | WOMBAT will be able to identify root causes of attacks by extracting the modus operandi of attackers from groups of related metadata using graph-based techniques and other data mining algorithms. | ESSENTIAL |
| [R13-2] | WOMBAT will be able to use and enhance models of normal malicious behavior to identify new emerging types of threats. | ESSENTIAL |
| [R13-3] | WOMBAT will use the clustering of seemingly unrelated data resulting from root cause analysis to detect stealthy malicious activities like multithreaded slow worms. | ESSENTIAL |
| [R13-4] | Assessing the quality of the results of all root cause analysis techniques implemented. | ESSENTIAL |
| [R13-5] | WOMBAT will include an Early Warning System (based on understanding of the root causes of the attacks observed) to predict upcoming threats. The system will issue context-rich alerts, with references to similar activity in the past. | ESSENTIAL |
| [R13-6] | Assessing the quality of the results of WP4 by using the EWS developed | ESSENTIAL |
| [R13-7] | WOMBAT will provide advanced search capabilities; that is, given some information about a piece of malware, it will be able to quickly query for related pieces (to do correlation). | ESSENTIAL |
| [R13-8] | WOMBAT will find patterns of related behavior. | DESIRABLE |
| [R13-9] | WOMBAT will identify shared code fragments between malware, indicative of common authorship. | DESIRABLE |
| [R13-10] | WOMBAT will identify origins of malware (where it was hosted, how victims are lured there). | DESIRABLE |
| [R13-11] | WOMBAT will enable real-time analysis along with a recording mechanism to restore the system after an attack. | DESIRABLE |
| [R13-12] | WOMBAT will search for general characteristics of different malware families, detecting patterns to protect against future mutations | OPTIONAL |
| [R13-13] | WOMBAT will make it possible to cross-reference information regarding the infrastructure used by malware. | DESIRABLE |
| [R13-14] | WOMBAT will use the recurrent use of certain infrastructure to identify malicious resource providers, or to warn non-malicious ones about the abuse of their infrastructure. | DESIRABLE |
| [R13-15] | WOMBAT will develop and use new malware models to evaluate the detection capabilities of tools for detecting malware propagation | DESIRABLE |

4.4 Data Output

This section specifies requirements for the results that are expected to come out of the threat data and information analysis within the WOMBAT project.
4.4 Data Output This section specifies requirements for the results that are expected to come out of the threat data and information analysis within the WOMBAT project. In particular, they will include characteristics of threats. Results of analysis will be generated (amongst other information) from metadata stored in the Data Repository (WP3) as well as from the output of the Data Enrichment and Characterization (WP4) and Threat Intelligence (WP5) processes. WOMBAT is intended to form the basis of a future worldwide early warning system, thus the potential results expected to be generated and distributed are: up-to-date information about new types of Internet security threats as well as ready-for-use attack and malware signature updates. The details of the information to be provided are listed in the following tables. Table R14. Requirements for Threat Characteristics | Requirement | Description | Priority | |-------------|-----------------------------------------------------------------------------|----------| | [R14-1] | Rankings and statistics of current and past malicious activity which will be divided (and correlated) by port number, type of system (HN, DN, etc.), country/ASN, vulnerability exploited, etc. | DESIRABLE | | [R14-2] | Rankings and statistics of current and past malicious activity will (thereafter) also be distinguished by different data displayed on the x-axis (to describe malicious behavior), such as: numbers of input or output flows, numbers of unique source or destination IPs | DESIRABLE | | [R14-3] | Information and statistics about attackers’ OS (this information could be obtained by packet analysis or malware binary characterization) | OPTIONAL | | [R14-4] | Information/metadata and statistics about origins of attacks, methods of attacks, and all other results of characterization | DESIRABLE | | [R14-5] | Information will be provided in a table format or plain text. | OPTIONAL | | [R14-6] | Information about origins of malware | DESIRABLE | | [R14-7] | Shared code fragments | DESIRABLE | | [R14-8] | Models of malicious behavior and activity | DESIRABLE | | [R14-9] | Results of assessment of malware and attacks’ impact | DESIRABLE | | [R14-10] | Results of all static and dynamic analysis | DESIRABLE | | [R14-11] | Results of analysis of malware code presented as a CFG (Control Flow Graph) | OPTIONAL | Table R15. Requirements for Early Warnings of Security Threats | Requirement | Description | Priority | |-------------|-----------------------------------------------------------------------------|----------| | [R15-1] | Early warnings will come from a system of alarms | DESIRABLE | | [R15-2] | Alarms will be differentiated in terms of their kind and priority | ESSENTIAL | | [R15-3] | Alarms will indicate detection of new threats or attacks, anomalies, or an increase of malicious activity | DESIRABLE | | [R15-4] | Alarms will inform about new vulnerabilities | DESIRABLE | Table R16. Requirements for Virus and Attack Signatures Updates | [R16-1] | Meaningful (correlated and contextual) information and metadata | ESSENTIAL | |---------|---------------------------------------------------------------|-----------| | [R16-2] | Descriptions of classes of malware-related behavior | DESIRABLE | | [R16-3] | Signatures of attacks | DESIRABLE | | [R16-4] | Signatures will describe behavior on different levels, like the network level (payloads of flows) or system/host level (memory dumps, system resources access, the system registry in Windows, etc.) | OPTIONAL | | [R16-5] | Signatures will be deliverable in different standards (so as not to limit the software that can use these signatures) | OPTIONAL | | [R16-6] | Universal models of malicious behavior and activity | DESIRABLE | Table R17.
Requirements for Security Practices Updates | [R17-1] | Suggested security practices based on threat characteristics, warnings of security threats as well as (if generated) any clusters and signatures (example: block port X because of Y) | ESSENTIAL | |---------|-------------------------------------------------------------------------------------------------|-----------| | [R17-2] | Security practices will be deliverable via system Security Messages and reports (both periodic and instant), or other types of presentation and dissemination forms (news, articles, blog entries, RSS feeds, notes and comments, newsletter via email) | DESIRABLE | 5 NON-FUNCTIONAL REQUIREMENTS This chapter specifies constraints on the WOMBAT system design and implementation, including: the kinds of operating systems that should be supported, issues concerning system integration, and the set of performance parameters. Since the system is intended to assure high interactivity and on-line availability as well as the protection of sensitive and critical information, requirements such as reliability, backup and recovery, and security and privacy considerations concerning information storage, processing and transfer are also carefully specified. Other non-functional requirements relate to the ease of system usage and specify sizing and scaling needs to meet planned growth. 5.1 System Environment Table R18. Requirements for System Environment | [R18-1] | The core of the system will be based on Unix or a Unix-like operating system; the sources, including sensors, may use any operating system based on requirements for a given task. | DESIRABLE | | [R18-2] | The system's GUI will support standard or de-facto standard web components, and be accessible from all major operating systems and browsers, including at least Microsoft Windows, Linux, MacOS X, Internet Explorer, Firefox and Safari. | DESIRABLE | | [R18-3] | The system will be modular. | DESIRABLE | 5.2 Integration with Other Systems Table R19. Requirements for Integration with Other Systems | [R19-1] | WOMBAT will include format conversion software (to convert data from other systems) | DESIRABLE | | [R19-2] | WOMBAT will use XML as the preferred format for data exchange with external systems unless the system already offers an interface using another format. | OPTIONAL | | [R19-3] | The confidentiality and integrity of communications will be specified whenever applicable. | DESIRABLE | 5.3 System Performance Table R20. Requirements for System Performance | [R20-1] | WOMBAT will support hundreds of simple queries per minute, where simple queries are defined as access to centrally stored data using typical, predefined queries. | DESIRABLE | | [R20-2] | Response time for typical queries using only the central database will not exceed 10 seconds; response time for simple custom queries can be longer, but will not exceed 30 seconds. | DESIRABLE | | [R20-3] | WOMBAT will support hundreds of simultaneous users. | DESIRABLE | 5.4 Reliability and Availability Table R21. Requirements for Reliability and Availability | [R21-1] | The system will store raw data for at least a week and aggregated data for at least a month; the times may vary depending on the importance of data. Shorter data retention is possible as an exception only if necessary. Data retention will be adjusted according to legal and social requirements. | DESIRABLE | | [R21-2] | The system will be robust – failures of individual sensors or even aggregated data sources must not cause a system failure.
| ESSENTIAL | | [R21-3] | The system's MTBF will be at least one month for failures repairable under one hour and three months for more serious failures. | DESIRABLE | | [R21-4] | Allowable down time of the system will not exceed one full day per month. | DESIRABLE | | [R21-5] | Routine activities related to system administration such as backup, user management and sources management will be performed without down time. | DESIRABLE | | [R21-6] | In case of maintenance: (i) acceptance of new queries will be stopped, (ii) the running queries will be completed, (iii) the down time will be announced on the web page. | ESSENTIAL | 5.5 Security and Privacy Table R22. Requirements for Security and Privacy | [R22-1] | The system will prevent access to personal data (including IP numbers) of targets and sources of attacks as well as of the information sources by regular, non-privileged users. | ESSENTIAL | | [R22-2] | The system will protect malware samples from being accessed by non-privileged users. | ESSENTIAL | | [R22-3] | User access to the database will be restricted to the API, such that user rights cannot be bypassed. | ESSENTIAL | | [R22-4] | The API will restrict access to data depending on user rights. | ESSENTIAL | | [R22-5] | Access levels for users will be managed by the WOMBAT consortium. | ESSENTIAL | | [R22-6] | Access to data classified as publicly available (high-level statistics, etc.) will not require a decision by the consortium ("guest users"). | DESIRABLE | 5.6 Usability Table R23. Requirements for Usability | [R23-1] | The system will be highly interactive. | DESIRABLE | | [R23-2] | The system will be convenient to use. | DESIRABLE | | [R23-3] | Usability tests with potential users will be performed. | OPTIONAL | | [R23-4] | The presentation of results will be clear. | DESIRABLE | | [R23-5] | Two separate views will be available in the GUI – basic, for regular users seeking general information about threats and threat statistics, and expert, for advanced users performing tasks such as malware analysis. | DESIRABLE | 5.7 Scalability Table R24. Requirements for Scalability | [R24-1] | The system will be able to support data collection from at least 50 aggregated sources in the future, potentially reaching tens of thousands of individual sensors. | DESIRABLE | | [R24-2] | The system will be able to store and process terabytes of raw data (tens of thousands of malware samples, millions of flows daily) | DESIRABLE | | [R24-3] | Aggregation of data will be performed to avoid recomputing typical statistics on demand. | DESIRABLE | | [R24-4] | The system will also be able to access raw data from individual sources without storing it locally | DESIRABLE | 6 USER INTERFACE WOMBAT is expected to collect a wealth of various data and information from heterogeneous systems (infrastructures and networks) and to generate high quality results in the subsequent steps. This requires complete, friendly, helpful and effective user-oriented output. Such a user interface must provide and aggregate all collected and generated KID (knowledge, information and data), statistics, results of the different kinds of analysis performed, and other results of Threat Intelligence. However, privacy aspects must be considered with care. The user interface must support critical KID and privacy protection by using different dissemination levels. 6.1 API Design The API will be the lowest-level interface; it supports requests from outside the WOMBAT system for all KID kept in the database.
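To make the role of this interface concrete, the sketch below shows how a client might query such an API over HTTP and parse an XML response, in line with the preference for XML data exchange stated in [R19-2]. The endpoint URL, the query parameter, and the XML element names are assumptions made for illustration only; they are not the WOMBAT API specification.

```python
# Illustrative API-client sketch only. The endpoint URL, the "hash"
# parameter and the <records>/<record> response layout are hypothetical
# assumptions, not the WOMBAT API specification.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://wombat.example.org/api/query"  # hypothetical endpoint


def query_malware_metadata(sample_hash: str) -> list:
    """Fetch metadata records for a sample and return them as dicts."""
    url = BASE_URL + "?" + urllib.parse.urlencode({"hash": sample_hash})
    with urllib.request.urlopen(url, timeout=10) as resp:
        tree = ET.parse(resp)
    # Assumed response shape:
    #   <records><record><country>..</country><first_seen>..</first_seen></record></records>
    return [{field.tag: field.text for field in record}
            for record in tree.getroot().findall("record")]


if __name__ == "__main__":
    for rec in query_malware_metadata("d41d8cd98f00b204e9800998ecf8427e"):
        print(rec)
```

Consistent with [R25-2] and [R22-4], a real client would additionally authenticate itself so that the API can enforce per-user access rights before returning any data.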
Table R25. Requirements for API Design | [R25-1] | API will be an intermediate system between the data repository layer and output/clients | ESSENTIAL | |---------|-------------------------------------------------------------------------------------|-----------| | [R25-2] | Access to the data will be governed by legal agreements signed between the consortium and the entity that requests access to the data. Depending on the terms of the agreement, different entities will get different types of access. | ESSENTIAL | | [R25-3] | Support secure connections between systems and secure data transfer | DESIRABLE | | [R25-4] | Provide API-client software that will communicate with the API from the client side | DESIRABLE | | [R25-5] | API will use a Web Services system (WSDL, SOAP, etc.) or a compatible/similar system for future systems | OPTIONAL | 6.2 Data Displaying and Graphical Visualisation The Graphical User Interface (GUI) will provide project results in a user-friendly graphical layout. Table R26. Requirements for GUI | [R26-1] | GUI will provide users with visual representations such as: top IP addresses, top ports, attacks in the last hour, etc. | ESSENTIAL | |---------|---------------------------------------------------------------------------------------------------------------|-----------| | [R26-2] | GUI will enable correlation through common attributes of different types of datasets and results of analysis. This may be as simple as checking for IPs across different input systems, or more complex such as looking for similarities across different code. | DESIRABLE | | [R26-3] | GUI will be based on the End-User-API (may communicate with system resources via the End-User-API) | OPTIONAL | | [R26-4] | GUI will be provided via web-page (the web-page may communicate with the Web Services used by the End-User-API) | OPTIONAL | | [R26-5] | GUI will support dissemination levels and provide different user accounts | DESIRABLE | | [R26-6] | GUI will sanitize information (IPs, host names, etc.). The level of sanitization must depend on the type of user account | DESIRABLE | | [R26-7] | GUI will allow for system administration (only for superusers), and will provide a usable interface to display project results (for users and superusers) | DESIRABLE | | [R26-8] | GUI will be platform and application independent | DESIRABLE | | [R26-9] | GUI will provide different types of KID visualisation: a. User-interactive (zooming, selection, etc.) rankings of network activity, malware activity, etc., divided/grouped by different kinds of data (honeynets, darknets, viruses, etc.) b. Tables with statistics of different kinds of data (flow stats, cluster stats, darknet stats, malware stats, etc.) c. Binary and/or malware visualization solutions d. Visualizations of system status (status of sensors, subsystems, hardware, etc.) e. Parallel coordinate plots for different types of data f. Graphs to describe malware and network activity behavior g. An alerts map to visualize system alarms and system status, if any h. Auto-generated periodic reports | DESIRABLE | | [R26-10] | GUI will provide different layouts: the most advanced – for computers (PCs), and limited – for mobile devices (smart phones, palmtops, etc.)
| DESIRABLE | | [R26-11] | GUI will provide community tools to help aggregate and share information, thoughts and ideas between members of the consortium (power users), such as forums, blogs and wikis | DESIRABLE | | [R26-12] | Community tools will also be used to provide information to public users via the official website | OPTIONAL | 7 TESTING AND EVALUATION The following table of testing requirements defines what must (ESSENTIAL), should (DESIRABLE), or may (OPTIONAL) be checked during the WOMBAT system’s testing procedures. System tests will cover both functional and non-functional requirements. Functional requirements are defined as a set of software deliverables to be proposed and implemented in work-packages WP3, WP4 and WP5. Non-functional requirements define the quality of the developed solution by evaluation of the features defined in subsections 5.3-5.7. Table R27. Requirements for the WOMBAT System Testing | | Success indicators of the system functional requirements design and implementation: | |---|----------------------------------------------------------------------------------| | [R27-1] | a. Observation of a collective growth of over 20%* of malware samples during the first year of operation | | b. Observation of a collective growth of over 100%* of malware samples during 3 years of operation | | [R27-2] | a. Qualification of each malware sample by at least 5 items of contextual information at the end of the 2nd year of the project | | b. Qualification of each malware sample by at least 8 items of contextual information at the end of the project | | [R27-3] | a. Specification of one threat intelligence model at the end of the 2nd year of the project | | b. Implementation of the threat intelligence model at the end of the project | | [R27-4] | Coverage will be assessed using a defined methodology according to well-defined criteria. The methodology will consider different contexts (e.g. both known and unknown threats) and should remain independent of any editors/organizations. | | | Efficiency and scalability: | | [R27-5] | Efficiency and scalability will be analyzed in the context of different data source and data source type numbers. | | [R27-6] | Efficiency will be analyzed considering the size of data stored in the repositories proposed in the system architecture. | | [R27-7] | Scalability will be analyzed considering the number of concurrent users of the system. | | [R27-8] | Requirements for network bandwidth allocation will be tested and estimated. | | | Reliability: | | [R27-9] | Fault tolerance to failures of particular system nodes will be analyzed. | | [R27-10] | System immunity to potential side-effects of unknown malware analysis will be tested. | | | Security and privacy: | | [R27-11] | It will be checked whether it is enforced that only legitimate users may have access to the data and algorithms made available to them under the approved security policy. | DESIRABLE | | [R27-12] | It will be checked that there is no leak of private data to the central repository in violation of the privacy policies of the individual data repositories and their owners. | DESIRABLE | | **Other:** | | | | [R27-13] | The system will have a document with the proposed plan of tests. | DESIRABLE | | [R27-14] | All results from the test procedures will be gathered in written test reports. | DESIRABLE | | [R27-15] | During the project, a procedure will be established for notification, discussion, and feedback on error reports. | DESIRABLE | | [R27-16] | System implementation will involve some software project management and bug-tracking software.
| OPTIONAL | | [R27-17] | All the modules will be tested separately before the final tests of the whole system. | DESIRABLE | | [R27-18] | Each module will implement unit tests to enable semi-automatic testing of the influence of per-module changes on the proper functioning of other modules. | DESIRABLE | * percentage of additional malware collected relative to the volume of collected malware in individual project participants’ databases 8 CONFIGURATION MANAGEMENT Configuration management defines the management of the software during the development and testing processes of the WOMBAT solution, and the management of the configuration of the completed software during its normal future operation. Requirements for the configuration management of WOMBAT can be considered either as requirements for a software project or as requirements for a deployed solution. The corresponding requirements are presented in the following table. Table R28. Requirements for WOMBAT Configuration Management | | For a software project: | | |---|----------------------------------------------------------------------------------------|---| | [R28-1] | Modules of the system developed separately by project participants will have separate private repositories | DESIRABLE | | [R28-2] | Modules of the system developed separately by project participants will use some versioning system | OPTIONAL | | [R28-3] | Modules of the system developed by participants as a common effort will have an established shared repository with some software version control system | ESSENTIAL | | [R28-4] | Apart from the production deployment, WOMBAT will also be installed on dedicated hardware provided by participants and configured to create the testing environment | ESSENTIAL | | | For a deployed solution: | | |---|----------------------------------------------------------------------------------------|---| | [R28-5] | The system will have a single configuration stored in one place which describes: - legitimate users with authentication information, - run-time parameters, - the list of modules which constitute the current configuration | DESIRABLE | | [R28-6] | The system will have a flexible updating procedure for future versions of the system modules | OPTIONAL | APPENDIX A The following table (Table R-U) is a reference, providing an overview of priorities and reasons for inclusion for all the system requirements. The second column validates the need for a given requirement and contains a list of reasons for inclusion. The reasons can be: - **FORMAL** – the requirement is a direct consequence of the Description of Work document and is in fact one of the project’s goals – possibly minor, but formally specified; - **INTERNAL** – the requirement follows from best practices and is necessary for the project to reach completion, also used for basic requirements whose absence would make most of the user requirements impossible to achieve; - **[user requirement number]** – the requirement is necessary to fulfill a user requirement (in many cases only the most important links are listed), can also be used with INTERNAL to signify that an especially strong relationship exists between this user requirement and the system requirements. The third column shows the priority of the requirement (this information is also present in the main document and is repeated here for reference only). Table R-U. The WOMBAT System and User Requirements Associations | NO. OF SYSTEM REQUIREMENT | REASON FOR INCLUSION | REQUIREMENT PRIORITY | |---------------------------|----------------------|----------------------| | **4.
FUNCTIONAL AND DATA REQUIREMENTS** | | | | **4.1 Data Collection and Distribution** | | | | [R1-1] | INTERNAL, [U1-4], [U2-3], [U5-2] | DESIRABLE | | [R1-2] | INTERNAL, [U1-4], [U2-3], [U5-2] | DESIRABLE | | [R1-3] | INTERNAL | DESIRABLE | | [R1-4] | INTERNAL | DESIRABLE | | [R1-5] | INTERNAL | DESIRABLE | | [R1-6] | INTERNAL, [U1-4], [U2-3], [U5-2] | DESIRABLE [a,b,c,d] | | [R1-7] | INTERNAL, [U1-4], [U2-3], [U5-2] | OPTIONAL | | [R1-8] | INTERNAL, [U1-2], [U1-5], [U4-7], [U5-2], [U6-1] | DESIRABLE | | [R1-9] | [U1-4], [U2-3] | OPTIONAL | | [R1-10] | INTERNAL | DESIRABLE | | [R1-11] | [a-e] FORMAL, [U1-2], [U1-4] [f] FORMAL, INTERNAL, [U1-2], [U1-4] [g] FORMAL, INTERNAL | ESSENTIAL [a-g] | | [R2-1] | FORMAL, [U1-1], [U1-2], [U1-5], [U4-1], [U5-5], [U6-1] | ESSENTIAL | | [R2-2] | INTERNAL | ESSENTIAL | | [R2-3] | [U1-2], [U1-5], [U4-1], [U6-1] | ESSENTIAL | | [R2-4] | [U1-2], [U1-5], [U4-1], [U6-1] | DESIRABLE | | [R2-5] | [U5-2] | ESSENTIAL | | [R2-6] | [U1-2], [U1-5], [U4-1], [U6-1] | DESIRABLE | | [R2-7] | [U1-2], [U1-5], [U4-1], [U6-1] | DESIRABLE | | [R2-8] | [U1-2], [U1-5], [U4-1], [U5-2], [U6-1] | DESIRABLE | | [R2-9] | [U1-2], [U1-5], [U4-1], [U6-1] | DESIRABLE | | [R2-10] | [U1-2], [U1-5], [U4-1], [U6-1] | OPTIONAL | | [R2-11] | [U1-2], [U1-5], [U4-1], [U6-1] | OPTIONAL | | [R2-12] | [U1-3], [U2-5], [U3-1], [U3-3], [U3-5], [U4-7], [U5-6], [U5-7], [U7-2] | OPTIONAL | | [R3-1] | FORMAL, [U2-2], [U2-4], [U3-1], [U3-3], [U5-5], [U6-3], [U7-1] | ESSENTIAL | | [R3-2] | [U5-2], [U6-1], [U3-1] | ESSENTIAL | | [R3-3] | [U2-4], [U3-1], [U6-3] | ESSENTIAL | | [R3-4] | [U2-4], [U3-1], [U6-3], [U5-6] | DESIRABLE [a,b,c] | | [R3-5] | [U3-5], [U4-7], [U5-7] | DESIRABLE | | [R3-6] | [U3-1], [U3-3], [U6-3] | DESIRABLE | | [R4-1] | FORMAL, [U2-2], [U3-1], [U3-5], [U5-5], [U6-3], [U7-1] | DESIRABLE | | [R4-2] | [U3-1], [U5-2], [U6-1] | DESIRABLE | | [R4-3] | [U1-2], [U1-3], [U1-4], [U1-5], [U2-1], [U2-2], [U3-1], [U3-6], [U4-1], [U4-3], [U5-3], [U5-5], [U5-7], [U6-3], [U7-2] | DESIRABLE [a,b] | | [R4-4] | [U3-5], [U4-7], [U5-7] | DESIRABLE | | [R4-5] | [U1-2], [U1-7], [U2-4], [U3-1], [U4-7], [U5-1], [U5-2], [U5-6], [U5-7], [U6-3] | ESSENTIAL [a,d,f] DESIRABLE [b,c,e,g,h,j] OPTIONAL [i] | | [R4-6] | [U1-2], [U1-7], [U2-4], [U3-1], [U4-7], [U5-1], [U5-2], [U5-6], [U5-7], [U6-3] | ESSENTIAL [a,c,d,f] DESIRABLE [b,e,g,h,j] OPTIONAL [i] | | [R5-1] | FORMAL, [U2-2], [U2-4], [U3-1], [U3-3], [U5-5], [U6-3], [U7-1] | ESSENTIAL | | [R5-2] | [U3-1], [U5-2], [U6-1] | ESSENTIAL | | [R5-3] | [U3-1], [U6-3] | ESSENTIAL | | [R5-4] | [U3-5], [U5-7] | DESIRABLE | | [R5-5] | INTERNAL | DESIRABLE | | [R5-6] | [U2-4], [U3-1], [U6-3], [U5-6] | DESIRABLE [a,b,c] | | [R5-7] | [U1-2], [U1-3], [U1-4], [U1-5], [U2-1], [U2-2], [U3-1], [U3-6], [U4-1], [U4-3], [U5-3], [U5-5], [U5-7], [U6-3], [U7-2] | DESIRABLE [a,b,c] | | [R5-8] | [U1-4], [U3-5], [U4-7], [U5-7] | DESIRABLE | | [R5-9] | INTERNAL | DESIRABLE | | [R5-10] | [U3-5], [U4-7], [U5-7] | DESIRABLE | | [R5-11] | [U3-1], [U3-3], [U6-3] | DESIRABLE | | [R5-12] | [U1-3], [U2-2], [U3-3], [U4-7], [U5-7], [U6-1] | DESIRABLE | | [R6-1] | FORMAL, [U2-2], [U2-4], [U3-1], [U3-3], [U4-2], [U4-5], [U5-4], [U5-5], [U5-6], [U6-2], [U6-3], [U7-1] | ESSENTIAL | | [R6-2] | [U3-1], [U5-2], [U6-1] | ESSENTIAL | | [R6-3] | [U3-1], [U4-2], [U4-5], [U5-4], [U6-2] | ESSENTIAL | | [R6-4] | [U5-2] | ESSENTIAL | | [R6-5] | [U7-2] | DESIRABLE | | [R6-6] | [U1-4], [U3-5], [U4-7], [U5-6], [U5-7], [U7-2] | ESSENTIAL | | [R6-7] | [U3-1], [U6-3] | DESIRABLE | | [R6-8] |
[U1-2], [U1-5], [U3-1], [U4-3], [U5-3], [U6-1] | DESIRABLE [a,b,c,d,e,f,h] OPTIONAL [g] | | [R6-9] | [U3-5], [U4-7], [U5-7] | DESIRABLE | | [R6-10] | [U1-5], [U3-1], [U6-3] | DESIRABLE | | [R7-1] | FORMAL, [U2-2], [U2-4], [U3-1], [U3-3], [U5-5], [U6-3], [U7-1] | DESIRABLE | | [R7-2] | [U3-1], [U5-2], [U6-1] | ESSENTIAL | | [R7-3] | [U3-1], [U6-3] | DESIRABLE | | [R7-4] | [U2-4], [U3-1], [U6-3] | ESSENTIAL | | [R7-5] | [U2-4], [U3-1], [U6-3], [U5-6], [U7-1] | OPTIONAL [a,b,c] | | [R7-6] | [U3-5], [U4-7], [U5-7] | DESIRABLE | | [R7-7] | [U3-5], [U4-7], [U5-6], [U5-7], [U7-1] | OPTIONAL | | [R8-1] | FORMAL, [U5-6], [U4-7], [U5-7] | DESIRABLE | | [R8-2] | [U5-2] | DESIRABLE | | [R8-3] | [U1-4], [U3-5], [U4-7], [U5-7], [U6-1] | DESIRABLE [a,b,c,d] | | [R9-1] | FORMAL, [U5-6], [U4-7], [U5-7] | DESIRABLE | | [R9-2] | FORMAL, [U5-6], [U4-7], [U5-7] | DESIRABLE | | [R9-3] | [U2-1], [U6-1] | DESIRABLE | | [R10-1] | FORMAL, [U2-2], [U2-4], [U3-1], [U3-3], [U4-2], [U4-5], [U5-4], [U5-5], [U5-6], [U6-2], [U6-3], [U7-1] | DESIRABLE | | [R10-2] | [U3-1], [U5-2], [U6-1] | DESIRABLE | | [R10-3] | [U3-5], [U4-7], [U5-7] | DESIRABLE | | [R10-4] | [U2-4], [U3-1], [U3-3], [U3-5], [U4-1], [U4-7], [U5-1], [U5-7], [U7-2] | DESIRABLE [a,b,c,d] | | [R10-5] | [U2-5], [U3-1], [U6-3], [U5-6] | DESIRABLE [a,b,c] | | [R10-6] | INTERNAL, [U5-2] | DESIRABLE | | [R11-1] | FORMAL, INTERNAL | ESSENTIAL | | [R11-2] | INTERNAL | ESSENTIAL | | [R11-3] | INTERNAL | ESSENTIAL | | [R11-4] | FORMAL, INTERNAL | ESSENTIAL | | [R11-5] | INTERNAL | DESIRABLE | | [R11-6] | INTERNAL | DESIRABLE | | **4.2 Data Enrichment and Characterization** | | | | [R12-1] | FORMAL | ESSENTIAL | | [R12-2] | FORMAL | ESSENTIAL | | [R12-3] | FORMAL | ESSENTIAL | | [R12-4] | FORMAL | ESSENTIAL | | [R12-5] | FORMAL | ESSENTIAL | | [R12-6] | FORMAL, [U1-2], [U2-2], [U2-7], [U3-2] | DESIRABLE | | [R12-7] | [U1-2], [U1-5] | DESIRABLE | | [R12-8] | INTERNAL, [U1-1], [U1-2], [U1-3], [U1-5], [U6-1] | ESSENTIAL | | [R12-9] | [U1-2], [U1-5], [U2-1], [U4-7], [U5-6], [U5-7], [U6-1] | DESIRABLE | | [R12-10] | [U4-2], [U4-3], [U5-4], [U5-6] | DESIRABLE | | [R12-11] | [U1-5], [U4-2], [U4-3], [U5-4], [U5-6] | DESIRABLE | | [R12-12] | [U1-2], [U1-3], [U1-5], [U4-4], [U6-1] | DESIRABLE | | [R12-13] | FORMAL, [U1-6], [U3-4], [U3-6], [U4-3], [U5-6], [U6-1] | DESIRABLE | | [R12-14] | [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | DESIRABLE | | [R12-15] | FORMAL, [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | DESIRABLE | | [R12-16] | FORMAL, [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | DESIRABLE | | [R12-17] | FORMAL, [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | DESIRABLE | | [R12-18] | FORMAL, [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | DESIRABLE | | **4.3 Threat Intelligence** | | | | [R13-1] | FORMAL | ESSENTIAL | | [R13-2] | FORMAL | ESSENTIAL | | [R13-3] | FORMAL | ESSENTIAL | | [R13-4] | FORMAL | ESSENTIAL | | [R13-5] | FORMAL | ESSENTIAL | | [R13-6] | FORMAL | ESSENTIAL | | [R13-7] | [U1-5], [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | ESSENTIAL | | [R13-8] | [U1-5], [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | DESIRABLE | | [R13-9] | [U1-5], [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | DESIRABLE | | [R13-10] | [U1-5], [U1-6], [U3-1], [U3-4], [U3-6], [U4-1], [U4-3], [U4-5], [U6-1], [U6-3] | DESIRABLE | | [R13-11] | INTERNAL | DESIRABLE | | [R13-12] | [U1-5], [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | OPTIONAL | | [R13-13] | [U1-5], [U1-6], [U3-4], [U3-6], [U4-3], [U6-1] | DESIRABLE | | [R13-14] | [U1-5], [U1-6], [U3-1], [U3-4], [U3-6], [U4-1], [U4-3],
[U4-5], [U6-1], [U6-3] | DESIRABLE | | [R13-15] | [U1-5], [U1-6], [U3-5], [U3-6], [U4-7], [U5-6], [U5-7], [U6-1] | DESIRABLE | | **4.4 Data Output** | | | | [R14-1] | [U2-1], [U2-4], [U3-3], [U4-7], [U5-7], [U7-1], [U7-2] | DESIRABLE | | [R14-2] | [U2-1], [U2-4], [U3-3], [U4-7], [U5-7], [U7-1], [U7-2] | DESIRABLE | | [R14-3] | [U2-1], [U2-4], [U3-3], [U4-7], [U5-7], [U7-1], [U7-2] | OPTIONAL | | [R14-4] | [U2-1], [U2-4], [U3-3], [U4-7], [U5-7], [U7-1], [U7-2] | DESIRABLE | | [R14-5] | INTERNAL | OPTIONAL | | [R14-6] | [U1-6], [U3-6], [U4-3] | DESIRABLE | | [R14-7] | [U1-6], [U3-6], [U4-3] | DESIRABLE | | [R14-8] | [U1-6], [U3-6], [U4-3] | DESIRABLE | | [R14-9] | [U1-6], [U3-6], [U4-3] | DESIRABLE | | [R14-10] | [U1-6], [U3-6], [U4-3] | DESIRABLE | | [R14-11] | [U1-6], [U3-6], [U4-3] | OPTIONAL | | [R15-1] | INTERNAL | DESIRABLE | | [R15-2] | INTERNAL | ESSENTIAL | | [R15-3] | [U3-5], [U4-7], [U5-7] | DESIRABLE | | [R15-4] | [U3-5], [U4-7], [U5-7] | DESIRABLE | | [R16-1] | [U1-2], [U1-3], [U2-1], [U2-2], [U3-2], [U4-4], [U7-3], [U7-4], [U7-5] | ESSENTIAL | | [R16-2] | [U1-2], [U1-3], [U2-1], [U2-2], [U3-2], [U4-4], [U7-3], [U7-4], [U7-5] | DESIRABLE | | [R16-3] | [U1-2], [U1-3], [U2-1], [U2-2], [U3-2], [U4-4], [U7-3], [U7-4], [U7-5] | DESIRABLE | | [R16-4] | [U1-2], [U1-3], [U2-1], [U2-2], [U3-2], [U4-4], [U7-3], [U7-4], [U7-5] | OPTIONAL | | [R16-5] | [U1-2], [U1-3], [U2-1], [U2-2], [U3-2], [U4-4], [U7-3], [U7-4], [U7-5] | OPTIONAL | | [R16-6] | [U1-2], [U1-3], [U2-1], [U2-2], [U3-2], [U4-4], [U7-3], [U7-4], [U7-5] | DESIRABLE | | [R17-1] | [U2-1] | ESSENTIAL | | [R17-2] | [U2-1] | DESIRABLE | | **5. NON-FUNCTIONAL REQUIREMENTS** | | | | [R18-1] | INTERNAL | DESIRABLE | | [R18-2] | INTERNAL | DESIRABLE | | [R18-3] | INTERNAL | DESIRABLE | | [R19-1] | INTERNAL | DESIRABLE | | [R19-2] | INTERNAL | OPTIONAL | | [R19-3] | INTERNAL | DESIRABLE | | [R20-1] | INTERNAL | DESIRABLE | | [R20-2] | INTERNAL | DESIRABLE | | [R20-3] | INTERNAL | DESIRABLE | | [R20-4] | INTERNAL | OPTIONAL | | [R20-5] | INTERNAL | OPTIONAL | | [R21-1] | INTERNAL | DESIRABLE | | [R21-2] | INTERNAL | ESSENTIAL | | [R21-3] | INTERNAL | DESIRABLE | | [R21-4] | INTERNAL | DESIRABLE | | [R21-5] | INTERNAL | DESIRABLE | | [R21-6] | INTERNAL | ESSENTIAL | | [R22-1] | INTERNAL | ESSENTIAL | | [R22-2] | INTERNAL | ESSENTIAL | | [R22-3] | INTERNAL | ESSENTIAL | | [R22-4] | INTERNAL | ESSENTIAL | | [R22-5] | INTERNAL | ESSENTIAL | | [R22-6] | INTERNAL, [U7-1], [U7-2], [U7-3], [U7-4], [U7-5] | DESIRABLE | | [R23-1] | INTERNAL | DESIRABLE | | [R23-2] | INTERNAL | DESIRABLE | | [R23-3] | INTERNAL | OPTIONAL | | [R23-4] | INTERNAL | DESIRABLE | | [R23-5] | INTERNAL | DESIRABLE | | [R24-1] | FORMAL, INTERNAL | DESIRABLE | | [R24-2] | INTERNAL | DESIRABLE | | [R24-3] | INTERNAL | DESIRABLE | | [R24-4] | INTERNAL | DESIRABLE |
| **6. USER INTERFACE** | | | | **6.1 API Design** | | | | [R25-1] | INTERNAL | ESSENTIAL | | [R25-2] | FORMAL, INTERNAL | ESSENTIAL | | [R25-3] | INTERNAL | DESIRABLE | | [R25-4] | INTERNAL | DESIRABLE | | [R25-5] | INTERNAL | OPTIONAL | | **6.2 Data Displaying and Graphical Visualisation** | | | | [R26-1] | INTERNAL | ESSENTIAL | | [R26-2] | INTERNAL | DESIRABLE | | [R26-3] | INTERNAL | OPTIONAL | | [R26-4] | INTERNAL | OPTIONAL | | [R26-5] | INTERNAL | DESIRABLE | | [R26-6] | INTERNAL | DESIRABLE | | [R26-7] | INTERNAL | DESIRABLE | | [R26-8] | INTERNAL | DESIRABLE | | [R26-9] | INTERNAL | DESIRABLE [b,c,d,h] OPTIONAL [a,e,f,g] | | [R26-10] | INTERNAL | DESIRABLE | | [R26-11] | INTERNAL | DESIRABLE | | [R26-12] | INTERNAL | OPTIONAL | | **7. TESTING AND EVALUATION** | | | | [R27-1] | FORMAL | ESSENTIAL [a,b] | | [R27-2] | FORMAL | ESSENTIAL [a,b] | | [R27-3] | FORMAL | ESSENTIAL [a,b] | | [R27-4] | INTERNAL | DESIRABLE | | [R27-5] | INTERNAL | DESIRABLE | | [R27-6] | INTERNAL | DESIRABLE | | [R27-7] | INTERNAL | DESIRABLE | | [R27-8] | INTERNAL | DESIRABLE | | [R27-9] | INTERNAL | DESIRABLE | | [R27-10] | INTERNAL | DESIRABLE | | [R27-11] | INTERNAL | DESIRABLE | | [R27-12] | INTERNAL | DESIRABLE | | [R27-13] | INTERNAL | DESIRABLE | | [R27-14] | INTERNAL | DESIRABLE | | [R27-15] | INTERNAL | DESIRABLE | | [R27-16] | INTERNAL | OPTIONAL | | [R27-17] | INTERNAL | DESIRABLE | | [R27-18] | INTERNAL | DESIRABLE | | **8. CONFIGURATION MANAGEMENT** | | | | [R28-1] | INTERNAL | DESIRABLE | | [R28-2] | INTERNAL | OPTIONAL | | [R28-3] | INTERNAL | ESSENTIAL | | [R28-4] | INTERNAL | ESSENTIAL | | [R28-5] | INTERNAL | DESIRABLE | | [R28-6] | INTERNAL | OPTIONAL | REFERENCES [1] Anubis (Analyzing Unknown Binaries), http://analysis.seclab.tuwien.ac.at/features.php [2] ARAKIS, http://arakis.cert.pl/en/index.html [3] Argos, http://www.few.vu.nl/argos/ [4] DeepSight Early Warning Services, Symantec, https://tms.symantec.com/Default.aspx [5] DoW, WOMBAT – Description of Work [6] Honey@Home, http://www.honeyathome.org/ [7] G. Jacob, E. Filiol and H. Debar: *Functional Polymorphic Engines: Formalisation, Implementation and Use Cases*, in Proceedings of the EICAR Conference (forthcoming in the Journal of Computer Virology), 2008. [8] Leurrecom.org Honeypot Project, Institut Eurécom, http://www.leurrecom.org/ [9] Virustotal, http://www.virustotal.com/
AgGPS® Autopilot Base Station Getting Started Guide Version 1.00 Revision A Part Number 50761-50-ENG March 2004 Contact Information Trimble Navigation Limited Agriculture Business Area 9290 Bond Street, Suite 102 Overland Park, KS 66214 U.S.A. +1-913-495-2700 Phone firstname.lastname@example.org www.trimble.com Copyright and Trademarks © 2003, Trimble Navigation Limited. All rights reserved. Trimble, the Globe & Triangle logo, the Sextant logo with Trimble, AgGPS, MS750 and SiteNet are trademarks of Trimble Navigation Limited, registered in the United States Patent and Trademark Office and other countries. Zephyr Geodetic is a trademark of Trimble Navigation Limited. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. All other trademarks are the property of their respective owners. Release Notice This is the March 2004 release (Revision A) of the AgGPS Autopilot Base Station Getting Started Guide, part number 50761-50-ENG. It applies to version 1.00 of the AgGPS Autopilot Base Station receiver. The following limited warranties give you specific legal rights. You may have others, which vary from state/jurisdiction to state/jurisdiction. Product Limited Warranty Trimble warrants that this Trimble product and its internal components (the “Product”) shall be free from defects in materials and workmanship and will substantially conform to Trimble’s applicable published specifications for the Product for a period of one (1) year, starting from the earlier of (i) the date of installation, or (ii) six (6) months from the date of product shipment from Trimble. This warranty applies only to the Product if installed by Trimble or a distributor authorized by Trimble to perform Product installation services. Software Components and Enhancements All Product software components (sometimes hereinafter also referred to as “Software”) are licensed and not sold. Any Software not covered by a separate End User License Agreement (“EULA”) shall be governed by the terms, conditions, restrictions and limited warranty terms of such EULA notwithstanding the preceding paragraph. During the limited warranty period you will be entitled to receive, at no additional charge, such Fix Updates and Minor Updates to the Product software as Trimble may develop for general release, subject to the procedures for delivery to purchasers of Trimble products generally. If you have purchased the Product from an authorized Trimble distributor rather than from Trimble directly, Trimble may, at its option, forward the software Fix Update or Minor Update to the Trimble distributor for final distribution to you. Major Upgrades and substantially new software releases, as identified by Trimble, are expressly excluded from this enhancement process and limited warranty. Receipt of software updates shall not serve to extend the limited warranty period. For purposes of this warranty the following definitions shall apply: (1) “Fix Update” means an error correction or other update created to fix a previous software version that does not substantially conform to its published specifications; (2) “Minor Update” occurs when enhancements are made to current features in a software program; and (3) “Major Upgrade” occurs when significant new features are added to software, or when a new product containing new features replaces the further development of a current product line.
Trimble reserves the right to determine, in its sole discretion, what constitutes a significant new feature and Major Upgrade. Warranty Remedies Trimble’s sole liability and your exclusive remedy under the warranties set forth above shall be, at Trimble’s option, to repair or replace any Product that fails to conform to such warranty (“Nonconforming Product”), and/or issue a cash refund up to the purchase price paid by you for any such Nonconforming Product, excluding costs of installation, upon your return of the Nonconforming Product to Trimble in accordance with Trimble’s standard return material authorization process. Such remedy may include reimbursement of the cost of repairs for damage to third-party equipment onto which the Product is installed, if such damage is found to be directly caused by the Product as reasonably determined by Trimble following a root cause analysis. Warranty Exclusions and Disclaimer These warranties shall be applied only in the event and to the extent that (i) the Products and Software are properly and correctly installed, configured, interfaced, maintained, stored, and operated in accordance with Trimble’s relevant operator’s manual and specifications, and (ii) the Products and Software are not modified or misused. The preceding warranties shall not apply to, and Trimble shall not be responsible for, defects or performance problems resulting from (i) the combination or utilization of the Product or Software with hardware or software products, information, data, systems, interfaces or devices not made, supplied or specified by Trimble; (ii) the operation of the Product or Software under any specification other than, or in addition to, Trimble’s standard specifications for its products; (iii) the unauthorized installation, modification, or use of the Product or Software; (iv) damage caused by accident, lightning or other electrical discharge, fresh or salt water immersion or spray; or (v) normal wear and tear on consumable parts (e.g., batteries). Trimble does not warrant or guarantee the results obtained through the use of the Product. THE WARRANTIES ABOVE STATE TRIMBLE’S ENTIRE LIABILITY, AND YOUR EXCLUSIVE REMEDIES, RELATING TO PERFORMANCE OF THE PRODUCTS AND SOFTWARE. EXCEPT AS OTHERWISE EXPRESSLY PROVIDED HEREIN, THE PRODUCTS, SOFTWARE, AND ACCOMPANYING DOCUMENTATION AND MATERIALS ARE PROVIDED “AS-IS” AND WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND BY EITHER TRIMBLE NAVIGATION LIMITED OR ANYONE WHO HAS BEEN INVOLVED IN ITS CREATION, PRODUCTION, INSTALLATION, OR DISTRIBUTION INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, TITLE, AND NONINFRINGEMENT. THE STATED EXPRESS WARRANTIES ARE IN LIEU OF ALL OBLIGATIONS OR LIABILITIES ON THE PART OF TRIMBLE ARISING OUT OF, OR IN CONNECTION WITH, ANY PRODUCTS OR SOFTWARE. SOME STATES AND JURISDICTIONS DO NOT ALLOW LIMITATIONS ON DURATION OR THE EXCLUSION OF AN IMPLIED WARRANTY, SO THE ABOVE LIMITATION MAY NOT APPLY TO YOU. TRIMBLE NAVIGATION LIMITED IS NOT RESPONSIBLE FOR THE OPERATION OR FAILURE OF OPERATION OF GPS SATELLITES OR THE AVAILABILITY OF GPS SATELLITE SIGNALS. **Limitation of Liability** TRIMBLE’S ENTIRE LIABILITY UNDER ANY PROVISION HEREIN SHALL BE LIMITED TO THE AMOUNT PAID BY YOU FOR THE PRODUCT OR SOFTWARE LICENSE.
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL TRIMBLE OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES WHATSOEVER UNDER ANY CIRCUMSTANCE OR LEGAL THEORY RELATING IN ANY WAY TO THE PRODUCTS, SOFTWARE AND ACCOMPANYING DOCUMENTATION AND MATERIALS, (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS, BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, OR ANY OTHER PECUNIARY LOSS), REGARDLESS WHETHER TRIMBLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSS AND REGARDLESS OF THE COURSE OF DEALING WHICH DEVELOPS OR HAS DEVELOPED BETWEEN YOU AND TRIMBLE. BECAUSE SOME STATES AND JURISDICTIONS DO NOT ALLOW THE EXCLUSION OR LIMITATION OF LIABILITY FOR CONSEQUENTIAL OR INCIDENTAL DAMAGES, THE ABOVE LIMITATION MAY NOT APPLY TO YOU. NOTE: THE ABOVE LIMITED WARRANTY PROVISIONS MAY NOT APPLY TO PRODUCTS OR SOFTWARE PURCHASED IN THE EUROPEAN UNION. PLEASE CONTACT YOUR TRIMBLE DEALER FOR APPLICABLE WARRANTY INFORMATION. **Notices** Class B Statement – Notice to Users. This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communication. However, there is no guarantee that interference will not occur in a particular installation. If this equipment does cause harmful interference to radio or television reception, which can be determined by turning the equipment off and on, the user is encouraged to try to correct the interference by one or more of the following measures: – Reorient or relocate the receiving antenna. – Increase the separation between the equipment and the receiver. – Connect the equipment into an outlet on a circuit different from that to which the receiver is connected. – Consult the dealer or an experienced radio/TV technician for help. Changes and modifications not expressly approved by the manufacturer or registrant of this equipment can void your authority to operate this equipment under Federal Communications Commission rules. ## Contents 1 Introduction ........................................ 1 Welcome ........................................... 2 About the Product .................................. 2 Related Information ................................. 2 Technical Assistance ............................... 3 Your Comments ..................................... 3 2 Configuring the Base Station Case for Use ............ 5 Introduction ........................................ 6 Setting up the Base Station Case ..................... 6 3 Autopilot Base Station Display Mode .................. 11 Introduction ........................................ 12 Adjusting the contrast ............................ 12 Configuring the MS750 Receiver ...................... 13 4 Setting up the Mobile Autopilot Base Station .......... 15 Introduction ........................................ 16 Setting up the Base Station on a Reference Mark .... 16 5 Configuring and Starting an Autopilot Base Station .... 21 Introduction ........................................ 22 Configuring the Base Station ........................ 22 Entering the base station antenna type and height .. 23 Setting the location of the base station ........... 
25 Entering the name and activating the ID of the base station ... 29 Turning on the Use For Autobase function ........... 31 6 Working with Application Files ........................................ 33 Introduction .................................................................. 34 Activating an Application File ..................................... 34 Saving an Application File ......................................... 35 Deleting an Application File ....................................... 36 7 Configuring the SiteNet 900 Radio .................................... 39 Introduction .................................................................. 40 Configuring the SiteNet 900 Radio ................................ 40 A Express Autobase ......................................................... 47 Introduction .................................................................. 48 Configuring a New Base Station ................................... 50 Entering the GPS antenna height ......................... 51 Establishing a location .......................................... 52 Setting the base name ........................................... 53 Saving the base station settings ............................ 53 Using the Express Autobase Function ......................... 54 Display Mode Menu ............................................... 63 File Management Menu .......................................... 64 Introduction In this chapter: - Welcome - About the Product - Related Information - Technical Assistance - Your Comments Welcome This manual describes how to set up and use the AgGPS® Autopilot Base Station. Even if you have used other Global Positioning System (GPS) products before, Trimble recommends that you spend some time reading this manual to learn about the special features of this product. If you are not familiar with GPS, visit the Trimble website (www.trimble.com) for an interactive look at Trimble and GPS. This publication assumes that you know how to use the Microsoft® Windows® operating system. About the Product The AgGPS Autopilot Base Station is a mobile, all-in-one case, base receiver and base radio. The all-in-one case allows fast, efficient setup because most of the equipment can be kept in the case during operation. Unlike the old base station, which required two tripods, the new AgGPS Autopilot Base Station has a single tripod and a remote radio antenna, which allows you to leave the radio and receiver in the case and connected at all times. Related Information Related information is found in the update notes. A warranty activation sheet is included with this product. Send it in to automatically receive update notes containing important information about software and hardware changes. Contact your local Trimble dealer for more information about the support agreement contracts for software and firmware, and an extended warranty program for hardware. Technical Assistance If you have a problem and cannot find the information you need in the product documentation, contact your local dealer. Alternatively, do one of the following: - Request technical support using the Trimble website at www.trimble.com/support.html - Send an e-mail to email@example.com Your Comments Your feedback about the supporting documentation helps us to improve it with each revision. E-mail your comments to firstname.lastname@example.org. 
Configuring the Base Station Case for Use In this chapter: - Introduction - Setting up the Base Station Case Introduction This chapter describes how to configure the AgGPS Autopilot base station case for access to the cabling and for ease of use. It is important to place the cables in such a way that the cable that needs to be removed from the case first is on top. Setting up the Base Station Case 1. Check the contents of the base station case. See Table 2.1. Make sure all components of the base station are in the AgGPS Autopilot base station case. The components are shipped as P/N 47733-00 and P/N 47733-90. The components from these two assemblies are placed into the case. Table 2.1 Contents of the AgGPS Autopilot base station case | Qty | Part number | Description | |-----|----------------|--------------------------------------------------| | 1 | 47772-00 | Rugged base station case | | 1 | 36487-02 | MS750™ receiver | | 1 | 39395-90 | SiteNet™ 900 radio (no antenna) | | 1 | 41249-00 | Zephyr Geodetic™ GPS antenna | | 1 | 22882-10 | Radio antennas: 0 dB, 3 dB, and 5 dB, and base | | 1 | 46740 | NMO to N(F) adaptor for SiteNet radio | | 1 | 49102 | Bracket for remote radio antenna | | 1 | 49828 | Antenna mount for bracket | | 2 | 50017 | Extension pole (25 cm) | | 1 | 47019-05 | GPS antenna cable TNC (M) – N (M) | | 1 | 50751-10 | Remote antenna cable LMR240 | | 1 | 38968-01 | MS750 to SiteNet 900 cable | Table 2.1 Contents of the AgGPS Autopilot base station case (continued) | Qty | Part number | Description | |-----|---------------|--------------------------------------------------| | 1 | 44087-10 | MS750 to battery power cable | | 1 | 50761-00-ENG | *AgGPS Autopilot Base Station Quick Reference Card* | Table 2.2 shows the items that are part of the AgGPS Autopilot base station, but that are not required in the field. Table 2.2 Additional contents of the AgGPS Autopilot base station | Qty | Part number | Description | |-----|---------------|--------------------------------------------------| | 1 | 40868-03-ENG | *MS Series Manual* | | 1 | 45960-01-ENG | *SiteNet 900 Manual* | | 1 | 45502-03 | MS750 cable for office use | | 1 | 43249-20 | MS750 CD-ROM | | 1 | 40945-10 | COMMSET software disk for SiteNet 900 | | 1 | 50761-50-ENG | *Getting Started Guide* (this document) | 2. Attach the remote antenna mount to the remote antenna bracket and the NMO to N(F) adaptor to the SiteNet radio. a. Attach the antenna mount for bracket (P/N 49828) to the narrow end of the bracket for the remote radio antenna (P/N 49102) as shown in Figure 2.1. Place the rubber gasket from the radio antennas (0 dB, 3 dB and 5 dB) and the base (P/N 22882-10) between the NMO connector side of the mount and the bracket. The NMO connector side of the mount should be on the side of the bracket without the round cutout, that is, it should be at the opposite end of the bracket. b. Attach the NMO to N(F) adaptor for SiteNet radio (P/N 46740) to the SiteNet radio (P/N 39395-90) as shown in Figure 2.2. 3. Place the equipment in the base station case for transport or storage as shown in Figure 2.3. c. Place the MS750 receiver (P/N 36487-02) and SiteNet 900 radio (P/N 39395-90) in the bottom of the case. d. Place the Zephyr Geodetic antenna (P/N 41249-00), 3-dB radio antenna (medium length whip antenna from P/N 22882-10), and radio antenna base (from P/N 22882-10) in the top of the case. e. Place the 0-dB radio antenna (shortest whip antenna) and Allen wrench (from P/N 22882-10) into the foam in the upper left corner of the bottom of the case. f.
Place the bracket for the remote radio antenna with the antenna mount for bracket (as assembled in step 2 of this procedure) in the upper part of the bottom of the case between the two pieces of foam. g. Place all cables in the oval shaped cutout on the bottom of the case in the following order: - MS750 to SiteNet 900 cable (P/N 38968-01) Connect the SiteNet 900 connector (Bendix) to the radio and the 12-pin ConnexALL to the MS750 port A (A). The cable can stay connected. *Note – The 12-pin ConnexALL is keyed at 12 o’clock on this cable. Typically, Trimble ConnexALLs are keyed at 2 o’clock.* - GPS antenna cable TNC (M) – N (M) (P/N 47019-05) Connect the N (M) connector to the MS750 antenna connector (ANT). Wrap the cable around the oval cutout. The cable can stay connected to the MS750. - MS750 to battery power cable (P/N 44087-10) Wrap the cable around the oval cutout. The cable is not connected to the MS750 when it is stored. - Remote radio antenna cable LMR240 (P/N 50751-10) Wrap the cable around the oval cutout. The cable is not connected to the SiteNet 900 when it is stored. h. Place the two 25 cm extension poles in the cutout area to the right of the MS750 and the SiteNet 900. i. Place the 5-dB radio antenna (longest whip antenna) diagonally across the bottom of the case. j. Place the *AgGPS Autopilot Base Station Quick Reference Card* (P/N 50761-00-ENG) in the case. Autopilot Base Station Display Mode In this chapter: - Introduction - Adjusting the contrast - Configuring the MS750 Receiver Introduction This chapter describes how to configure the MS750 to show only the screens that are required to set up the Autopilot Base Station for operation. Typically, you only need to perform this procedure once, just before you configure the base station for the first time. However, you will also need to set the display mode again after you have loaded new firmware onto the MS750 receiver. Adjusting the contrast If the MS750 receiver powers on and only a blank screen appears, the contrast setting could be set too low. Also, the LCD display can become difficult to read as lighting conditions change. To increase or decrease the contrast: 1. In the *MS750 Home* screen, press Right. The top right square flashes. 2. Press Up to increase the contrast or press Down to decrease the contrast. 3. Press Enter to accept the contrast setting. Configuring the MS750 Receiver **Tip** – To go directly to the Home screen from anywhere in the MS750 menu system, press Up and Down simultaneously one or more times. 1. In the *MS750 Home* screen, press Right until the *Config Menus* screen displays: 2. Press Down to display the *Display Mode* screen. 3. Press Down again to enter the *Set Display Mode* screen. The *Set Display Mode* screen displays: a. Press Right to activate the display mode selection. b. Press Down until *CMR RTK Base* mode displays. c. Press Enter to accept the *CMR RTK Base* mode. The flashing cursor disappears. d. Press Down then Enter to exit the *Display Mode* screen. The MS750 receiver is now set to the default parameters and will display only the *Autopilot Base Station* screen. **Tip** – If you need access to all screens in the MS750 receiver, complete this procedure, but in Step 3b press Down until *Custom* displays.
Setting up the Mobile Autopilot Base Station In this chapter: - Introduction - Setting up the Base Station on a Reference Mark Introduction This chapter describes how to set up the base station on a reference mark using a fixed height tripod (P/N 28959-00) or a variable height tripod (wooden leg) with tribrach and adaptor (P/Ns 12178, 12179 and 12180). Setting up the Base Station on a Reference Mark 1. Extend the tripod center post and then extend the legs to the required height. Note – Do not fully extend the wooden tripod leg as you may need the extra length adjustment of the leg when you set up the tripod. In this step, it is important to make sure the GPS antenna is higher than any objects or people in the vicinity of the base station. 2. Position the tripod over the reference mark. Read all procedures in this step for your tripod type before you begin to plumb and level the tripod. Do one of the following: Fixed height a. When you use a fixed height tripod, place the tip of the center post on the reference mark and adjust the legs to level the tripod. Tip – Make sure the center post stays on the mark while you level the tripod by loosening the lock screw on the third leg before you attempt to level the tripod. Leave the lock screw loose until you are completely finished with leveling and plumbing the tripod. b. Drive the tripod legs firmly into the ground with the foot pegs. c. Center the bubble level on the tripod center leg by carefully adjusting the two legs with the quick release adjustable handles. d. Tighten the lock screw on the third leg. Check the plumb and level. The tip should be on the center of the reference mark. Variable height a. When you use the variable height tripod, tighten the tribrach to the tripod head. b. Position the tripod roughly over the reference mark. c. While you look through the optical plummet sight (on the side of the tribrach), move the cross-hairs or circle as close to the mark as possible (this does not need to be exactly over the mark at this point) by lifting the entire tripod and moving it. Tip – When you move the tripod, ensure that you keep the head of the tripod as level as possible. d. Drive the tripod legs firmly into the ground with the foot pegs. e. Loosen the tribrach just enough to allow movement of the tribrach on the head of the tripod. f. Look through the optical plummet sight, and move the cross-hairs or circle exactly over the mark. g. Tighten down the tribrach and check that you are still centered over the mark. h. Level the tribrach by adjusting the legs up or down on the tripod. 3. Assemble the GPS antenna to the range pole. Place the radio bracket between the bottom of the range pole and the tribrach adaptor plug (removable brass plug at top of tripod, fixed height) or in the tribrach adaptor (variable height). Tighten the tribrach adaptor plug to the range pole with the radio bracket between the plug and range pole. Attach the radio antenna to the antenna mount on the radio antenna bracket. 4. Carefully place the GPS antenna with the radio antenna bracket on the tripod. Once they are in place, check the level and plumb. 5. Remove the remote radio antenna cable LMR240 (P/N 50751-10) and the MS750 to battery power cable (P/N 44087-10) from the case. The MS750 to SiteNet 900 cable (P/N 38968-01) should already be connected, as described in step 3 of Setting up the Base Station Case, page 5. 6.
Remove the loose end of the GPS antenna cable TNC (M) – N (M) (P/N 47019-05) from the case. Feed the TNC-type connector end of the cable through the access port on the base station case and connect it to the Zephyr geodetic antenna TNC-type connector. 7. Connect one end of the remote radio antenna cable LMR240 (P/N 50751-10) to the radio antenna mount on the bracket. Feed the other end of the cable through the access port and connect it to the SiteNet radio adaptor (P/N 46740). 8. Feed the TA-3 (F) end of the MS750 to battery power cable (P/N 44087-10) through the access port and connect it to the TA-3 (M) connector on the MS750 to SiteNet 900 cable (P/N 38968-01). 9. Connect the alligator clips to the power source: red to positive (+), black to negative (–). 10. Check that the MS750 (front display) and the SiteNet 900 (LED on the bottom of the radio) both have power. 11. Configure and start the AgGPS Autopilot base station. For more information, see Configuring the Base Station, page 21.

Configuring and Starting an Autopilot Base Station In this chapter: - Introduction - Configuring the Base Station Introduction This chapter describes the required parameter settings for configuring a base station in the MS750 receiver and setting the Autobase function. Note – If you have already configured your receiver and turned on Autobase for the station you are currently occupying, simply apply power to the receiver. The Autobase function should configure the base station and begin transmitting RTK data to your rover. If Autobase does not work properly, see the troubleshooting table in the AgGPS Autopilot Base Station Quick Reference Guide. If you need to adjust the contrast of the LCD display, see Adjusting the contrast, page 12. Configuring the Base Station To configure the AgGPS Autopilot base station, you need to complete the following procedures: - Entering the base station antenna type and height, page 23 - Setting the location of the base station, page 25 - Entering the name and activating the ID of the base station, page 29 - Turning on the Use For Autobase function, page 31 - Saving an Application File, page 35. Tip – To go directly to the Home screen from anywhere in the MS750 menu system, press $\wedge$ and $\vee$ simultaneously one or more times. Entering the base station antenna type and height 1. In the receiver *Home* screen, press $>$ until the *Config Menus* screen appears. *Note – If your receiver has existing base station positions (through using Autobase), go to Turning on the Use For Autobase function, page 31.* 2. Press $\vee$ until the *Display Mode* screen appears. 3. Press $>$ until the *Config Base Stn* screen appears. You can now start configuring the base station. 4. Press $\vee$ to display the *Antenna Type* screen. 5. Press $>$ to activate the antenna type selection. Press $\vee$ until your antenna type appears. *Note – For a new AgGPS Autopilot base station, the correct antenna type is Zephyr Geodetic (P/N 41249-00). For an old AgGPS Autopilot base station, the correct antenna type is uCentered 13" GP (P/N 36569-00).* 6. To accept the antenna type, press $=$ then $\vee$. The *Base Ant Ht* screen appears. 7. To activate the *Base Antenna Ht* field, press $>$. a. Place the antenna on the reference mark and measure the height of the antenna. b. Enter the antenna height in the required units (meters or feet) as previously configured. To do this, press $>$ to move the cursor to the right.
If you press $>$ when the cursor is on the last character, the cursor moves back to the first character. Press $\wedge$ or $\vee$ to change the value of each number. **Fixed height** To determine the exact height of the GPS antenna (new base station), enter the extended height of the center pole, plus the added height of the radio antenna bracket (0.027 m or 0.088 ft) and the range pole extension (0.250 m or 0.823 ft). For example, a 2.000 m center pole gives 2.000 + 0.027 + 0.250 = 2.277 m. **Variable height** To determine the exact height of the GPS antenna (new base station), enter the height of the tripod, plus the added height of the radio antenna bracket (0.027 m or 0.088 ft) and the range pole extension (0.250 m or 0.823 ft). *Note – When you use the old base station system without the radio antenna bracket and range pole extension, enter the extended height of the center pole (fixed height) or the height of the tripod (variable height).* c. To accept the base antenna height, press $=$. Setting the location of the base station Do one of the following: - If you know the coordinates of the reference mark, use the Edit Base Position method, page 25. - If you do not know the coordinates of the reference mark, use the Set From Avg method, page 26. Edit Base Position method Use this method to enter or edit the current coordinates (latitude, longitude, and height) of the MS750 receiver to the known coordinates of the reference mark. The reference mark has been surveyed, previously set by averaging, or its coordinates have been determined by another method. Trimble strongly recommends that you have all base coordinates surveyed when you use multiple base locations across a large area or farm. There are several ways to survey the positions or make them relative to each other. For more details, contact your local Trimble dealer. With the base coordinates relative to each other, you can move from one field to another and use any base location within the area, whether or not that base was used to set the A-B line of the field. 1. In the Config Base Stn screen, press $\vee$ until the Base Location – Edit Base Pos screen appears. 2. Press $\vee$. The Base Latitude screen appears. 3. Enter the latitude in degrees, minutes, and seconds. Then enter the hemisphere, N or S. (The default hemisphere value is N - North.) To accept the base latitude, press $=$. The Base Longitude screen appears. 4. Enter the longitude in degrees, minutes, and seconds. Then enter the hemisphere, W or E. (The default hemisphere value is E - East.) To accept the base longitude, press $=$. The *Base WGS Height* screen appears. 5. Enter the sign (+/-) and the height in the configured units. To accept the base height, press $=$. **Caution** – You should enter the *ellipsoid height* of the reference mark, not the *elevation*. 6. When the *Base Location Accept position* screen appears, do one of the following: a. To *accept* the base position, press $=$; the *Base Location – Edit Base Pos* screen appears. b. To *reject* the base position, press $\vee$. The *Reject position* screen appears. Then press $=$ and the *Base Location – Edit Base Pos* screen appears. 7. If you need to edit the position again, press $\vee$. If you accepted the position, press $\vee$ twice. The *Base Name* screen appears. **Set From Avg method** Use this method if you do not know the coordinates for the reference mark. With *Set From Avg*, you can average positions from the MS750 receiver over a period of time to establish reference mark coordinates.
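Conceptually, Set From Avg just takes the arithmetic mean of the 1-second position epochs. The minimal Python sketch below illustrates that idea only; it is not part of the receiver firmware, and the epoch format shown is hypothetical.

```python
# Minimal sketch of the Set From Avg idea: average 1-second position
# epochs to estimate reference mark coordinates. Illustration only --
# not receiver firmware; the epoch format is hypothetical.

def average_position(epochs):
    # epochs: list of (lat_deg, lon_deg, ellipsoid_height_m) tuples,
    # one per 1-second measurement.
    n = len(epochs)
    if n == 0:
        raise ValueError("no measurements to average")
    lat = sum(e[0] for e in epochs) / n
    lon = sum(e[1] for e in epochs) / n
    hgt = sum(e[2] for e in epochs) / n
    return lat, lon, hgt

# Example with three epochs (Trimble recommends at least 60):
print(average_position([(53.010001, -1.480002, 51.2),
                        (53.010003, -1.480001, 51.4),
                        (53.010002, -1.480003, 51.3)]))
```

The longer the list of epochs, the more the autonomous position noise averages out, which is why the procedure below recommends at least 60 measurements.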
Whether or not they are averaged, the resulting autonomous GPS coordinates (latitude, longitude, and height) can contain large errors in relation to the true reference mark or to another base location in your area. If you use this method, make sure that you enter the base antenna height first (see Entering the base station antenna type and height, page 23, Step 7). 1. In the *Config Base Stn* screen, press $\vee$ until the *Base Location – Set From Avg* screen appears. 2. Press $\vee$. The *Base Location Averaging* screen appears. The screen displays each measurement (1-second epoch) as it is counted. The longer you average the autonomous position, the more accurate the base location will be. Trimble recommends that you average the position for at least 60 measurements (60 seconds). 3. Once the required number of measurements has been taken, press $\vee$ to stop the averaging and display the position. The *Base Latitude* screen appears. 4. Write down the latitude (degrees, minutes, and seconds) and the hemisphere designator (N or S). **Tip** – Record the latitude, longitude, and height of all your base locations for future use. 5. To accept the base latitude, press $=$. The *Base Longitude* screen appears. 6. Write down the longitude (degrees, minutes, and seconds) and the hemisphere designator (W or E). To accept the base longitude, press $=$. 7. The *Base WGS Height* screen appears. Write down the height value in the configured units. Make sure that you include the sign of the value (+/-). **Caution** – The value in this field is the *ellipsoid height* of the reference mark, not the *elevation*. 8. To accept the base height, press $=$. The *Base Location Accept position* screen appears. 9. Do one of the following: a. To *accept* the base position, press $=$; the *Base Name* screen appears. b. To *reject* the base position, press $\vee$. The *Reject position* screen appears. Then press $=$ and the *Base Location – Edit Base Pos* screen appears. 10. Press $\vee$. The *Base Name* screen appears. Entering the name and activating the ID of the base station 1. In the *Base Location - Edit Base Pos* or *Base Location - Set from Avg* screen, press $\vee$ until the *Base Name* screen appears. To activate the name entry, press $>$. 2. To enter each character of the name of the base location (reference mark), press $\wedge$ or $\vee$. To move to the next character, press $>$. *Note – Each base location must have a unique name of up to eight characters. It is important to record the name of the base location, along with the coordinates that you recorded (see Edit Base Position method, page 25 and Set From Avg method, page 26).* 3. To accept the base name, press $=$ and then $\vee$. The *Base ID* screen appears. 4. To activate the *Base ID* field, press $>$. *Note – The base station only transmits RTK corrections, so you only need to complete the CMR ID field.* 5. To move the cursor to the *CMR ID* field, press $>$. To enter each number in the field, press $\wedge$ or $\vee$. The CMR ID is for the radio, not the base location. It must be between 00 and 31, and must be unique for each radio that is transmitting in your area or on your farm. 6. To accept the base ID, press $=$, then $\vee$. The *CMR Out* screen appears. 7. Press $>$ to activate the *CMR Out* field. Press $\wedge$ or $\vee$ to change the *CMR Out* field to Port A. 8.
Press $>$ to move to the *CMR type* field. Press $\wedge$ or $\vee$ to change the *CMR type* field to CMR+. 9. Press $=$ to accept the *CMR Out* screen and then press $\vee$ to move to the *RTCM Out* screen. *Note – If you have already configured the display mode to CMR RTK Base, as described in Autopilot Base Station Display Mode, page 11, or if the RTCM screen does not appear, go to Turning on the Use For Autobase function, page 31, Step 1.* 10. Press $\wedge$ or $\vee$ to change the *RTCM out* field to Off. 11. Press $=$ to accept the *RTCM out* field and then press $\vee$ to move to the *Use For Autobase* screen. Turning on the Use For Autobase function Trimble recommends that you use the Autobase function of the MS750 receiver. The Autobase function allows you to return to a reference mark, set up the base station, and apply power to the MS750; the receiver then automatically finds the base location for the reference mark. When you select Yes in the *Use For Autobase* field, you must save the base station configuration in an application file. For more information, see Saving an Application File, page 35. You can store up to 10 base locations in the receiver memory. 1. Press $>$ to activate the *Use For Autobase* field. Press $\wedge$ or $\vee$ to change the field to Yes. 2. Press $=$ to accept the *Use For Autobase* screen and then press $\vee$. 3. The *Exit Config* screen appears. Press $=$ to exit the base station configuration.

Working with Application Files In this chapter: - Introduction - Activating an Application File - Saving an Application File - Deleting an Application File Introduction When a base location is saved, an application file is created in the MS750 receiver. Up to 10 application files can be stored in the receiver. If you experience difficulty with the Autobase function, you may need to manipulate the base locations (appfiles) manually. Do this in the CFG: Appfile screen (select Config Menus / Config GPS). In this screen you can activate an application file, and save or delete it. Activating an Application File Tip – To go directly to the Home screen from anywhere in the MS750 menu system, press $\wedge$ and $\vee$ simultaneously one or more times. 1. In the MS750 Home screen, press $>$ until the Config Menus screen appears. 2. Press $\vee$ to display the Display Mode screen. 3. Press $>$ until Config GPS appears. 4. Press $\vee$ to enter the Config GPS screens and start (activate) an appfile. a. Press $>$ to activate the CFG: Appfile screen. The cursor should begin flashing on Start. b. Press $>$ to move the flashing cursor to the second line, to the appfile name field. This field may be set to Current Active or [APPFILENAME] ACTIVE. 5. Press $\vee$ to scroll through the appfiles currently stored on the MS750. Press $=$ to accept the APPFILENAME you want to use. The screen displays the message Wait... and then displays [APPFILENAME] ACTIVE. You can now go to Config Base Stn to view the base station configuration or make any changes to it. **Saving an Application File** Before you can use a base location and the Autobase function, the base location must be saved as an application file (appfile) in the MS750 receiver. To do this: 1. In the Config Base Stn screen, press $>$ until the Config GPS screen appears. Press $\vee$ to configure GPS fields. 2. The CFG: Appfile Start screen appears.
Press $>$ to activate the Appfile screen. 3. Press $\vee$ until Save appears as the appfile option. 4. Press $>$ to move the cursor to the appfile name on the second line. Enter the name for the appfile, using $\wedge$ or $\vee$ to change each character and $>$ to move to the next character. **Tip** – To make it easy to identify the appfiles, Trimble recommends that you use the base name as the appfile name. 5. Press $=$ to save the appfile. The message Saved appears briefly next to the appfile name, then the message Active appears next to the appfile name to identify it as the active appfile (the screen shows CFG: Appfile Save, with Basename Saved on the second line). **Tip** – To go directly to the Home screen from anywhere in the MS750 menu system, press $\wedge$ and $\vee$ simultaneously one or more times. ### Deleting an Application File You will need to delete appfiles when you have reached the 10-file limit or when you find duplicate appfiles on the receiver. Once an appfile is deleted, you cannot recover it. Make sure that you are deleting the correct application file and, if you may need it at a later date, that you have backed it up on your computer. 1. In the *MS750 Home* screen, press $>$ until the *Config Menus* screen appears. 2. Press $\vee$ to display the *Display Mode* screen. 3. Press $>$ until *Config GPS* appears. 4. Press $\vee$ to enter the *Config GPS* screens and delete an appfile. a. The *CFG: Appfile Start* screen appears. Press $>$ to activate the *Appfile* screen. b. Press $\vee$ until *Del* is shown as the appfile option. c. Press $>$ to move the cursor to the appfile name (on the second line). Press $\wedge$ or $\vee$ to scroll through the appfile names and find the appfile you want to delete. d. Press $=$ to delete the appfile. 5. The message *Deleted* is displayed next to the application file name for a brief period, and is then replaced by the active application file in the MS750. **Tip** – To go directly to the Home screen from anywhere in the MS750 menu system, press $\wedge$ and $\vee$ simultaneously one or more times.

Configuring the SiteNet 900 Radio In this chapter: - Introduction - Configuring the SiteNet 900 Radio Introduction Before you use the SiteNet 900 radio for the first time, you must configure it as a base radio and set it to the correct network setting. To configure the radio, use the COMMSET software. Before you can configure the radio, make sure that: - The SiteNet 900 radio is connected to the MS750 receiver using the MS750 to SiteNet 900 cable (P/N 38968-01) - The MS750 receiver is connected to power using the MS750 to battery power cable (P/N 44087-10) Configuring the SiteNet 900 Radio 1. Connect the SiteNet 900 radio to the serial (COM) port of your computer using the DE-9 connector on the MS750 to SiteNet 900 cable (P/N 38968-01). 2. On your computer, start the COMMSET software. The following dialog appears: 3. In the first field, enter or select the serial port that you are using on your computer from the drop-down list. 4. Click **Connect**. The software attempts to connect to the SiteNet 900 radio. If the software connects, the following dialog appears: 5. If the software fails to connect: a. Check all connections. b. Check that both the radio and the receiver have power. Check the LED on the bottom of the SiteNet radio and the receiver. c.
Check that you have selected the correct serial port (COM port). Once the software connects, the following dialog appears: To change the radio's settings, either press [Reset] to set the radio to its factory defaults, or change any or all of the following settings: 1. Select a new serial port baud rate: 38400 2. Select a new serial port parity setting: None 3. Select a new mode: - Base - Rover - Repeater #1 - Repeater #2 - Repeater #3 - Repeater #4 - Is Repeater within range of Base? 4. Then press [Set] to apply your selections. To return to the main menu with no changes, press [Cancel]. More help is available by pressing [Help]. If you want more information about the radio, press [Details]. If you have an update file and wish to update, press [Update]. For advanced CMR options, press [Advanced]. 6. The network settings must match the settings of the rover SiteNet 900 radio. To set up the correct network, click Network Settings. The following dialog appears: Note – If you know of another Autopilot system in your area using a SiteNet 900 radio, make every effort to determine what network setting it is using. Then set your network settings to a different network. 7. Select a network setting that matches your rover radio. To do this, click the Network Settings list and select a network. Click OK. You are returned to the SiteNet900 Properties dialog. 8. In the SiteNet900 Properties dialog, change the following settings: – In the first item, enter or select a new serial port baud rate of 38400. – In the second item, enter or select None. – In the third item, select the Base option. Note – To function properly, the radio must be set to Base. 9. There are some optional settings that you can use with the base radio. Click **Advanced** in the *SiteNet900 Properties* dialog. The following dialog appears: 10. In the Advanced CMR dialog, set the following: - Select the *CMR out both serial ports (for debugging)* check box. *Note – Service personnel use this option to debug any radio problems. Turning it on does not affect radio performance.* - If you are in an area with heavy radio (RF) traffic, select the *Turbo mode (for highly jammed areas)* check box. *Note – This mode increases the power consumption of the base radio. Only use this option where necessary.* - In the *Higher rate CMRs* group, select the *Allow normal 1Hz CMR transmission* option so that CMR works with the Autopilot system. - Click **Set** to return to the *SiteNet900 Properties* dialog. 11. In the *SiteNet900 Properties* dialog, click **Set** (item 4) to set the SiteNet 900 radio to the selected properties. The software returns to the *Commset* dialog. 12. Click **Exit** to complete the configuration. 13. Disconnect the MS750 to SiteNet 900 cable (P/N 38968-01) DE-9 connector from the serial port of your computer. The base SiteNet 900 radio is now ready to use. *Note – Make sure that the rover SiteNet 900 radio is configured as a rover with the same network setting.*

Express Autobase In this appendix: - Introduction - Configuring a New Base Station - Using the Express Autobase Function Introduction This appendix describes how to use Express Autobase and how to configure a new base station when Autobase is not available or fails. For help with navigating the menu system, see Figure A.1, page 51, Figure A.2, page 59, and Figure A.5, page 64.
At start-up, the MS750 receiver searches its system for a base file that contains a base station location (position) that matches the current GPS antenna location. If the receiver finds such a file, it loads the file and begins transmitting corrections to the rover. This is called Autobase. If the receiver does not find the required file, you must configure the receiver manually. Figure A.1 on the following page shows the Config menu structure in the Express Autobase Display mode. Figure A.1 Config menu structure – Express Autobase Display mode Configuring a New Base Station This section describes how to configure the base station and start transmitting corrections from the base to the rover (tractor). **Caution** – The Express Autobase Display mode sets the default antenna type to Zephyr Geodetic (P/N 41249-00). Do not use Express Autobase with a uCentered 13" GP (P/N 36569-00) antenna. **Tip** – To go directly to the Home screen from anywhere in the MS750 menu system, press $\wedge$ and $\vee$ simultaneously one or more times. If the current location does *not* have a base file associated with it, the following screen appears when you start the receiver:

```
No Bases Found
Autobase Failed
```

To go directly to the *Config Base Stn* menu, press any key on the receiver front panel. The following screen appears:

```
Config Base Stn
Press V to Enter
```

To configure a new base station for your current location, you need to complete the following procedures: - Entering the GPS antenna height, page 53 (this is the height above the ground) - Establishing a location, page 54 for the base station (latitude, longitude, and height) - Setting the base name, page 55 - Saving the base station settings, page 55 to the receiver memory. **Entering the GPS antenna height** 1. Measure the height of the base antenna (in meters) above the reference mark. Use one of the following methods: – **Fixed height:** the extended height of the center pole, plus the height of the radio bracket (0.027 m) and the extension pole (0.250 m) that have been added to extend the height of the GPS antenna. – **Variable height:** the height of the tripod, plus the height of the radio bracket (0.027 m or 0.088 ft) and the extension pole (0.250 m or 0.823 ft) that have been added to extend the height of the GPS antenna (new base station). 2. Press $\vee$ to navigate to the *Base Ant Ht* screen. Press $>$ to activate the antenna height field. 3. Enter the antenna height: a. To move the cursor, press $>$. If you move the cursor when it is on the last character, it loops back to the first character. b. To change the value of each number, press $\wedge$ or $\vee$. c. To accept the base antenna height, press $=$. 4. Press $\vee$ to move to the next screen. Establishing a location The MS750 receiver uses the Set From Avg method to establish a position (reference mark coordinates). This method averages positions over a period of time to provide GPS coordinates (WGS-84 latitude, longitude, and height). 1. From the *Set From Avg* screen, press $\vee$. The base location *Averaging* screen appears. A timer on the screen counts down from 60 seconds. To stop averaging and save the position, you can press $\vee$ at any time during the 60-second countdown. However, the longer you average the position, the more accurate the base location will be. Trimble recommends that you allow the receiver to average for at least 30 seconds. 2.
Once the *Base Location – Set From Avg* screen appears again, press $\vee$ to move to the *Base Name* screen. Setting the base name 1. In the *Base Name* screen, press $>$ to activate the name entry field. 2. Press $\wedge$ or $\vee$ to enter each character of the base name. Press $>$ to move to the next character. The base name can be up to 8 characters long. Each base location in your area or on your farm must have a unique name. It is important to record the name of the base location, together with the coordinates recorded in Establishing a location, page 54, Step 2. 3. To accept the base name, press $=$. Then press $\vee$ to move to the *Exit Config* screen. Saving the base station settings In the *Exit Config* screen, press $=$ to exit the base station configuration. A base station file is automatically saved on the MS750 receiver. This file contains the antenna height, the base station name and location, and the preset base station parameters. The *Home* screen displays the message AUTOBASE. The receiver is now configured for the current location and is sending corrections to the rover system. Each time you return to this station, the receiver automatically finds the correct base file by matching the receiver's current location to the location in the saved base file. *Note – If the Home screen does not display AUTOBASE, refer to the Troubleshooting table, page 65.* *Tip – If the screen shown is not the Home screen, you can press $\wedge$ and $\vee$ simultaneously (usually twice) to go directly to the Home screen from anywhere in the MS750 menu system.* **Using the Express Autobase Function** Use the Autobase function as a tool to automatically configure and start the Autopilot base station. Each time the GPS antenna is set up on a reference mark and power is applied to the receiver, the MS750 uses its current location to search each base file for the base location that is closest to the current location (see Saving the base station settings, page 55). If the receiver finds a base file with a location within 50 meters (164 feet), the base station is automatically configured and starts broadcasting corrections to the rover (tractor); a minimal sketch of this matching rule is given at the end of this appendix. 1. Carefully set up the base station over the reference mark using the steps outlined in Setting up the Base Station on a Reference Mark, page 16. *Caution – If the antenna height has changed since the last occupation, you must update the Antenna Height field to ensure that correct positions are generated.* 2. Apply power to the MS750. The receiver begins to track satellites and calculate its current position. In the *Home* screen, the message *Waiting For Pos* appears on the second line:

```
SV: 05  PDOP 2.8
WAITING FOR POS
```

Once the receiver has a position, the message *Searching* appears on the second line:

```
SV: 07  PDOP 1.9
SEARCHING..
```

When the receiver finds a base location that matches the receiver location, *AUTOBASE* appears on the second line of the *Home* screen. This means that Autobase has started successfully and that the receiver is using the base file (appfile) to configure and start the base station. You can now begin operations. If you receive a message other than AUTOBASE, see Table A.1 below and the menu diagrams on the following pages for more information.
| Message | Description | Action |
|---------|-------------|--------|
| AUTOBASE | Autobase start-up is successful and the receiver is using the base file (appfile) to configure and start the base station. | Begin operations. |
| No Bases Found | No Autobase base file (appfile) was found with a base location within 50 meters (164 feet) of the receiver's current location. The Autobase function has failed. | If you are occupying this station for the first time, you must establish a base file for this station (reference mark). For more information, see Configuring a New Base Station, page 52. Make sure that you are occupying the correct reference mark. The reference marks must be well marked and protected from any possible damage. |
| Autobase Failed | More than one base file (appfile) was found with a base location within 50 meters (164 feet) of the receiver's current location. The Autobase function has failed. | Make sure that you have not saved more than one base file with different names for the reference mark you are occupying. For information on how to delete any unnecessary files, see Figure A.3, page 61. |

Figure A.2 shows the successful Autobase start-up sequence. You are setting up on an existing base station and the location is saved as a file in the MS750. Press $\vee$ from the Status screen to verify the correct base file, position, and name. Figure A.2 Successful startup sequence This menu diagram shows the Autobase functionality when the current antenna position is not saved within a base file. This may occur in the following situations: - You are setting up on a new base station (the current location is not saved as a base file). - The GPS antenna position is not within 50 meters (164 feet) of a base station saved in a file (wrong reference mark). - There are no base station files that match. Review the base files; see Figure A.4, page 63. Figure A.3 shows the Autobase functionality when more than one file has the same base position. Figure A.3 Autobase functionality with more than one file with the same base position In this situation, you are setting up at a location that has two or more base station files associated with the location (latitude and longitude) of the base station. Determine which base files are duplicates and delete the unnecessary files; see Figure A.4. To delete a file: 1. Navigate to the *File Management* menu. 2. Press $>$ twice and then press $\vee$ to change the file management action to Del. 3. Press $>$ and then press $\vee$ to scroll through the base file names to find the one you want to delete. 4. Press $=$. Figure A.4 shows how to review base files, to determine whether a file is missing or duplicated. To review base files: 1. Navigate to the *File Management* menu. 2. Press $>$ twice and then press $\vee$ to scroll through the base file names to review them. 3. Press $=$ when you have finished reviewing files, and then press $=$ again to exit the menu. Figure A.5 shows the Express Autobase screens and functionality. Figure A.5 Express Autobase screens and functionality Table A.2 shows the status messages that can appear in the Express Autobase screens.
| Status Message | Description |
|----------------|-------------|
| WAITING FOR POS | The receiver is waiting for the first position from the satellites. |
| SEARCHING.. | The receiver is searching for a base file that contains the current position of the GPS antenna. |
| AUTOBASE | The receiver has found a base file with a position that matches the position of the current GPS antenna. The receiver is working as an Autobase base station. |
| SEARCH FAILED | The Autobase function has failed for one of the following reasons: No Bases Found – Autobase Failed; >1 Base Found – Autobase Failed. |
| OLD POSITION | The receiver has insufficient satellites for a valid position. Typically seen at receiver start-up. This message is removed when four or more satellites are available. If this message appears for a long time and the screen displays SV: 00, check the connections to the GPS antenna. |

**Display Mode Menu** Use the *Display Mode* menu to configure the number and types of menus that are available for displaying information and for configuring the receiver. The default display mode setting is Express Autobase. For more information, refer to your *MS Series* manual. File Management Menu Use the *File Management* menu to manage base station files (appfiles). Use this menu to start, save, or delete a file. The default display mode setting, *Express Autobase*, automatically saves and starts your file when you create a base station, so there is no need to save or start a file in this menu when working in the *Express Autobase* display mode. You may need to use the delete function to remove unnecessary files.
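As promised earlier in this appendix, here is a minimal Python sketch of the Autobase matching rule: search the saved base files for one whose position lies within 50 meters (164 feet) of the receiver's current position. It is an illustration only, not receiver firmware; the data layout and helper names are hypothetical.

```python
# Minimal sketch of the Autobase matching rule: find saved base files
# within 50 m of the current position. Illustration only -- not
# receiver firmware; the base-file layout is hypothetical.
import math

def distance_m(p, q):
    # Approximate ground distance in meters between two
    # (lat_deg, lon_deg) points; adequate at ranges of tens of meters.
    lat = math.radians((p[0] + q[0]) / 2)
    dy = (q[0] - p[0]) * 111_320.0                 # meters per degree of latitude
    dx = (q[1] - p[1]) * 111_320.0 * math.cos(lat)
    return math.hypot(dx, dy)

def autobase(current, base_files, radius_m=50.0):
    # Mimics the start-up search: AUTOBASE on exactly one match,
    # "No Bases Found" on zero matches, "Autobase Failed" on several.
    hits = [b for b in base_files if distance_m(current, b["pos"]) <= radius_m]
    if len(hits) == 1:
        return "AUTOBASE", hits[0]["name"]
    return ("No Bases Found" if not hits else "Autobase Failed"), None

bases = [{"name": "FIELD1", "pos": (53.0100, -1.4800)},
         {"name": "FIELD2", "pos": (53.1000, -1.5000)}]
print(autobase((53.01002, -1.48001), bases))   # -> ('AUTOBASE', 'FIELD1')
```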
Plasma gasification is one of the technologies leading us into the future. Here, we look at the rise of PG and its status around the world, and learn more about a new process which turns scrap carbon into energy using plasma gasification and alkaline fuel cells. by Peter Jones PG rated The benefits of plasma gasification Consider three factors occurring on a global scale: a looming energy gap, an overwhelming need for zero-carbon waste technologies, and ambitious 2020 emissions targets. With landfill availability declining over the next three years and gate fees rising, there is a need to find alternative ways of managing waste. The calorific value of waste is also important: municipal solid waste (MSW) is of interest to power companies who want to continue supplying heat, electricity and hydrogen or fuel gases using environmentally-friendly technology. One British venture, Waste2Tricity, has a mission to convert scrap carbon into energy (SCIE) using the most efficient energy conversion process available – implementing a unique combination of proven plasma assisted gasification (PG) technology with new generation alkaline fuel cells. 2020 emissions targets We are currently at a crossroads. Familiar phrases including 'low carbon energy' and 'emission reduction pledges' form the news agenda. With 2020 fast approaching, the four regions of the world expected to emit almost two thirds of the carbon between now and 2050 are under tremendous pressure. India has pledged to curb the carbon emitted relative to the growth of its economy – its carbon intensity – by 30%, while China may cut its carbon intensity by more than 40%. The EU has pledged a 20% cut in carbon emissions, and the US 17%. The recent Copenhagen climate change conference failed to find a solution to the problem. With the attention of the world's media on it for two weeks, negotiations ended in a weak political agreement that has no legal standing and does not force any country to reduce emissions. With power stations going off-line and higher landfill taxes in effect, more low carbon commercial technologies are needed if we are to meet 2020 emissions targets. Gasification and fuel cells – an unbeatable combination? Already commonly used in the chemical, fertilizer, and coal-to-liquids industries, gasification is finding increasingly diverse applications. The electricity/power industry has benefited from gasification technology for over 35 years. The waste-to-energy (WTE) industry is also realising the environmental and economic benefits of gasification, which is increasingly attracting attention as a potential solution to the challenges of renewable energy generation and shrinking landfill capacity. By enabling the recovery of available energy from low-value materials like MSW, gasification technology can change waste disposal from an environmental headache into a commercially viable proposition. WTE provides an interesting option for large retail energy users in so far as they can improve security of supply via a co-location model whilst reducing the cost of energy used in their processes. It has the potential to increase the amount of renewable electricity generated globally, reduce environmental impacts and reduce waste disposal costs. With some nations consuming more than others, the quality and quantity of waste varies across borders, and so does the way it is managed. Yet only two plasma-based MSW systems currently operate commercially, with plans afoot for a large new plant in the US.
The company supplying plasma gasification systems for all these facilities is Westinghouse Plasma Corporation (WPC), owned by Alter NRG and considered the world leader in PG. The two plants are based in Japan, built by Hitachi Metals, Ltd., and have processed MSW using WPC plasma gasification technology since 2002. The largest facility, located in Utashinai, was constructed in 2002 and became fully operational in 2003, processing 220 tonnes per day of a mixture of auto shredder residue and MSW to produce electricity. The second facility, near the neighbouring cities of Mihama and Mikata, treats 20 tonnes per day of MSW and four tonnes per day of sewage sludge to produce heat for a municipal waste water treatment facility; it was commissioned in 2002. WPC has two other projects in the pipeline, in New York and Minnesota. Spurred by volatile oil and natural gas prices and more stringent environmental regulations, it is generally acknowledged that CO$_2$ management will be a stricter requirement in future energy production. Gasification is an existing clean energy technology that is flexible and reliable, and worldwide gasification capacity is projected to grow 70% by 2015. **Waste2Tricity: the facts** *Waste2Tricity was established to implement the most efficient energy conversion process available using a unique combination of new generation alkaline fuel cells alongside plasma gasification and other existing proven technologies.* **Waste2Tricity's 2009 timeline:** | Date | Event | |------------|----------------------------------------------------------------------| | February | Waste2Tricity secures an exclusive agreement for the supply of new generation alkaline fuel cells from AFC Energy plc. | | March | Waste2Tricity submits a bid to the London Waste & Recycling Board for the conversion of London's rubbish into green energy. | | July | Professor Ian Arbon, CEng, CEnv takes the Chair of Waste2Tricity. | | July | Waste2Tricity announces a joint venture with Westinghouse New Energy Ltd to produce ultra-low carbon emission electricity from coal. | | October | Waste2Tricity is appointed by Alter NRG, the owner of Westinghouse Plasma Corp, as its exclusive UK sales representative for plasma assisted gasification technology. | | November | John Hall is appointed managing director. | **The Waste2Tricity Process** *Waste2Tricity obtains a flow of mixed waste stream, either in conjunction with an existing waste management company, allowing the installation to be established on an existing site, or from a commercial customer, such as a supermarket chain that back-hauls its waste to a central depot. Ideally, the mix comprises 35% organics, 35% paper and cardboard, 25% plastic and 5% other materials.* Using plasma torches in an oxygen-starved environment, very high temperatures of 5000°C+ decompose the waste into very simple molecules, which emerge as a syngas composed of only hydrogen, carbon monoxide and a small amount of carbon dioxide. In contrast to other processes, the plasma gasification process emits fewer pollutant gases and produces no bottom ash. Harmful particles, such as dioxins, are destroyed in the process. The main by-product – inert vitrified slag – can be used as road-building aggregate, reducing demand for gravel extraction. As well as these advantages, plasma gasification has potentially the lowest CO$_2$ impact per tonne of waste of any of these technologies.
The syngas from plasma gasification can be used to fuel an internal combustion engine or a gas turbine, enabling the generation of electricity from a greater quantity of waste than can be achieved by a conventional steam cycle. Taking a further innovative step forward, the new UK venture Waste2Tricity will combine plasma gasification with new generation fuel cells, potentially increasing the net output of electricity by 60% over an internal combustion engine generation system, or by 130% over a steam cycle system.* So why, then, has the conversion of MSW into power yet to be adopted on a large scale? Up until now, landfill tipping fees have been so low that it has been cheaper to simply bury waste. Due to high initial capital costs, there have also traditionally been concerns over low efficiency, emissions and waste from MSW incineration or from gasification systems. This said, however, proper economic consideration needs to be taken to ensure that plasma gasification plants established for municipalities do not end up costing them more than landfill tipping fees. **A strong business case** An exclusive UK sales agreement with Alter NRG, and exclusive rights to the new generation alkaline fuel cells under development by AFC Energy, give Waste2Tricity a distinct advantage in the UK market. Alter NRG's technology is ideal for producing a hydrogen stream, from waste and other low-value feedstocks, which is suitable as fuel for AFC's technology. AFC has completed initial field trials at AkzoNobel's chlor-alkali plant in Bitterfeld, Germany. Alkaline fuel cell technology is the best-known fuel cell technology today, and is even used by NASA. Waste2Tricity's use of alkaline fuel cells is projected to increase the net output of electricity by a maximum of 60% over an ICE (internal combustion engine) and by around 35% over a steam turbine. This will result in the most efficient and economic means of converting scrap carbon into energy, generating 2100 kWh of electricity from every tonne of MSW currently sent to landfill. Waste2Tricity estimates that the cost of generating electricity could be less than 3p per kWh (at today's prices). **Challenges ahead for plasma gasification** All over the world, plasma gasification technology could make the disposal of waste commercially viable and increase the amount of renewable electricity. Local laws and planning issues are the main hurdles to be overcome and are often influenced by the capacity of the proposed plant. However, there are ways to make developments more acceptable, such as building them on existing landfill sites and utilizing existing infrastructure such as roads built for waste transport. It all depends, however, on whether local authorities have the budget to convert existing incineration plants into WTE plants using technologies such as plasma gasification and fuel cells. In a global financial climate emerging from recession, funding is a major hurdle. But WTE using plasma gasification can be commercially viable and profitable, especially when compared to other renewables such as wind, hydro/wave, geothermal and solar/photovoltaic, which are locked in competition for subsidies and venture capital. These renewables cost around £3.5 million per megawatt (MW) of capacity, which makes them less profitable than plasma gasification.
The issues with funding and new technologies revolve around the conservative position taken by the waste companies, which is only now starting to 'loosen up' as the reality of a low carbon economy becomes apparent. Traditionally, waste companies in countries such as the UK have been wedded to landfill and mass burn as service offerings and, as a consequence, are reluctant to invest in innovation until it is well proven. Landfill, on the other hand, is a proven solution, taking just about anything. This resistance to the adoption of new technologies is reflected in the bankability of any emerging technology, a situation exacerbated enormously by the liquidity crisis. Although venture capital equity funding remains firm for the 'right' projects – cash-generating, asset-backed opportunities – non-revenue, not yet fully demonstrated technologies have received almost zero investment. Government investment schemes have been largely ineffectual, either requiring matching funds or imposing other hurdles that make the monies inaccessible. Increasing levels of recycling and composting could reduce the carbon content of waste, reducing the energy recoverable per tonne. But PG works hand in hand with the physical segregation of specialist material streams for recycling, because it can accept suitably-dried material containing all forms of carbon-based content. Due to cross-contamination, much useful material is rendered valueless for recycling, so the advice remains to reduce, re-use, recycle and compost as much as possible; whatever waste is left over can and should be utilized for WTE. The rise of PG will help educate the public about WTE and SCIE technologies, which, combined with strict emissions monitoring required by legislation, allow us to recover energy from waste, reduce landfill and lower CO$_2$ emissions. If we are ever to fully harness the benefits of WTE, governments must now accept the fact that energy and waste are connected. **Peter Jones** is a director of Waste2Tricity *e-mail: email@example.com* ■ This article is on-line. Please visit [www.waste-management-world.com](http://www.waste-management-world.com)
Back-translation for discovering distant protein homologies Marta Gîrdea, Laurent Noé, and Gregory Kucherov* INRIA Lille - Nord Europe, LIFL/CNRS, Université Lille 1, 59655 Villeneuve d'Ascq, France * On leave at the J.-V. Poncelet Lab, Moscow, Russia Abstract. Frameshift mutations in protein-coding DNA sequences produce a drastic change in the resulting protein sequence, which prevents classic protein alignment methods from revealing the proteins' common origin. Moreover, when a large number of substitutions are additionally involved in the divergence, homology detection becomes difficult even at the DNA level. To cope with this situation, we propose a novel method to infer distant homology relations between two proteins, which accounts for frameshift and point mutations that may have affected the coding sequences. We design a dynamic programming alignment algorithm over memory-efficient graph representations of the complete set of putative DNA sequences of each protein, with the goal of determining the two putative DNA sequences which have the best scoring alignment under a powerful scoring system designed to reflect the most probable evolutionary process. This allows us to uncover evolutionary information that is not captured by traditional alignment methods, which is confirmed by biologically significant examples. 1 Introduction In protein-coding DNA sequences, frameshift mutations (insertions or deletions of one or more bases) can alter the translation reading frame, affecting all the amino acids encoded from that point forward. Thus, frameshifts produce a drastic change in the resulting protein sequence, preventing any similarity from being visible at the amino acid level. When the coding DNA sequence is relatively well conserved, the similarity remains detectable at the DNA level, by DNA sequence alignment, as reported in several papers, including [1,2,3,4]. However, the divergence often involves additional base substitutions. It has been shown [5,6,7] that, in coding DNA, there is a base compositional bias among codon positions, which does not apply when the translation reading frame is changed. Hence, after a reading frame change, a coding sequence is likely to undergo base substitutions leading to a composition that complies with this bias. Amongst these substitutions, synonymous mutations (usually occurring on the third position of the codon) are more likely to be accepted by natural selection, since they are silent with respect to the gene's product. If, over a long evolutionary time, a large number of codons in one or both sequences are affected by these changes, the sequences may be altered to such an extent that the common origin becomes difficult to observe by direct DNA comparison. In this paper, we address the problem of finding distant protein homologies, in particular when the primary cause of the divergence is a frameshift. We achieve this by computing the best alignment of DNA sequences that encode the target proteins.
This approach relies on the idea that synonymous mutations cause mismatches in DNA alignments that can be avoided when all the sequences with the same translation are explored, instead of just the known coding DNA sequences. This allows the algorithm to search for an alignment by dealing only with non-synonymous mutations and gaps. We designed and implemented an efficient method for aligning putative coding DNA sequences, which builds expressive alignments between hypothetical nucleotide sequences that can provide some information about the common ancestral sequence, if such a sequence exists. We perform the analysis on memory-efficient graph representations of the complete set of putative DNA sequences for each protein, described in Section 3.1. The proposed method, presented in Section 3.2, consists of a dynamic programming alignment algorithm that computes the two putative DNA sequences that have the best scoring alignment under an appropriate scoring system (Section 3.3) designed to reflect the actual evolution process from a codon-oriented perspective. While the idea of finding protein relations by frameshifted DNA alignments is not entirely new, as we will show in Section 2 in a brief related work overview, Section 4 – presenting tests performed on artificial data – demonstrates the efficiency of our scoring system for distant sequences. Furthermore, we validate our method on several pairs of sequences known to be encoded by overlapping genes, and on some published examples of frameshifts resulting in functional proteins. We briefly present these experiments in Section 5, along with a study of a protein family whose members present high dissimilarity on a certain interval. The paper is concluded in Section 6. 2 Related Work The idea of using knowledge about coding DNA when aligning amino acid sequences has been explored in several papers. A non-statistical approach for analyzing the homology and the "genetic semi-homology" in protein sequences was presented in [8,9]. Instead of using a statistically computed scoring matrix, this approach scores amino acid similarities according to the complexity of the substitution process at the DNA level, depending on the number and type (transition/transversion) of nucleotide changes that are necessary for replacing one amino acid by the other. This ensures a differentiated treatment of amino acid substitutions at different positions of the protein sequence, thus avoiding possible rough approximations resulting from scoring them equally, based on a classic scoring matrix. The main drawback of this approach is that it was not designed to cope with frameshift mutations. Regarding *frameshift mutation discovery*, many studies [1,2,3,4] preferred the plain BLAST [10,11] alignment approach: BLASTN on DNA and mRNA, or BLASTX on mRNA and proteins, applicable only when the DNA sequences are sufficiently similar. BLASTX programs, although capable of insightful results thanks to the six-frame translations, have the limitation of not being able to transparently manage frameshifts that occur inside the sequence, for example by reconstructing an alignment from pieces obtained on different reading frames. An interesting approach for *handling frameshifts at the protein level* was developed in [12]. Several substitution matrices were designed for aligning amino acids encoded on different reading frames, based on nucleotide pair matches between the respective codons. This idea has the advantage of being easy to use with any classic protein alignment tool.
However, it lacks flexibility in gap positioning. On the subject of *aligning coding DNA in the presence of frameshift errors*, some related ideas were presented in [13,14]. The author proposed to search for protein homologies by aligning their *sequence graphs* (data structures similar to the ones we describe in Section 3.1). The algorithm tries to align pairs of codons, possibly incomplete, since gaps of size 1 or 2 can be inserted at arbitrary positions. The score for aligning two such codons is computed as the maximum substitution score of two amino acids that can be obtained by translating them. This results in a complex, time-costly dynamic programming method that basically explores all the possible translations. In Section 3.2, we present an algorithm addressing the same problem, more efficient since it aligns symbols, not codons, and more flexible with respect to scoring functions. Additionally, we propose to use a scoring system relying on codon evolution rather than amino acid translations, since we believe that, in frameshift mutation scenarios, the information provided by DNA sequence dynamics is more relevant than amino acid similarities. 3 Our approach to distant protein relation discovery The problem of inferring homologies between distantly related proteins, whose divergence is the result of frameshifts and point mutations, is approached in this paper by determining the best pairwise alignment between two DNA sequences that encode the proteins. Given two proteins $P_A$ and $P_B$, the objective is to find a pair of DNA sequences, $D_A$ and $D_B$, such that $\text{translation}(D_A) = P_A$ and $\text{translation}(D_B) = P_B$, which produce the best pairwise alignment under a given scoring system. The alignment algorithm (described in Section 3.2) incorporates a gap penalty that limits the number of frameshifts allowed in an alignment, to comply with the observed frequency of frameshifts in a coding sequence's evolution. The scoring system (Section 3.3) is based on possible mutational patterns of the sequences. This leads to reducing the false positive rate and focusing on alignments that are more likely to be biologically significant. 3.1 Data structures An explicit enumeration and pairwise alignment of all the putative DNA sequences is not an option, since their number increases exponentially with the protein's length\(^1\). Therefore, we represent the protein's "back-translation" (the set of possible source DNAs) as a directed acyclic graph, whose size depends linearly on the length of the protein, and where a path represents one putative sequence. As illustrated in Figure 1(a), the graph is organized as a sequence of length \(3n\), where \(n\) is the length of the protein sequence. At each position \(i\) in the graph, there is a group of nodes, each representing a possible nucleotide that can appear at position \(i\) in at least one of the putative coding sequences. Two nodes at consecutive positions are linked by arcs if and only if they are either consecutive nucleotides of the same codon, or they are respectively the third and the first base of two consecutive codons. No other arcs exist in the graph. Note that in the implementation, the number of nodes is reduced by using the IUPAC nucleotide codes. If the amino acids composing a protein sequence are non-ambiguous, only 4 extra nucleotide symbols – \(R\), \(Y\), \(H\) and \(N\) – are necessary for their back-translation. \(^1\) With the exception of \(M\) and \(W\), which have a single corresponding codon, all amino acids are encoded by 2, 3, 4 or 6 codons.
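To make the construction concrete, the following Python sketch builds such a condensed back-translation DAG. It is an illustration, not the authors' implementation: the IUPAC codon-pattern table is truncated to a few amino acids, and the node/arc representation is one plausible encoding of the structure described above.

```python
# Minimal sketch of the condensed back-translation DAG (Section 3.1).
# IUPAC_CODONS is a truncated, illustrative codon-pattern table; a real
# implementation would cover all 20 amino acids.
IUPAC_CODONS = {
    "M": ["ATG"], "W": ["TGG"], "H": ["CAY"], "K": ["AAR"], "G": ["GGN"],
    "L": ["CTN", "TTR"],   # 6 codons -> two chains with distinct prefixes
    "R": ["CGN", "AGR"],
    "S": ["TCN", "AGY"],
}

def back_translation_graph(protein):
    """Return (nodes, arcs). Each codon pattern contributes its own
    3-node chain; consecutive codons are joined third-base -> first-base.
    A node is (position, chain_tag, IUPAC_symbol); a path spells one
    putative coding sequence."""
    nodes, arcs = [], set()
    prev_last = []                      # exit nodes of the previous codon
    for k, aa in enumerate(protein):
        curr_last = []
        for tag, pat in enumerate(IUPAC_CODONS[aa]):
            chain = [(3 * k + i, tag, pat[i]) for i in range(3)]
            nodes.extend(chain)
            arcs.update(zip(chain, chain[1:]))       # arcs inside the codon
            arcs.update((last, chain[0]) for last in prev_last)
            curr_last.append(chain[2])
        prev_last = curr_last
    return nodes, arcs

nodes, arcs = back_translation_graph("MLH")
# Positions 3..5 hold the two chains C-T-N and T-T-R encoding leucine.
print(sorted(n for n in nodes if 3 <= n[0] <= 5))
```

Keeping each 6-codon amino acid as two separate chains (rather than merging nodes by symbol) prevents invalid cross-over paths such as T-T-N for leucine, which is consistent with the ramifications mentioned next.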
In this condensed representation, the number of ramifications in the graph is substantially reduced, as illustrated by Figure 1. More precisely, the only amino acids with ramifications in their back-translation are the amino acids \(R\), \(L\) and \(S\), each encoded by 6 codons with different prefixes. 3.2 Alignment algorithm We use a dynamic programming method, similar to the Smith-Waterman algorithm, extended to the data structures described in Section 3.1 and equipped with gap-related restrictions. Given the input graphs \(G_A\) and \(G_B\), obtained by back-translating proteins \(P_A\) and \(P_B\), the algorithm finds the best scoring local alignment between two DNA sequences comprised in the back-translation graphs (illustrated in Figure 2). The alignment is built by filling each entry \(M[i, j, (\alpha_A, \alpha_B)]\) of a dynamic programming matrix \(M\), where \(i\) and \(j\) are positions of the first and second graph respectively, and \((\alpha_A, \alpha_B)\) is a pair of nodes that can be found in \(G_A\) at position \(i\), and in \(G_B\) at position \(j\), respectively. An example is given in Figure 3. The dynamic programming algorithm begins with a classic local alignment initialization (0 at the top and left borders), followed by the recursion step described in equation (1). The partial alignment score in each cell \(M[i, j, (\alpha_A, \alpha_B)]\) is computed as the maximum of 6 types of values: (a) 0 (similarly to the classic Smith-Waterman algorithm, only non-negative scores are considered for local alignments); (b) the substitution score of symbols \((\alpha_A, \alpha_B)\), denoted \(score(\alpha_A, \alpha_B)\), added to the score of the best partial alignment ending in \(M[i - 1, j - 1]\), provided that the partially aligned paths contain \(\alpha_A\) on position \(i\) and \(\alpha_B\) on position \(j\) respectively; this condition is ensured by restricting the entries of \(M[i-1, j-1]\) to those labeled with symbols that precede \(\alpha_A\) and \(\alpha_B\) in the graphs; (c) the cost \(singleGapPenalty\) of a frameshift (gap of size 1, or extension of a gap of size 1) in the first sequence, added to the score of the best partial alignment that ends in a cell \(M[i, j-1, (\alpha_A, \beta_B)]\), provided that \(\beta_B\) precedes \(\alpha_B\) in the second graph; this case is considered only if the number of allowed frameshifts on the current path is not exceeded, or a gap of size 1 is extended; (d) the cost of a frameshift in the second sequence, added to a partial alignment score defined as above; (e) the cost \(tripleGapPenalty\) of removing an entire codon from the first sequence, added to the score of the best partial alignment ending in a cell \(M[i, j-3, (\alpha_A, \beta_B)]\); (f) the cost of removing an entire codon from the second sequence, added to the score of the best partial alignment ending in a cell \(M[i-3, j, (\beta_A, \alpha_B)]\). We adopted a non-monotonic gap penalty function, which favors insertions and deletions of full codons, and does not allow a large number of frameshifts – very rare events, usually eliminated by natural selection. As can be seen in equation (1), two particular kinds of gaps are considered: i) **frameshifts** – gaps of size 1 or 2, with a high penalty, whose number in a local alignment can be limited, and ii) **codon skips** – gaps of size 3, which correspond to the insertion or deletion of a whole codon.
\[
M[i, j, (\alpha_A, \alpha_B)] = \max \begin{cases}
0 & \text{(a)} \\
M[i - 1, j - 1, (\beta_A, \beta_B)] + score(\alpha_A, \alpha_B), & \beta_k \in pred(\alpha_k) \quad \text{(b)} \\
M[i, j - 1, (\alpha_A, \beta_B)] + singleGapPenalty, & \beta_B \in pred(\alpha_B) \quad \text{(c)} \\
M[i - 1, j, (\beta_A, \alpha_B)] + singleGapPenalty, & \beta_A \in pred(\alpha_A) \quad \text{(d)} \\
M[i, j - 3, (\alpha_A, \beta_B)] + tripleGapPenalty, & j \geq 3 \quad \text{(e)} \\
M[i - 3, j, (\beta_A, \alpha_B)] + tripleGapPenalty, & i \geq 3 \quad \text{(f)}
\end{cases}
\] (1)

### 3.3 Translation-dependent scoring function

In this section, we present a new translation-dependent scoring system suitable for our alignment algorithm. The scoring scheme we designed incorporates information about possible mutational patterns for coding sequences, based on a codon substitution model, with the aim of filtering out alignments between sequences that are unlikely to have common origins.

Mutation rates have been shown to vary within genomes, under the influence of several factors, including neighboring bases [15]. Consequently, a model where all base mismatches are equally penalized is oversimplified and ignores possibly precious information about the context of the substitution. With the aim of retracing the sequence's evolution and revealing which base substitutions are more likely to occur within a given codon, our scoring system targets pairs of triplets \((\alpha, p, a)\), where \(\alpha\) is a nucleotide, \(p\) is its position in the codon, and \(a\) is the amino acid encoded by that codon, thus differentiating various contexts of a substitution. There are 99 valid triplets out of the total of 240 hypothetical combinations. Pairwise alignment scores are computed for all possible pairs of valid triplets \((t_1, t_2) = ((\alpha_1, p_1, a_1), (\alpha_2, p_2, a_2))\) as a classic log-odds ratio:

\[
score(t_1, t_2) = \lambda \log \frac{f_{t_1 t_2}}{b_{t_1 t_2}}
\] (2)

where \(f_{t_1 t_2}\) is the frequency of the \(t_1 \leftrightarrow t_2\) substitution in related sequences, and \(b_{t_1 t_2} = p(t_1) p(t_2)\) is the background probability.

In order to obtain the foreground probabilities \(f_{t_1 t_2}\), we consider the following scenario: two proteins are encoded on the same DNA sequence, on different reading frames; at some point, the sequence was duplicated and the two copies diverged independently; we assume that the two coding sequences undergo, in their independent evolution, synonymous and non-synonymous point mutations, or full codon insertions and removals. The scarcity of available real data fitting this hypothesis does not allow a classical, statistical computation of the foreground and background probabilities. Therefore, instead of doing statistics on real data directly, we rely on codon frequency tables and codon substitution models. We assume that codon substitutions in our scenario can be modeled by the Markov model presented in [16]\footnote{Another, more advanced codon substitution model, targeting sequences with overlapping reading frames, is proposed and discussed in [17].
It does not fit our scenario, however, because it is designed for overlapping reading frames, where a mutation affects both translated sequences, whereas in our case the sequences become independent at some point and undergo mutations independently.} which specifies the relative instantaneous substitution rate from codon $i$ to codon $j$ as:

$$Q_{ij} = \begin{cases}
0 & \text{if } i \text{ or } j \text{ is a stop codon, or} \\
& \text{if } i \rightarrow j \text{ requires more than 1 nucleotide substitution}, \\
\pi_j & \text{if } i \rightarrow j \text{ is a synonymous transversion}, \\
\pi_j \kappa & \text{if } i \rightarrow j \text{ is a synonymous transition}, \\
\pi_j \omega & \text{if } i \rightarrow j \text{ is a nonsynonymous transversion}, \\
\pi_j \kappa \omega & \text{if } i \rightarrow j \text{ is a nonsynonymous transition},
\end{cases}$$ (3)

for all $i \neq j$. Here, the parameter $\omega$ represents the nonsynonymous/synonymous rate ratio, $\kappa$ the transition/transversion rate ratio, and $\pi_j$ the equilibrium frequency of codon $j$. As in all Markov models of sequence evolution, absolute rates are found by normalizing the relative rates to a mean rate of 1 at equilibrium, that is, by enforcing $\sum_i \sum_{j \neq i} \pi_i Q_{ij} = 1$, and the instantaneous rate matrix $Q$ is completed by defining $Q_{ii} = -\sum_{j \neq i} Q_{ij}$, giving a form in which the transition probability matrix is calculated as $P(\theta) = e^{\theta Q}$ [18]. Evolutionary times $\theta$ are measured in expected number of nucleotide substitutions per codon.

With this codon substitution model, $f_{t_i t_j}$ can be deduced in several steps. Basically, we first need to identify all pairs of codons with a common subsequence that have a perfect semi-global alignment (for instance, codons $CAT$ and $ATG$ satisfy this condition, having the common subsequence $AT$; this example is further explained below). We then assume that the codons from each pair undergo independent evolution, according to the codon substitution model. For the resulting codons, we compute, based on all possible original codon pairs, $p((\alpha_i,p_i,c_i),(\alpha_j,p_j,c_j))$ – the probability that nucleotide $\alpha_i$, situated on position $p_i$ of codon $c_i$, and nucleotide $\alpha_j$, situated on position $p_j$ of codon $c_j$, have a common origin (equation (5)). From these, we can immediately compute, as shown by equation (6), $p((\alpha_i,p_i,a_i),(\alpha_j,p_j,a_j))$, corresponding in fact to the foreground probabilities $f_{t_i t_j}$, where $t_i = (\alpha_i,p_i,a_i)$ and $t_j = (\alpha_j,p_j,a_j)$.

In the following, $p(c_1 \xrightarrow{\theta} c_2)$ stands for the probability of the event \textit{codon $c_1$ mutates into codon $c_2$ in the evolutionary time $\theta$}, and is given by $P_{c_1,c_2}(\theta)$. $c_1[\text{interval}_1] \equiv c_2[\text{interval}_2]$ states that codon $c_1$ restricted to the positions given by $\text{interval}_1$ is a sequence identical to $c_2$ restricted to $\text{interval}_2$. This is equivalent to having a word $w$ obtained by "merging" the two codons. For instance, if $c_1 = CAT$ and $c_2 = ATG$, with their common substring placed in $\text{interval}_1 = [2..3]$ and $\text{interval}_2 = [1..2]$ respectively, $w$ is $CATG$. Finally, $p(c_1[\text{interval}_1] \equiv c_2[\text{interval}_2])$ is the probability of having $c_1$ and $c_2$ in the relation described above, which we compute as the probability of the word $w$ obtained by "merging" the two codons.
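As a side note, the step from the rate matrix \(Q\) to the finite-time transition probabilities \(P(\theta) = e^{\theta Q}\) can be sketched numerically as follows; a toy two-letter alphabet stands in for the 61 sense codons, and NumPy/SciPy are assumed to be available:

```python
# Toy sketch: normalize a rate matrix Q and compute P(theta) = exp(theta*Q).
# A two-letter alphabet stands in for the 61 sense codons.

import numpy as np
from scipy.linalg import expm

pi = np.array([0.7, 0.3])              # equilibrium frequencies (toy values)
Q = np.array([[0.0, pi[1]],
              [pi[0], 0.0]])           # relative rates Q_ij = pi_j (toy model)
Q[np.diag_indices(2)] = -Q.sum(axis=1) # Q_ii = -sum_{j != i} Q_ij
Q /= -(pi * np.diag(Q)).sum()          # mean rate of 1 at equilibrium

P = expm(0.5 * Q)                      # theta = 0.5 expected substitutions
print(P.sum(axis=1))                   # each row sums to 1: [1. 1.]
```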
The word probability \(p(w)\) should be symmetric, it should depend on the codon distribution, and the probabilities of all words $w$ of a given length should sum to 1. However, since we consider the case where the same DNA sequence is translated on two different reading frames, one of the two translated sequences would have an atypical composition. Consequently, the probability of a word $w$ is computed as if the sequence had the known codon composition when translated on the reading frame imposed by the first codon, or on the one imposed by the second. This hypothesis can be formalized as:

$$p(w) = p(w \text{ on } rf_1 \text{ OR } w \text{ on } rf_2) = p^{rf_1}(w) + p^{rf_2}(w) - p^{rf_1}(w) \cdot p^{rf_2}(w)$$ (4)

where $p^{rf_1}(w)$ and $p^{rf_2}(w)$ are the probabilities of the word $w$ in the reading frame imposed by the position of the first and second codon, respectively. Each of these is computed as the product of the probabilities of the codons and codon pieces that compose the word $w$ in the established reading frame. In the previous example, the probabilities of $w = CATG$ in the first and second reading frame are:

$$p^{rf_1}(CATG) = p(CAT) \cdot p(G**) = p(CAT) \cdot \sum_{c:\ c \text{ starts with } G} p(c)$$

$$p^{rf_2}(CATG) = p(**C) \cdot p(ATG) = \sum_{c:\ c \text{ ends with } C} p(c) \cdot p(ATG)$$

The values of $p((\alpha_i, p_i, c_i), (\alpha_j, p_j, c_j))$ are computed as:

$$\sum_{\substack{c'_i, c'_j:\ c'_i[\text{interval}_i] \equiv c'_j[\text{interval}_j], \\ p_i \in \text{interval}_i,\ p_j \in \text{interval}_j}} p\left(c'_i[\text{interval}_i] \equiv c'_j[\text{interval}_j]\right) \cdot p(c'_i \xrightarrow{\theta} c_i) \cdot p(c'_j \xrightarrow{\theta} c_j)$$ (5)

from which obtaining the **foreground probabilities** is straightforward:

$$f_{t_i t_j} = p((\alpha_i, p_i, a_i), (\alpha_j, p_j, a_j)) = \sum_{c_i \text{ encodes } a_i,\ c_j \text{ encodes } a_j} p((\alpha_i, p_i, c_i), (\alpha_j, p_j, c_j))$$ (6)

The **background probabilities** of $(t_i, t_j)$, $b_{t_i t_j}$, can simply be expressed as the probability of the two symbols appearing independently in the sequences:

$$b_{t_i t_j} = b_{(\alpha_i,p_i,a_i),(\alpha_j,p_j,a_j)} = \sum_{c_i \text{ encodes } a_i,\ c_j \text{ encodes } a_j} \pi_{c_i} \pi_{c_j}$$ (7)

**Substitution matrix for ambiguous symbols** From the matrices built as explained above, the versions that use IUPAC ambiguity codes for nucleotides (as proposed in the final paragraph of Section 3.1) can be computed: the score of pairing two ambiguous symbols is the maximum over all substitution scores for all pairs of nucleotides from the respective sets.

**Score evaluation** The score significance is estimated according to the Gumbel distribution, whose parameters $\lambda$ and $K$ are computed with the method described in [19,20]. Since the forward alignment and the reverse complementary alignment are two independent cases with different score distributions, two parameter pairs, $\lambda_{fw}, K_{fw}$ and $\lambda_{rc}, K_{rc}$, are computed and used in practice.

## 4 Validation

To validate the translation-dependent scoring system designed in the previous section, we tested it on an artificial data set consisting of 96 pairs of protein sequences of average length 300. Each pair was obtained by translating a randomly generated DNA sequence on two different reading frames. Both sequences in each pair were then mutated independently, according to codon mutation probability matrices corresponding to each of the evolutionary times 0.01, 0.1, 0.3, 0.5, 0.7, 1.0, 1.5, 2.0 (measured in average number of mutations per codon).
To this data set we applied four variants of alignment algorithms: i) classic alignment of DNA sequences using classic base substitution scores and affine gap penalties; ii) classic alignment of DNA sequences using the translation-dependent scoring scheme designed in Section 3.3; iii) alignment of back-translation graphs (Section 3.2) using classic base substitution scores and affine gap penalties; iv) alignment of back-translation graphs using the translation-dependent scoring scheme.

For the tests involving translation-dependent scores, we used scoring functions corresponding to evolutionary times from 0.30 to 1.00. Table 1 summarizes the orders of magnitude of the e-values of the scores obtained with each setup when aligning sequence pairs at various evolutionary distances. While all variants perform well on highly similar sequences, the translation-dependent scores clearly help the algorithm build significant alignments between sequences that have undergone substantial change.

| Scores (*) | Input type | 0.01 | 0.10 | 0.30 | 0.50 | 0.70 | 1.00 | 1.50 | 2.00 |
|------------|------------|------|------|------|------|------|------|------|------|
| TDS 0.30 | graphs | $10^{-17.8}$ | $10^{-17.1}$ | $10^{-14.9}$ | $10^{-12.1}$ | $10^{-10.9}$ | $10^{-8.8}$ | $10^{-6.1}$ | $10^{-3.1}$ |
| | known DNAs | $10^{-15.2}$ | $10^{-13.6}$ | $10^{-11.0}$ | $10^{-7.6}$ | $10^{-5.4}$ | $10^{-2.1}$ | $10^{-6}$ | $1.00$ |
| TDS 0.50 | graphs | $10^{-16.6}$ | $10^{-15.6}$ | $10^{-14.0}$ | $10^{-11.8}$ | $10^{-10.7}$ | $10^{-8.5}$ | $10^{-5.5}$ | $10^{-3.4}$ |
| | known DNAs | $10^{-14.0}$ | $10^{-12.8}$ | $10^{-10.5}$ | $10^{-7.5}$ | $10^{-6.1}$ | $10^{-3.4}$ | $10^{-6}$ | $10^{-1}$ |
| TDS 0.70 | graphs | $10^{-15.8}$ | $10^{-14.5}$ | $10^{-13.0}$ | $10^{-11.5}$ | $10^{-10.2}$ | $10^{-8.3}$ | $10^{-5.6}$ | $10^{-3.1}$ |
| | known DNAs | $10^{-13.0}$ | $10^{-12.0}$ | $10^{-10.1}$ | $10^{-7.6}$ | $10^{-6.4}$ | $10^{-4.2}$ | $10^{-1.5}$ | $10^{-7}$ |
| TDS 1.00 | graphs | $10^{-13.7}$ | $10^{-13.1}$ | $10^{-11.8}$ | $10^{-10.4}$ | $10^{-9.7}$ | $10^{-8.0}$ | $10^{-5.9}$ | $10^{-3.4}$ |
| | known DNAs | $10^{-11.7}$ | $10^{-11.0}$ | $10^{-9.3}$ | $10^{-7.0}$ | $10^{-6.5}$ | $10^{-4.6}$ | $10^{-2.1}$ | $10^{-8}$ |
| classic scores | graphs, known DNAs | $10^{-12}$ | $10^{-24}$ | $10^{-12}$ | $10^{-11}$ | $10^{-7}$ | $10^{-5}$ | $10^{-3}$ | $10^{-2}$ |

*Table 1.* Order of magnitude of the e-values of the scores obtained by aligning artificially diverged pairs of proteins resulting from the translation of the same ancestral sequence on two reading frames. The last eight columns give the evolutionary distance between the aligned inputs. (*) $TDS <evolutionary\ distance>$ = translation-dependent scores; classic substitution scores: match = 3, transversion = -4, transition = -2.

The resulting alignments reveal that, even after many mutations, the translation-dependent scores manage to recover large parts of the original shared sequence by correctly aligning most positions. On the other hand, with classic match/mismatch scores, the algorithm usually fails to find these common zones. Moreover, due to the large number of mismatches, the alignment has a low score, comparable to scores that can be obtained for randomly chosen sequences. This makes it difficult to establish whether the alignment is biologically meaningful or was obtained by chance. The translation-dependent scores solve this issue through uneven substitution penalties, in accordance with the codon mutation models.
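For reference, an e-value under the Gumbel assumption relates to a raw score $S$ through the Karlin–Altschul form $E = K \cdot m \cdot n \cdot e^{-\lambda S}$. A minimal sketch, with illustrative parameter values rather than the ones fitted by the method of [19,20]:

```python
# E-value of a local alignment score under Gumbel statistics.
# lam and K below are illustrative, not the fitted parameters.

import math

def e_value(score, m, n, lam=0.27, K=0.04):
    return K * m * n * math.exp(-lam * score)

# two inputs of 900 nt (300 codons each), alignment score 200:
print(f"{e_value(200, 900, 900):.2e}")
```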
We conclude that the use of translation-dependent scores makes the algorithm more robust, able to detect common origins even after the sequences have undergone many modifications, and also able to filter out alignments where the nucleotide pairs match by pure chance rather than through evolutionary relatedness.

## 5 Experimental results

### 5.1 Tests on known overlapping and frameshifted genes

We tested the method on pairs of proteins known to be encoded by overlapping genes in viral genomes (phage ΦX174 and Influenza A) and in E. coli plasmids, as well as on the newly identified overlapping genes \( yaaW \) and \( htgA \) from E. coli K12 [21]. In all cases, we obtained perfect identification of the gene overlaps, both with simple substitution scores and with translation-dependent scoring matrices corresponding to low evolutionary distances (at most 1 mutation per codon). Translation-dependent scoring matrices of higher evolutionary distances favor, in some (rare) cases, substitutions instead of matches within the alignment. This is a natural consequence of increasing the codon's chance to mutate, and it illustrates the importance of choosing a score matrix corresponding to the real evolutionary distance. Our method was also able to detect, directly on the protein sequences, the frameshifts resulting in functional proteins reported in [1,2,3,4].

### 5.2 New divergence scenarios for orthologous proteins

In this section we discuss the application of our method to the FMR1NB (Fragile X mental retardation 1 neighbor protein) family. The Ensembl database [22] provides 23 members of this family, from mammalian species including human, mouse, dog and cow. Their multiple alignment, provided by Ensembl, shows high dissimilarity on the first part (approximately 100 amino acids) and good conservation on the rest of the sequence. We applied our alignment algorithm to proteins from several organisms for which the complete sequence is available.

We performed our experiments with translation-dependent scoring matrices corresponding to 0.3, 0.5 and 0.7 mutations per codon. Given that, in our scenario (presented in Section 3.3), the divergence applies to two reading frames, this implies an overall mutation rate of 0.6, 1.0 and 1.4 mutations per codon, respectively. Thus, the mutation rate per base reflected by our scores is less than 0.5, which is approximately the nucleotide substitution rate for mouse relative to human [23]. The number of allowed frameshifts was limited to 3. The gap penalties were set in all cases to -20 for codon indels, -20 for size-1 gaps and -5 for the extension of size-1 gaps (size-1 and size-2 gaps correspond to frameshifts). These choices were made so that the penalty for a codon indel is higher than the average penalty for 3 substitutions.

Figure 4 presents a fragment of the alignment obtained on the FMR1NB proteins of human (gene ID ENSG00000176988) and mouse (gene ID ENSMUSG00000062170). The algorithm finds a frameshift near the 100th amino acid, managing to align the initial parts of the proteins at the DNA level. Similar frameshifted alignments are obtained for human vs. cow and human vs. dog, while alignments between proteins of primates do not contain frameshifts. The consistency of the frameshift position in these alignments supports the hypothesis of a frameshift event that might have occurred in the primate lineage. If confirmed, this frameshift would have modified the first topological domain and the first transmembrane domain of the product protein.
Interestingly, the FMR1NB gene lies next to the Fragile X mental retardation 1 gene (FMR1), involved in the corresponding genetic disease [24].

## 6 Conclusions

In this paper, we addressed the problem of finding distant protein homologies, in particular those affected by frameshift events, from a codon evolution perspective. We search for common protein origins by implicitly aligning all of their putative coding DNA sequences, stored in efficient data structures called back-translation graphs. Our approach relies on a dynamic programming alignment algorithm for these graphs, which involves a non-monotonic gap penalty that handles frameshifts and full codon indels differently. We designed a powerful translation-dependent scoring function for nucleotide pairs, based on codon substitution models, whose purpose is to reflect the expected dynamics of coding DNA sequences. The method was shown to perform better than classic alignment on artificial data obtained by independently mutating, according to a codon substitution model, coding sequences translated with a frameshift. Moreover, it successfully detected published frameshift mutation cases resulting in functional proteins. We then described an experiment involving homologous mammalian proteins that show little conservation at the amino acid level over a large region, and provided possible frameshifted alignments obtained with our method that may explain the divergence. As illustrated by this example, the proposed method should make it possible to better explain the high divergence of homologous proteins and to help establish new homology relations between genes of unknown origins. An implementation of our method is available at http://bioinfo.lifl.fr/path/.

References

1. Raes, J., Van de Peer, Y.: Functional divergence of proteins through frameshift mutations. Trends in Genetics 21(8) (2005) 428–431
2. Okamura, K. et al.: Frequent appearance of novel protein-coding sequences by frameshift translation. Genomics 88(6) (2006) 690–697
3. Harrison, P., Yu, Z.: Frame disruptions in human mRNA transcripts, and their relationship with splicing and protein structures. BMC Genomics 8 (2007) 371
4. Hahn, Y., Lee, B.: Identification of nine human-specific frameshift mutations by comparative analysis of the human and the chimpanzee genome sequences. Bioinformatics 21(Suppl 1) (2005) i186–i194
5. Grantham, R., Gautier, C., Gouy, M., Mercier, R., Pave, A.: Codon catalog usage and the genome hypothesis. Nucleic Acids Research 8 (1980) 49–62
6. Shepherd, J.C.: Method to determine the reading frame of a protein from the purine/pyrimidine genome sequence and its possible evolutionary justification. Proceedings of the National Academy of Sciences USA 78 (1981) 1596–1600
7. Guigo, R.: DNA composition, codon usage and exon prediction. Nucleic protein databases (1999) 53–80
8. Leluk, J.: A new algorithm for analysis of the homology in protein primary structure. Computers and Chemistry 22(1) (1998) 123–131
9. Leluk, J.: A non-statistical approach to protein mutational variability. BioSystems 56(2-3) (2000) 83–93
10. Altschul, S. et al.: Basic local alignment search tool. JMB 215(3) (1990) 403–410
11. Altschul, S. et al.: Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res 25(17) (1997) 3389–3402
12. Pellegrini, M., Yeates, T.: Searching for Frameshift Evolutionary Relationships Between Protein Sequence Families. Proteins 37 (1999) 278–283
13. Arvestad, L.: Aligning coding DNA in the presence of frame-shift errors.
Proceedings of the 8th Annual CPM Symposium 1264 (1997) 180–190
14. Arvestad, L.: Algorithms for biological sequence alignment. PhD thesis, Royal Institute of Technology, Stockholm, Numerical Analysis and Computer Science (2000)
15. Blake, R., Hess, S., Nicholson-Tuell, J.: The influence of nearest neighbors on the rate and pattern of spontaneous point mutations. JME 34(3) (1992) 189–200
16. Kosiol, C., Holmes, I., Goldman, N.: An Empirical Codon Model for Protein Sequence Evolution. Molecular Biology and Evolution 24(7) (2007) 1464
17. Pedersen, A., Jensen, J.: A dependent-rates model and an MCMC-based methodology for the maximum-likelihood analysis of sequences with overlapping reading frames. Molecular Biology and Evolution 18 (2001) 763–776
18. Lio, P., Goldman, N.: Models of Molecular Evolution and Phylogeny. Genome Research 8(12) (1998) 1233–1244
19. Altschul, S. et al.: The estimation of statistical parameters for local alignment score distributions. Nucleic Acids Research 29(2) (2001) 351–361
20. Olsen, R., Bundschuh, R., Hwa, T.: Rapid assessment of extremal statistics for gapped local alignment. ISMB (1999) 211–222
21. Delaye, L., DeLuna, A., Lazcano, A., Becerra, A.: The origin of a novel gene through overprinting in Escherichia coli. BMC Evolutionary Biology 8 (2008) 31
22. Hubbard, T. et al.: Ensembl 2007. Nucleic Acids Res. 35 (2007)
23. Clamp, M. et al.: Distinguishing protein-coding and noncoding genes in the human genome. Proc Natl Acad Sci 104(49) (2007) 19428–19433
24. Oostra, B., Chiurazzi, P.: The fragile X gene and its function. Clinical Genetics 60(6) (2001) 399
Mr. Taylor said he would like to point out that this is a continuation of the third or fourth meeting on the matter of the provision of the State law pertaining to the structure of the Board of Education, and, in particular, vacancies on it. He said, in September, Council made certain decisions as to what they would like to see in a revision of the State law, which included that a vacancy appointment would run for the remainder of the term of that particular seat for up to four years. He clarified, in other words, it would not require the appointee to run in an interim election; for example, there will be an interim election next year, and there was an appointee this past summer, so, as things stand now, that person will have to run next fall in that interim election. He said Council also made some other changes, which he can go through if they want him to. He said they did not fully resolve the situation with either doing away with or changing the Nominating Commission, but that was discussed at the last two meetings. He said, since it has been a while since the initial presentation on this, he will point out that there are four Counties of the 24 in Maryland, if he includes Baltimore City, that have a Nominating Commission, and Wicomico is one of the four, but next year at this time there will only be three because Anne Arundel County's Nominating Commission will, per State law, cease to exist this time next year.

Mr. Cannon said there are members of the Nominating Commission in the audience, and he invites them to come before Council, and he appreciates each of them taking the time to be there. Mr. Ben Brumbley, Chair of the Nominating Commission; Mrs. Mary Ashanti, President of Wicomico County NAACP; Mr. Charles Gray, Commission Member representing the Town of Fruitland and the Fruitland Chamber of Commerce; and Ms. Chrystianna Gosnell, Commission Member, came before Council. Mrs. Ashanti pointed out that the Wicomico County NAACP is one of the entities listed in Senate Bill 145.

Mr. Cannon said he appreciates them all being there. He clarified, he does not think Council went in this particular direction off the cuff, where they just decided to change something with the Commission, and creating more work was the last thing they really wanted to do. He said he thinks there was a legitimate concern as to the vacancies that were occurring, and whether or not substantial measures were in place to address vacancies when people were just not showing up, and whether or not that impacted whether they had a quorum, so Council was concerned about that. He said he thinks the concern was whether or not, in the name of diversity, they created an insurmountable goal of trying to get so many people to the table at one time, which might not be something that is physically possible, so the thought was that maybe they need to downsize. He said, in the interim, as Mr. Taylor mentioned earlier, Council found out that, even though Wicomico County had formed a Commission, there were a lot of Counties that were not using Commissions, so a second thought was whether they were going down the right road at all. He said this is open for discussion, as they said before, and Mr. McCain made some very good points at the last Work Session, such as maybe they have not given this enough time, and maybe they need to continue on as they are and not be so quick to make adjustments.
He said, then again, they still want to make sure they are listening to the Commission members, and that Council looks into any substantial changes that might need to be made. He said there was the question of whether the number of Commission members was too large, and he has always asked whether they are requiring too many Public Hearings, where the public is not really attending at all, and it just created sort of a nuisance for everybody as opposed to any benefit, but he will leave it to the Commission members as to whatever input they might like to add.

Mr. McCain said, just to repeat some of the comments from the last meeting, in their packet they keep referring back to the September 3 meeting, but they have talked about this twice since then, and there is not really an updated version of this. He said he felt at the last meeting they had reached a consensus that eliminating the Commission was not a path they were interested in. He said, as for whether they are going to make any changes to the makeup of the Commission, going back to the history of that makeup, there was a lot of input when it was formed to make sure there was community representation, and that there was diverse representation on the Commission. He said, going back to the statements from the last meeting, he, for one, did not necessarily feel this was the best test because this was the first time they actually tested the process, and, for one thing, they only had two candidates, which meant there was not much to do from the Commission's standpoint because they were supposed to recommend two, but they only had two. He clarified, from that standpoint, when someone volunteers to serve on a Commission, they certainly expect them to fulfill their duty, but, at the same time, there was not necessarily a whole lot of motivation there when the work was, in a sense, done for them because there were only two candidates, and they were supposed to recommend two. He clarified, with all of that said, he feels the process worked. He said he thinks the bigger question for Council to actually address now is the issue of whether appointees serve out the remainder of the term, which he certainly thinks they should once they get appointed. He said the situation they have right now is a little unusual, where they get appointed and then have to run in the next election, and he is not so sure whether it was intended to be the next election or the next County election, but the way it is worded it is the next election. He said the person appointed is now out of cycle and has to run next year, and, if elected, has to run again in two more years, and, obviously, that just does not seem to make logical sense. He said he, for one, is not real enthused about Council trying to tinker with the makeup of the Commission because a lot went into making sure those different aspects of the community were represented, so trying to pick and choose which ones to eliminate, or which ones to put back in, he thinks, disagrees with the intent of the original makeup of the Commission.

Mr. Brumbley said he would like to make some clarifications. He said this was the second time the Commission met for the appointment of someone, after Mr. Goslee passed away. He said the first time the Commission met they had to submit six names, and they had 13 candidates who applied. He said, in the Public Hearings, they had questions and answers.
He said they did not have the attendance they would have liked to have had, but they had a decent attendance, and there was time for the public to ask the candidates questions. He clarified, they limited the kinds of questions, so the public could not get into the candidates' personal lives, but rather into their commitment to the students of Wicomico County. Mr. McCain said that is a good thing because the whole purpose of that was to give the public the opportunity.

Mr. Brumbley said the last time the Commission met it was only for District 3 and could not be the entire County because Mr. Goslee was elected from District 3, and was representing District 3, so that limited how many people could apply. He said they had some issues with advertising, and it seemed that the different advertising venues did not respond to the Commission and help them out like they did the first time. He said the Commission asks that, if Council changes things, it provide some type of a budget for secretarial needs, and also for advertising. He said he thinks this Commission did a fabulous job and gave up a lot of time, but, from what he has heard from members of the Commission, and the public, they almost feel a little bit unappreciated given the action that has been taken. He said he knows people hear things and add to them, but the thought of doing away with the Commission, in his opinion, is taking away the voice of the people. He said he does not care if they are the only County in the State of Maryland that has a Commission because they are now giving the citizens of Wicomico County a voice in a process they elected to have with an elected School Board, and for the Council or the County Executive to go forward with filling a vacancy is, he thinks, a slap in the face of the citizens of Wicomico County.

Mr. Dodd said he thinks at the last meeting they established that the secretarial staff from the Board of Education would assist the Nominating Commission. He then asked Mr. Brumbley where this budget would come from, to which Mr. Brumbley responded, that is Council's job. Mr. Dodd asked if it would come from the Board of Education, to which Mr. Cannon responded, he thinks it would come from the Board of Education, and that was his understanding from the last meeting. Mrs. Ashanti added, the budget is not the Commission's role. Mr. Dodd said they need to establish that at some point.

Ms. Gosnell said they are not talking about a huge budget, but advertising was the biggest issue, and they worked around a lot of things with postage, and things like that. She said, when she sent out press releases, she was told they would be put out, but they just were not given the time, and it was not as important the second time around as it was the first time around. Mr. Dodd said it would be a good idea to find out what kind of budget the Commission needs so they can discuss it with the Board of Education.

Mr. Gray said he thinks the timeframe is something that needs to be addressed because, during this last process, the Commission was really jammed up for a length of time to get everything done. He said, to him, the advertising is a crucial part of this because he thinks there is a lot of misconception about what a Board Member is actually supposed to do, for one thing, and he thinks that information needs to get out there at the same time they notify the public of the opening.
He said he thinks there then needs to be an adequate number of ads, with enough time for people to respond, and he believes one of the objectives should be to get a number of qualified candidates to come forward. He said this last time around they had two applicants, and presented them both, so it made their job a little bit easier, but he does not know if that really accomplished what the County would like to have for filling these positions.

Mrs. Ashanti said she would like to correct something she said at the last meeting because she has since talked to Mr. Brumbley, and the African American community was well represented on the Commission, so she wants to correct that. She said she reviewed the advice memorandum dated September 8 from Mr. Taylor, and they do not support any of his recommendations in that memorandum, and she just wants that on the record. Mr. McCain said he was getting at that earlier, and he is not really sure why they are still referring back to that because Council has met twice since then, and, certainly, has moved past several of those items, yet they keep going back to that first meeting they had.

Mr. Holloway said one of the issues they talked about is the attendance of the Commission members, and it was brought up that there was not a quorum a number of times, to which Mr. Brumbley clarified, one time. Mr. Holloway then asked if they could operate without a quorum. He said, if only seven people show up, they cannot operate, so how can they alleviate that problem? Mr. Brumbley said he believes Council goes by Robert's Rules of Order, to which Mr. Holloway responded, they do not. Mr. Cannon clarified, they do informally, to which Mr. Holloway responded, they have never adopted them. Mr. Brumbley said, serving on the Maryland State PTA, they followed Robert's Rules of Order, which says a quorum is one more than half of a Committee, so that is how they operated. He said he does not know if the Board of Education officially adopted Robert's Rules of Order, but they follow it to some extent, so that is why, as Chair, he tries to follow it, so that if anybody has any issues there is some kind of a standard they are following.

Mr. McCain said, just to clarify, Mr. Brumbley said there was only one time when they did not have a quorum, but he has heard the comment repeated several times that they could not get a quorum, when actually they only had that one occasion, so that was not necessarily a big problem. Mr. Cannon clarified, similar to what Mr. Gray said, it was a big problem because of the time constraints, to which Mr. Gray responded, they probably were fortunate because the same people ended up showing up every time, so, basically, they had a core group who pretty much made all of the scheduled meetings. He said, if they are going to have 14 people, he thinks they ought to have a commitment from 14 people because who knows what input they might have been able to get if they had 14 people contribute. He said either that, or they go the other way and limit the number of people to 7.

Mr. Holloway said he will ask his question again, but will probably get the same answer. He then asked, if they have not adopted Robert's Rules of Order, do they have to have a quorum to operate, to which Mr. Cannon responded, he thinks they need to establish something. Mrs. Ashanti said they have to have a quorum to operate, but, like Dr. Boyd said, that was not done, and going forward it needs to be done.
She said, when they have members who are not attending, that is where the Commission could ask the secretary to send a letter to Mr. Culver recommending that the person be replaced. She said that has not been done in the past, but, going forward, it could be done. She said she has been involved in many executive committees over the years, and when there were not enough people, they conducted business, but knew they had to have it ratified, even if that meant having a conference call or a special meeting where they could get everybody on the phone, go over everything, and have it ratified. She said she thinks sometimes they make things too difficult.

Mr. Holloway asked if an alternative would be to cut it back to nine members with five alternate members. He said they could ask who would be at upcoming meetings, and, if someone could not come, they could call an alternate. He then asked if that is an idea, to which Ms. Gosnell responded, it could be, but they had some people on the Commission who did not respond at all, and she kept an attendance record, but some did not even respond to emails about the meetings. Mr. Holloway asked if it would be better to cut the Commission back to nine full-time members and five alternates, so that when they put out emails and do not get a response they could then call an alternate, to which Mr. McCain responded, the only complexity with that is it is not just members. He clarified, each of those people is designated to represent a certain aspect of the community, so then they get into which five aspects they would cut. Ms. Gosnell said maybe they should have let those organizations or municipalities know that their person did not show up.

Mr. Cannon said, from what has been shared with Council today, it looks like they need to address four issues, and one issue is to adjust the timeframe to give the Commission more time to address its concerns. He said replacing a Commission member is a time-consuming process, and by the time a member is replaced, they could be beyond the deadline, so they have to look at recreating the deadline. He said the second issue might be strengthening the rules in reference to absenteeism, and they cannot give them too many strikes because, again, they will be out of time. He said the third issue is making sure they substantially establish support staff, with a secretary and funding for advertising. He said the fourth issue was touched on by Mr. McCain, which is to possibly redefine the next election, and he thinks that is a good point because they had to go between attorneys two or three times to define what exactly was the most appropriate time for this individual, once appointed, to then have to run for election. He concluded, those are the four issues that he sees, but he does not know if there is anything else.

Mr. Gray said background checks were an issue. Mr. Cannon asked if he means as far as the time, or not having the staff to do them, to which Mr. Gray responded, it was an issue of who would be responsible for paying for a background check. Mr. Holloway said that falls into the budget part of it.

Mr. Brumbley said, if Council is going to submit a rewritten Bill to the Legislature, he thinks they should look at two things. He said, in this last process, Council had a tie vote, and there is nothing written as to what happens in the case of Council having a tie vote on the candidates they want to appoint, so he thinks something needs to be set in stone as to what is to be done if that happens.
He said they could start the process all over again, so the applicants who applied and were put forward would have to reapply, but he thinks that is something they need to look at because that was an issue this last time. He clarified, it was not an issue the first time, but it was this last time. He said the other issue is who pays for background checks, and how many they do. He said the first time they had to send six names forward, but the list could have been a little larger in case someone failed the background check, or had an issue. He said it was also brought up about the privacy issue of the background check, and who would get to see what the issue was, or why that person would not be put forward because of a background check. He said those are the kinds of issues they saw, and he thinks the things that need to be written out are who pays for it, how many they pay for, and whether all the names sent to Council have to have a background check. He said that is the type of thing he thinks needs to be worked out going forward, as well as what should happen if Council has a tie like the last time.

Mr. McCain said they can incorporate the background check aspect into the budgetary aspect because he thinks those are all part of the same bucket. He said a fifth item to add to the list is the situation with the tie, if they want specific language. He said that is always going to be a reality because, the way the law reads, it is the remaining Council, so in their Council situation, they could have a tie because it is always the remaining Councilmembers. Mr. Cannon said he actually had this issue on his list, but did not bring it up yet. Mr. Brumbley said they have not had an issue with the Commission having a tie because, actually, in Robert's Rules of Order, the President or Chair does not vote unless there is a tie, and that is when they vote. He clarified, he meant the Council had an issue with a tie, so that is what needs to be addressed.

Mr. Brumbley said he thinks they have to be careful about the timeline because they are talking about replacing a person, or however many seats on the Board they are filling a vacancy for, while making sure the Board's work goes forward, so it really is tight, but they are also talking about making decisions for their children. He said by the time the Commission does 60 days and Council does 60 days, they are talking about 120 days with an empty seat on the Board, and that also throws their numbers off. Mr. Gray said the hearings are what add a lot to that. He clarified, trying to squeeze them into that timeline and advertise them is difficult, and he does not know whether Council can narrow that down to just one, to which Mr. Cannon responded, that would be his recommendation. Mr. Gray said he thinks they have to give at least one, but two is pushing it.

Mr. Dodd said this time was a rare situation, and hopefully it will never happen again, where they were hit with a double whammy when they were down one Councilmember and they were down a Board of Education member, so that was why they ended up with the tie. He said, with that being said, they need a plan B, and they need to move forward with that. He suggested they have one or two more Work Sessions to work out the Commission's concerns.

Mr. Taylor said he would like to add something that was an open item from the last meeting. He said one member of the Commission is to be a member from the Wicomico Council of PTAs.
He said, at the last meeting, it was mentioned that this organization either does not exist anymore or is defunct, so that would be, he thinks, an open item to possibly revise no matter how they do this, as long as they retain the Commission. He said he would also like to make a clarification. He explained, there was mention of his memorandum of September 8, but it does not contain recommendations by him. He clarified, it states right at the top that the following is based upon his understanding of Council's comments at the Work Session on September 3, so it was a synopsis, in effect, of what Council discussed at their meeting on the 3rd.

Mr. Brumbley said, at the present time, there is not a Council of PTAs in Wicomico County, but that does not mean there will not be one in the future. He said there has been talk about reviving that group, so he thinks it would be a little premature to take it out prior to something happening. Mr. Taylor said right now it is defunct, so he thinks it kind of leaves an open question, and they do not know what is going to happen. Mr. McCain said the PTAs still exist, just not the Wicomico Council of PTAs, so the language could be changed to say someone from the PTA as opposed to the actual Council. Mr. Dodd asked who would choose that representative out of all the schools, to which Mr. McCain responded, it would be just like it is now, where they would have to be appointed by the Executive.

Mrs. Ashanti said Council was talking about how people who are appointed may have to run for office in November. She then asked, when Council appoints someone to the County Council, do they have to run in the next election, to which Mr. Cannon responded, yes. Mrs. Ashanti said this is no different, and she does not see the distinction just because they are appointed. She said, if they want to remain in that position in the next election, and they want to continue in their position, they need to run for their position, so she does not see that as a problem. She said she also has a question in reference to background checks. She said, whenever she has gone for a position, she has had to pay for her background check, and she is responsible for that because she is going for that position, and the background check goes to whoever has the job she is applying for. She said she is just throwing that out there, and they are neither for nor against that. She said they are also not against having just one Public Hearing instead of two, but it really would have to be widely publicized, and not just in white media, because there are other non-white media outlets out there, and they need to get in the habit of using them if they want to be one Wicomico. She said, basically, those are her concerns. She said, instead of waiting until they have another problem or have to replace someone, if they already know and have a record that people are not attending, they should send a letter to the County Executive saying these are the people who have not been attending, and ask if he can please contact that particular organization and appoint someone to replace them. She said they do not need to wait until they have a problem and then try to address it, because now is the time to do that.

Ms. Gosnell said she can provide a list of attendance, to which Mr. Cannon responded, if she could provide that to Mrs. Hurley, that would be great. Ms. Gosnell then asked if Mrs. Ashanti could also provide her with a list of suggested media so they can put that in their file.
Mr. Taylor said he will add that he noticed one County, and he cannot remember offhand which one, says the nominees have to be run through the Case Search system, which is provided by the State and shows court records, to which Ms. Gosnell responded, they did that. Mr. Taylor said another way to do it, of course, is to have candidates provide a statement under oath regarding their background, and that might reduce some of the costs. He said, obviously, if they want to do a full formal FBI investigation, it is going to be more costly, so they need to look at that, to which Mr. Cannon responded, that is a good point. Ms. Gosnell said they used the judiciary system, and in the interview each candidate was asked if they understood they had to run for their position at the time of the election. Mr. Cannon said Case Search is a pretty thorough background check, and they are not doing credit checks, but more or less criminal and civil checks. Mr. Dodd said candidates are not run through a background check during a regular election, so he thinks they can do something simple, to which Ms. Gosnell responded, they actually did both.

Ms. Gosnell said it was a pleasure working with the Commission, and she was afraid Council was looking to disband them. She said it started out with a lot of different personalities, but by the end they became a close little group. Mr. Holloway said Council appreciates what they are doing. Mr. McCain said he thinks he is speaking for all of Council when he says they appreciate what the Commission has done, and their willingness to serve on the Commission. He said the comment was made that some of the affected people did not feel appreciated, but Council appreciates what they all did, and this discussion just evolved out of some of the questions that came up as part of the process, and by no means came out of a lack of appreciation, because the work was very much appreciated. He clarified, that is not just for Council; he is sure he is speaking for everyone in the County.

Mr. Holloway said, if they have the unfortunate occasion to need the Commission's services again, in some cases they can see it coming, and in other cases it just pops up, but possibly Council could get a letter stating which of the challenges the Commission had have not been resolved by then, so that Council can help them get through it quicker at that time, solve some of the problems along the way, and help them contact people. Mr. Dodd said Council appreciates everything the Commission has done, and this is not a goodbye, even though that is what it sounds like. He said he thinks at the last meeting they discussed that the Nominating Commission should have regular meetings to stay updated, whether there is a vacancy or not.

There was no further discussion.

John T. Cannon, President
Larry W. Dodd, Vice President, District 3
Ernest F. Davis, District 1 (absent)
Nicole Acle, District 2 (absent)
Josh Hastings, District 4
Joe Holloway, District 5
William R. McCain, At-Large
Laura Hurley, Council Administrator
Field-Enhanced Photocurrent Spectroscopy of Excitonic States in Single-Wall Carbon Nanotubes

Aditya Mohite,† Ji-Tzuoh Lin,† Gamini Sumanasekera,‡ and Bruce W. Alphenaar*†

Department of Electrical and Computer Engineering and Department of Physics, University of Louisville, Louisville, Kentucky 40292

Received February 13, 2006; Revised Manuscript Received June 1, 2006

ABSTRACT

Excitonic and free-carrier transitions in single-wall carbon nanotubes are distinguished using field-enhanced photocurrent spectroscopy. Electric field dissociation allows for the detection of bound-exciton states that otherwise would not contribute to the photocurrent. Excitonic states associated with both the ground-state semiconductor and the ground-state metallic nanotube transitions are resolved. The observation of a metallic excitonic state corroborates recent predictions of a symmetry gap existing in metallic nanotubes.

Optical spectroscopy is now an established technique for probing single-wall nanotube (SWNT) properties and for exploring the potential of SWNTs for optoelectronic applications. The SWNT optical absorbance spectrum has frequently been described using a noninteracting model in which optical excitation across pairs of van Hove singularities in the electron density of states creates free electron–hole pairs. Prominent peaks in the absorbance spectrum of SWNT films are ascribed to the two lowest energy optical transitions for semiconducting nanotubes ($E_{11}^S$ and $E_{22}^S$) and to the lowest energy transition for metallic nanotubes ($E_{11}^M$). It has been persuasively argued, however, that the presence of strong Coulombic interactions should make exciton formation the dominant optical absorption mechanism in SWNTs. In fact, recent experimental work has conclusively demonstrated that optical absorption in SWNTs occurs primarily through the creation of bound excitons, rather than through the creation of free electron–hole pairs. This raises important issues regarding the use of carbon nanotubes for photodetectors, and regarding the nature of carbon nanotube photoconductivity. Because optical excitations in SWNTs create strongly bound electron–hole pairs, exciton binding should block the generation of free carriers and limit the sensitivity of the SWNT photocurrent response. In recent SWNT photocurrent measurements performed by Freitag et al., optically generated excitons are thought to decay to lower energy continuum states, where they can then contribute to the observed photocurrent. Such a relaxation process, while postulated for excitons associated with the $E_{22}^S$ transition, should not be possible for the ground-state $E_{11}^S$ transition. No photocurrent measurements have yet been reported, however, for the lower energy regime.

In this paper, electric-field-dependent photocurrent measurements of a SWNT capacitor are used to distinguish between free-carrier and bound-excitonic transitions in the SWNT excitation spectrum. Near the $E_{11}^S$ transition, both excitonic and free-carrier transitions are resolvable, with an exciton binding energy of 110 meV. Near the $E_{22}^S$ transition, only a single field-independent peak in the photocurrent spectrum is observed, indicating (in agreement with Freitag et al.) a fast decay of the exciton into the lower energy free-carrier states. Surprisingly, an exciton resonance associated with metallic nanotubes is also resolved.
This can be explained by recent theory that shows that in metallic nanotubes optical transitions between the overlapping states at the Fermi energy are disallowed, giving rise to a symmetry gap.

To probe the SWNT photoexcitation spectrum, we use a recently described displacement photocurrent spectroscopy technique in which the SWNT film under study acts as one plate of a parallel plate capacitor. This allows for relatively large electric fields to be placed across the nanotubes without producing any appreciable dark current. Our measurement setup is shown in Figure 1a. CVD-grown SWNTs are dispersed onto a 100-$\mu$m-thick quartz slide to create a uniform film of nanotubes. TEM and Raman analysis reveals a narrow distribution of SWNT diameters, with an average diameter of 1.3 nm. A 30 nm layer of ITO is deposited by electron-beam evaporation to form a transparent top contact to the nanotube film, while the backside of the slide is anchored to a grounded copper block inside of an optical flow cryostat. This creates a capacitor in which the nanotube film is coupled capacitively to ground through the quartz dielectric. Pulsed laser light incident on the film surface produces a displacement current across the capacitor, which can be measured with a lock-in current amplifier. Simultaneously with the displacement current, we also measure the absorbance spectrum by detecting the percentage of incident light transmitted through the nanotube film via a hole in the copper block. A dc voltage, $V_{dc}$, applied to the ITO film is used to create a variable electric field across the device. Our optical excitation source is a Spectra Physics optical parametric amplifier (OPA) pumped by a 130 fs pulsed Ti:Sapphire regenerative amplifier with a repetition rate of 1 kHz. The excitation photon energy is tuned between 0.4 and 4 eV, and the incident power is kept constant at 25 mW.

Figure 1. (a) Diagram illustrating the test device structure. The carbon nanotubes lie parallel to the sample surface. The displacement photocurrent is measured by amplifying the out-of-phase signal generated by pulsed laser light incident on the SWNT/dielectric/metal capacitor. (b) Band diagram showing the proposed photocurrent generation mechanism. Shown are the free-carrier ($E_c$) and bound-exciton ($E_{ex}$) transition energies, along with the exciton binding energy ($E_b$).

The carrier generation mechanism in the SWNT film can be understood using the band diagram shown in Figure 1b. Here, the free-carrier ($E_c$) and bound-exciton ($E_{ex}$) transition energies are indicated for an individual nanotube within the ITO/SWNT/dielectric capacitor. A built-in potential, $V_0$, exists at the SWNT/ITO interface because of the difference in work functions between the SWNT and ITO and the particular distribution of trapped charge existing at the interface. The bias, $V_{dc}$, applied across the capacitor can be used to vary the magnitude of the electric field and, hence, the band bending at the ITO/SWNT interface. Under illumination, photon absorption results in the excitation of an electron from the ground state to form an electron–hole pair in the nanotube film. If the excited charge carriers are free to move, then the band bending at the SWNT/ITO interface will result in separation of the positive and negative charge, and a measurable displacement current across the capacitor.
If, however, the photoexcited carriers form a bound-exciton state, then no displacement current will be measured unless the exciton first dissociates into available free-carrier states. Two main dissociation processes are considered: (a) exciton decay into a lower energy state or (b) exciton separation through Fowler–Nordheim tunneling into neighboring states. The Fowler–Nordheim tunneling process is strongly dependent on electric field, so we expect to see a strong field dependence of the photocurrent at incident photon energies corresponding to the bound-exciton ground-state energy. Much weaker field dependence is expected at photon energies corresponding to the free-carrier transition energy, or in the case where a decay path to lower energy free-carrier states is available. In contrast with the photocurrent spectrum, the absorbance spectrum should show only weak electric field dependence, with no clear distinction between free-carrier and bound-excitonic transitions.

Figure 2 shows the absorbance and displacement photocurrent spectra in the energy regime of the $E_{11}^S$ transition for applied biases between 0 and 32 V. The absorbance spectrum (Figure 2a) shows a single peak at an excitation energy of 0.62 eV; there is no noticeable bias dependence in either the position or magnitude of the peak. By contrast, the displacement photocurrent spectrum (Figure 2b) shows a clear bias dependence. At 0 V, a single photocurrent peak at 0.73 eV is observed, while for higher bias a second peak appears at 0.62 eV, corresponding to the absorbance peak energy. This lower energy photocurrent peak increases in magnitude with increasing bias, until it dominates the higher energy peak.

Figure 3 shows the corresponding set of bias-dependent measurements performed in the regime of the $E_{22}^S$ transition. In this case, a single peak is observed in both the absorbance (Figure 3a) and photocurrent (Figure 3b) spectra at 1.21 eV. The magnitude and position of both absorbance and photocurrent peaks are independent of bias, and there is no splitting of the photocurrent peak as observed for the $E_{11}^S$ transition.

As discussed above, the dominant peaks observed in the SWNT absorbance spectra have been shown to be due to the formation of excitonic states. The observed absorbance peaks can thus be assigned to the ground ($E_{11}^S$) and next highest energy ($E_{22}^S$) excitonic transitions in the semiconducting nanotubes (Figures 2a and 3a, respectively). For the photocurrent spectrum in Figure 3b, the peak response matches that of the absorbance peak, and thus also appears to be attributable to the same $E_{22}^S$ excitonic transition. The fact that there is no bias dependence in the $E_{22}^S$ photocurrent peak suggests that the exciton is able to dissociate into a free electron–hole pair without requiring the input of any additional energy. This is in agreement with the photocurrent measurements reported in ref 9. It appears that the availability of lower energy free-carrier states provides a direct pathway for dissociation of the second-order bound-exciton state.

Of greater interest is the photocurrent spectrum for the $E_{11}^S$ transition, where the exciton peak does not appear in the photocurrent until a finite bias is applied.
Similar behavior has been reported in photocurrent measurements of 1D polymer chains; in the polymer case, exciton dissociation has been shown to occur through field-enhanced tunneling into adjacent free-carrier states.\textsuperscript{15} At high fields (approximated by the binding energy divided by the exciton radius, or $E_b/r$), the bound state is destroyed. At intermediate fields, the barrier to field ionization is not surmounted, but the carriers can still dissociate by tunneling. An analogous picture can be used to describe the nanotube system. The maximum electric field in the nanotube film is approximately $F_{\text{nt}} = F_q(\epsilon_q/\epsilon_{\text{nt}})$, where $F_q$ is the electric field across the quartz, and $\epsilon_q$ and $\epsilon_{\text{nt}}$ are the dielectric constants of the quartz and nanotube films, respectively. Taking $F_q$ to be approximately $V_{\text{dc}}/d$, where the quartz thickness $d = 100\ \mu$m, $V_{\text{dc}} = 32$ V, $\epsilon_q = 3.8$, and $\epsilon_{\text{nt}} = 7$ gives $F_{\text{nt}} = 1.7 \times 10^5$ V/m. This field is not large enough for complete annihilation of the exciton, but as depicted in Figure 1b, dissociation of the excitons in the SWNT can still occur via Fowler–Nordheim tunneling across the potential barrier formed by the exciton binding energy, $E_b$. The photocurrent, $I_p$, is then proportional to $$I_p \propto \exp\left[-\frac{4}{3}\frac{\sqrt{2m^*}}{q\hbar}\frac{E_b^{3/2}t}{(V_0 + \gamma V_{\text{dc}})}\right]$$ where $\gamma V_{\text{dc}}$ is the fraction of the applied voltage, $V_{\text{dc}}$, that drops across the nanotubes, and $t$ is the thickness of the nanotube film. If we take $I_0$ to be the photocurrent observed with zero applied voltage, then we obtain $$I_p/I_0 = \exp\left[\frac{4}{3}\frac{\sqrt{2m^*}}{q\hbar}\frac{E_b^{3/2}t}{V_0}\frac{\gamma V_{\text{dc}}}{(V_0 + \gamma V_{\text{dc}})}\right] \equiv \exp\left[\frac{a}{(1 + b/V_{\text{dc}})}\right] \quad (1)$$ which provides an expression for the photocurrent having only two fitting parameters $$a = \frac{4}{3}\frac{\sqrt{2m^*}}{q\hbar}\frac{E_b^{3/2}t\gamma}{V_0} \quad \text{and} \quad b = V_0/\gamma$$

Figure 4. Normalized photocurrent versus applied bias for (a) the $E_{11}^S$ semiconductor exciton transition and (b) the $E_{11}^M$ metallic exciton transition. The solid black circles are the experimental data points, and the dashed lines are fits to eq 1 using $a_S = 1.60$, $a_M = 0.48$, and $b_S = b_M = 2.69$ V.

Figure 4a shows $\ln(I_p/I_0)$ plotted versus $V_{\text{dc}}$ for the 0.62 eV photocurrent peak, together with a fit to eq 1 for fitting parameters $a_S = 1.60$ and $b_S = 2.69$ V (where the subscript S refers to the semiconducting transition). Clearly, the photocurrent exciton peak is described well by the field-enhanced tunneling model. We now consider the higher energy photocurrent peak at 0.73 eV in Figure 2b. The lack of bias dependence indicates that free electron–hole pairs are formed at this excitation energy and implies that this peak is due to direct optical excitation into SWNT free-carrier states. At low bias, the free-carrier transition is resolvable because the finite field required for dissociation to occur masks the excitonic transition. At high bias, the excitonic transition dominates; this dominance is also observed in the absorbance spectrum, where there is little if any indication of the higher energy peak.
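As a concrete illustration, the short Python sketch below (our addition, not part of the original analysis) reproduces the field estimate and tabulates the two-parameter tunneling curve of eq 1 using the fitted values quoted in the text; the variable names are ours.

```python
import numpy as np

# Peak field in the nanotube film: F_nt = (V_dc/d) * (eps_q/eps_nt),
# with V_dc = 32 V, d = 100 um, eps_q = 3.8, eps_nt = 7 as quoted above.
V_dc, d = 32.0, 100e-6              # V, m
eps_q, eps_nt = 3.8, 7.0
F_nt = (V_dc / d) * eps_q / eps_nt
print(f"F_nt = {F_nt:.2e} V/m")     # ~1.7e5 V/m, matching the estimate in the text

# Field-enhanced tunneling model of eq 1: ln(I_p/I_0) = a / (1 + b/V_dc),
# evaluated with the fitted parameters a_S = 1.60 and b_S = 2.69 V.
a_S, b_S = 1.60, 2.69
for V in np.linspace(4.0, 32.0, 8):
    print(f"V_dc = {V:5.1f} V   ln(I_p/I_0) = {a_S / (1.0 + b_S / V):.3f}")
```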
We can estimate the $E_{11}^S$ exciton binding energy by taking the difference between the energy of the free-carrier transition and the excitonic transition, to give $E_b = 110$ meV. This agrees with theoretical predictions for the binding energy, assuming a nanotube diameter of 1.3 nm and a dielectric constant of $\epsilon_{\text{nt}} = 7$. We note that, because we measure a film of nanotubes, the high-energy photocurrent peak could potentially be due to absorption within a lower-than-average-diameter SWNT population. If that were the case, however, there should be some evidence for this nanotube population in the $E_{11}^S$ absorbance spectrum and in the $E_{22}^S$ absorbance/photocurrent spectra; no extra peaks are observed in any of these spectra. Absorption into the continuum states is thought to be extremely weak compared to absorption into the excitonic states. It might be expected, then, that the excitonic photocurrent peak would completely dominate the spectrum at high bias; however, this is not observed. This is most likely because of the reduced detection efficiency of the photoexcited excitons compared to the photoexcited free carriers. Only a small percentage of the excitons dissociate by tunneling, and only a fraction of these reach the ITO contact before recombination occurs. Because of this, the magnitudes of the free-carrier and excitonic peaks do not directly correspond to the relative absorption between the two states. The width of the excitonic photocurrent peak is also not identical to the width of the absorption peak, even though both peaks are thought to be due to absorption into an excitonic state. The peak widths are determined in part by the diameter distribution of the contributing nanotubes: increasing the number of contributing nanotubes broadens, on average, the sampled diameter distribution and in turn produces a wider photoexcitation peak. As described above, a much larger number of excitons are produced through light absorption than are captured as photocurrent, and, hence, a much larger number of nanotubes contribute to the absorption peak than to the photocurrent peak. This implies that the absorption peak should be wider than the photocurrent peak, as is observed. Figure 5 shows the (a) absorbance and (b) displacement photocurrent spectra for the high-energy regime near the metallic $E_{11}^M$ transition. The results are similar to those observed for the $E_{11}^S$ transition. A single, bias-independent peak is observed in the absorbance spectrum at 1.81 eV, whereas two main peaks are observed in the photocurrent spectrum: a bias-independent peak at 1.86 eV and a bias-dependent peak at the absorption peak energy of 1.81 eV. As in the $E_{11}^S$ case, the bias-dependent 1.81 eV peak can be attributed to a bound-exciton state. Although it is counterintuitive to consider bound excitons existing in metallic systems, Spataru et al. have in fact predicted the existence of bound-excitonic states for metallic nanotubes. In $(n,n)$ metallic nanotubes, there is a crossover between two sets of conducting states at the Fermi energy. However, each set of states has different symmetry, so that optical transitions between the two sets of states are suppressed. This symmetry gap allows for the formation of bound-exciton states having finite lifetime even in metallic nanotubes. If we assign the 1.86 eV peak to the metallic free-carrier transition, we can estimate the exciton binding energy to be 50 meV, or somewhat less than half of the value obtained for the semiconducting transition.
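The binding-energy arithmetic here is simple enough to state in two lines; the following fragment (ours) just records it, together with the $(E_b^M/E_b^S)^{3/2}$ ratio that rescales the tunneling fit parameter below.

```python
# Binding energies as (free-carrier peak) - (exciton peak), in eV, from the text.
Eb_S = 0.73 - 0.62      # semiconducting E_11: 110 meV
Eb_M = 1.86 - 1.81      # metallic E_11 (Figure 5): 50 meV
print(f"E_b: {1e3*Eb_S:.0f} meV (S), {1e3*Eb_M:.0f} meV (M)")

# Ratio used below for the fit parameter: a_M = a_S * (Eb_M/Eb_S)^(3/2)
print(f"a_M = {1.60 * (Eb_M / Eb_S) ** 1.5:.2f}")   # ~0.49, close to the quoted a_M = 0.48
```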
In Figure 4b, $\ln(I_p/I_0)$ is plotted versus $V_{dc}$ for the 1.81 eV photocurrent peak. Assuming that dissociation of the metallic bound exciton also occurs by field-assisted tunneling, it should be possible to describe these data with our tunneling model while incorporating fitting parameters that are consistent with those obtained for the $E_{11}^S$ bound-exciton peak. The interface potential, $V_0$, should be unchanged from the semiconductor case, so that the $b$ parameter will be fixed, giving $b_M = b_S = 2.69$ V. The $a$ parameter will be modified only by the change in exciton binding energy, giving $a_M = a_S\left(E_b^M/E_b^S\right)^{3/2} = 0.48$. The dashed line in Figure 4b shows eq 1 plotted using these values for $a_M$ and $b_M$. The fit is clearly not as good as that in the semiconductor case; however, the model does predict the magnitude of the photocurrent accurately in the high voltage regime. The assumption that the interface potential remains fixed is possibly incorrect because of the changing charging conditions at the nanotube/contact interface. In conclusion, by comparing absorbance and bias-dependent photocurrent measurements, we are able to distinguish between free-carrier and bound-excitonic transitions in single-wall nanotubes. With this technique, we are able to demonstrate that field dissociation is generally necessary to observe photocurrent associated with the ground-state optical transition. We also provide the first evidence for excitonic states in metallic nanotubes. The method should be generally applicable to individual nanotubes and semiconducting nanowires.

**Acknowledgment.** We thank R. W. Cohn and J. Kielkopf for valuable discussions. Funding was provided by ONR/NSF (No. ECS-0224114), ONR (No. N00014-06-1-0228), and NASA (No. NCC 5-571).

**References**

(1) *Carbon Nanotubes: Synthesis, Structure, Properties, and Applications*; Dresselhaus, M., Dresselhaus, G., Avouris, P., Eds.; Springer: Berlin, 2001. (2) Kataura, H.; Kumazawa, Y.; Maniwa, Y.; Umezu, I.; Suzuki, S.; Ohtsuka, Y.; Achiba, Y. *Synth. Met.* **1999**, *103*, 2555–2558. (3) Ando, T. *J. Phys. Soc. Jpn.* **1997**, *66*, 1066. (4) Avouris, Ph. *MRS Bull.* **2004**, *29*, 403. (5) Spataru, C. D.; Ismail-Beigi, S.; Benedict, L. X.; Louie, S. G. *Phys. Rev. Lett.* **2004**, *92*, 077402. (6) Korovyanik, O. J.; Sheng, C.-X.; Vardeny, Z. V.; Dalton, A. B.; Baughman, R. H. *Phys. Rev. Lett.* **2004**, *92*, 174303. (7) Wang, F.; Dukovic, G.; Brus, L. E.; Heinz, T. F. *Science* **2005**, *308*, 838. (8) Fujiwara, A.; Matsuoka, Y.; Suematsu, H.; Ogawa, N.; Miyano, K.; Kataura, H.; Maniwa, Y.; Suzuki, S.; Achiba, Y. *Jpn. J. Appl. Phys.* **2001**, *40*, L1229. (9) Freitag, M.; Martin, Y.; Misewich, J. A.; Martel, R.; Avouris, Ph. *Nano Lett.* **2003**, *3*, 1067. (10) Balasubramanian, K.; Fan, Y.; Burghard, M.; Kern, K.; Friedrich, M.; Wannek, U.; Mews, A. *Appl. Phys. Lett.* **2004**, *84*, 2400. (11) Perebeinos, V.; Tersoff, J.; Avouris, Ph. *Phys. Rev. Lett.* **2004**, *92*, 257402. (12) Mohite, A.; Chakraborty, S.; Gopinath, P.; Sumanasekera, G. U.; Alphenaar, B. W. *Appl. Phys. Lett.* **2005**, *86*, 061114. (13) Mohite, A.; Sumanasekera, G. U.; Hirahara, K.; Bandow, S.; Iijima, S.; Alphenaar, B. W. *Chem. Phys. Lett.* **2005**, *412*, 190. (14) Vaddiraju, S.; Mohite, A.; Chin, A.; Meyyappan, M.; Sumanasekera, G. U.; Alphenaar, B. W.; Sunkara, M. K. *Nano Lett.* **2005**, *5*, 1625. (15) Moses, D.; Wang, J.; Heeger, A. J.; Kirova, N.; Brazovski, S. *Proc. Natl. Acad. Sci. U.S.A.* **2001**, *98*, 13496.
5. E. Bernstein and U. Vazirani, "Quantum complexity theory," in *Proc. 25th ACM Symp. on Theory of Computing*, pp. 11–20 (1993). 6. A. Berthiaume and G. Brassard, "The quantum challenge to structural complexity theory," in *Proc. 7th Conf. on Structure in Complexity Theory*, IEEE Computer Society Press, pp. 132–137 (1992). 7. A. Berthiaume and G. Brassard, "Oracle quantum computing," in *Proc. Workshop on Physics and Computation*, pp. 195–199, IEEE Computer Society Press (1992). 8. D. Coppersmith, "An approximate Fourier transform useful in quantum factoring," *IBM Research Report RC 19642* (1994). 9. D. Deutsch, "Quantum theory, the Church–Turing principle and the universal quantum computer," *Proc. Roy. Soc. Lond. Ser. A*, Vol. 400, pp. 96–117 (1985). 10. D. Deutsch, "Quantum computational networks," *Proc. Roy. Soc. Lond. Ser. A*, Vol. 425, pp. 73–90 (1989). 11. D. Deutsch and R. Jozsa, "Rapid solution of problems by quantum computation," *Proc. Roy. Soc. Lond. Ser. A*, Vol. 439, pp. 553–558 (1992). 12. D. P. DiVincenzo, "Two-bit gates are universal for quantum computation," *Phys. Rev. A*, Vol. 51, pp. 1015–1022 (1995). 13. R. Feynman, "Simulating physics with computers," *International Journal of Theoretical Physics*, Vol. 21, No. 6/7, pp. 467–488 (1982). 14. R. Feynman, "Quantum mechanical computers," *Foundations of Physics*, Vol. 16, pp. 507–531 (1986). (Originally appeared in *Optics News*, February 1985.) 15. L. Fortnow and M. Sipser, "Are there interactive protocols for co-NP languages?" *Inform. Proc. Lett.*, Vol. 28, pp. 249–251 (1988). 16. D. M. Gordon, "Discrete logarithms in GF(p) using the number field sieve," *SIAM J. Discrete Math.*, Vol. 6, pp. 124–139 (1993). 17. G. H. Hardy and E. M. Wright, *An Introduction to the Theory of Numbers, Fifth Edition*, Oxford University Press, New York (1979). 18. R. Landauer, "Is quantum mechanically coherent computation useful?" in *Proceedings of the Drexel-4 Symposium on Quantum Nonintegrability — Quantum Classical Correspondence* (D. H. Feng and B-L. Hu, eds.), International Press, to appear. 19. Y. Lecerf, "Machines de Turing réversibles. Récursive insolubilité en $n \in \mathbb{N}$ de l'équation $u = \theta^n u$, où $\theta$ est un isomorphisme de codes," *Comptes Rendus de l'Académie des Sciences*, Vol. 257, pp. 2597–2600 (1963). 20. A. K. Lenstra and H. W. Lenstra, Jr., eds., *The Development of the Number Field Sieve*, Lecture Notes in Mathematics No. 1554, Springer-Verlag (1995); this book contains reprints of the articles that were critical in the development of the fastest known factoring algorithm. 21. H. W. Lenstra, Jr. and C. Pomerance, "A rigorous time bound for factoring integers," *J. Amer. Math. Soc.*, Vol. 5, pp. 483–516 (1992). 22. S. Lloyd, "A potentially realizable quantum computer," *Science*, Vol. 261, pp. 1569–1571 (1993). 23. S. Lloyd, "Envisioning a quantum supercomputer," *Science*, Vol. 263, p. 695 (1994). 24. G. L. Miller, "Riemann's hypothesis and tests for primality," *J. Comp. Sys. Sci.*, Vol. 13, pp. 300–317 (1976). 25. S. Pohlig and M. Hellman, "An improved algorithm for computing discrete logarithms over GF(p) and its cryptographic significance," *IEEE Trans. Information Theory*, Vol. 24, pp. 106–110 (1978). 26. C. Pomerance, "Fast, rigorous factorization and discrete logarithm algorithms," in *Discrete Algorithms and Complexity (Proc. Japan–US Joint Seminar)*, pp. 119–143, Academic Press (1986). 27. R. L. Rivest, A. Shamir, and L.
Adleman, "A method for obtaining digital signatures and public-key cryptosystems," *Communications of the ACM*, Vol. 21, No. 2, pp. 120–126 (1978). 28. A. Shamir, "IP = PSPACE," in *Proc. 31st Ann. Symp. on Foundations of Computer Science*, pp. 11–15, IEEE Computer Society Press (1990). 29. D. Simon, "On the power of quantum computation," in *Proc. 35th Ann. Symp. on Foundations of Computer Science*, pp. 116–123, IEEE Computer Society Press (1994). 30. W. G. Teich, K. Obermayer, and G. Mahler, "Structural basis of multistationary quantum systems II: Effective few-particle dynamics," *Phys. Rev. B*, Vol. 37, pp. 8111–8121 (1988). 31. T. Toffoli, "Reversible computing," in *Automata, Languages and Programming, Seventh Colloq.*, Lecture Notes in Computer Science No. 84 (J. W. De Bakker and J. van Leeuwen, eds.), pp. 632–644, Springer-Verlag (1980). 32. W. G. Unruh, "Maintaining coherence in quantum computers," *Phys. Rev. A*, Vol. 51, pp. 992–997 (1995). 33. A. Yao, "Quantum circuit complexity," in *Proc. 34th Ann. Symp. on Foundations of Computer Science*, pp. 352–361, IEEE Computer Society Press (1993).

where this equation was obtained from Condition (7.34) by dividing by $q$. The first thing to notice is that the multiplier on $r$ is a fraction with denominator $p - 1$, since $q$ evenly divides $c(p - 1) - \{c(p - 1)\}_q$. Thus, we need only round $d/q$ off to the nearest multiple of $1/(p - 1)$ and divide $(\bmod\ p - 1)$ by $$c' = \frac{c(p - 1) - \{c(p - 1)\}_q}{q}$$ (7.40) to find a candidate $r$. To show that this experiment need only be repeated a polynomial number of times to find the correct $r$ requires only a few more details. The problem is again that we cannot divide by a number which is not relatively prime to $p - 1$. For the general case of the discrete log algorithm, we do not know that all possible values of $c'$ are generated with reasonable likelihood; we only know this for about one-tenth of them. This additional difficulty makes the next step harder than the corresponding step in the two previous algorithms. If we knew the remainder of $r$ modulo all prime powers dividing $p - 1$, we could use the Chinese remainder theorem to recover $r$ in polynomial time. We will only be able to find this remainder for primes larger than 20, but with a little extra work we will still be able to recover $r$. What we have is that each good $(c, d)$ pair is generated with probability at least $.137p/q^2 > 1/(16q)$, and that at least a tenth of the possible $c$'s are in a good $(c, d)$ pair. From Eq. (7.40), it follows that these $c$'s are mapped from $c/q$ to $c'/(p - 1)$ by rounding to the nearest integer multiple of $1/(p - 1)$. Further, the good $c$'s are exactly those in which $c/q$ is close to $c'/(p - 1)$. Thus, each good $c$ corresponds to exactly one $c'$. We would like to show that for any prime power $p_i^{\alpha_i}$ dividing $p - 1$, a random good $c'$ is unlikely to be divisible by $p_i$. If we are willing to accept a large constant factor in the running time of the algorithm, we can just ignore the prime powers under 20: if we know $r$ modulo all prime powers over 20, we can try all possible residues for primes under 20 with only a (large) constant factor increase in running time. Because at least one tenth of the $c$'s were in a good $(c, d)$ pair, at least one tenth of the $c'$'s are good. Thus, for a prime power $p_i^{\alpha_i}$, a random good $c'$ is divisible by $p_i^{\alpha_i}$ with probability at most $10/p_i^{\alpha_i}$.
If we have $t$ good $c'$'s, the probability of having a prime power over 20 that divides all of them is therefore at most $$\sum_{\substack{p_i^{\alpha_i} > 20 \\ p_i^{\alpha_i} \mid p - 1}} \left(\frac{10}{p_i^{\alpha_i}}\right)^t,$$ (7.41) where the sum is over all prime powers greater than 20 that divide $p - 1$. This sum (over all integers $> 20$) converges for $t = 2$, and goes down by at least a factor of 2 for each further increase of $t$ by 1; thus, for some large constant $t$, it is less than 1/2. Recall that each good $c'$ is obtained with probability at least $1/(16q)$ from any experiment. Since there are $q/10$ good $c'$'s, after $160t$ experiments we are likely to obtain a sample of $t$ good $c'$'s chosen equally likely from all good $c'$'s. Thus, we will be able to find a set of $c'$'s such that all prime powers $p_i^{\alpha_i} > 20$ dividing $p - 1$ are relatively prime to at least one of these $c'$'s. For each prime $p_j$ less than 20, we thus have at most 20 possibilities for the residue modulo $p_j^{\alpha_j}$, where $\alpha_j$ is the exponent on prime $p_j$ in the prime factorization of $p - 1$. We can thus try all possibilities for residues modulo powers of primes less than 20: for each possibility we can calculate the corresponding $r$ using the Chinese remainder theorem, and then check to see whether it is the desired discrete logarithm. This algorithm does not use very many properties of $\mathbb{Z}_p$, so we can use the same algorithm to find discrete logarithms over other fields such as $\mathbb{Z}_{p^n}$. What we need is that we know the order of the generator, and that we can multiply and take inverses of elements in polynomial time. If one were to actually program this algorithm (which must wait until a quantum computer is built), there are many ways in which the efficiency could be increased over the efficiency shown in this paper.

**Acknowledgements**

I would like to thank Jeff Lagarias for finding and fixing a critical bug in the first version of the discrete log algorithm. I would also like to thank him, Charles Bennett, Gilles Brassard, Andrew Odlyzko, Dan Simon, and Umesh Vazirani, as well as other correspondents too numerous to list, for productive discussions, for corrections to and improvements of early drafts of this paper, and for pointers to the literature.

**References**

1. P. Benioff, "Quantum mechanical Hamiltonian models of Turing machines," *J. Stat. Phys.*, Vol. 29, pp. 515–546 (1982). 2. P. Benioff, "Quantum mechanical Hamiltonian models of Turing machines that dissipate no energy," *Phys. Rev. Lett.*, Vol. 48, pp. 1581–1585 (1982). 3. C. H. Bennett, "Logical reversibility of computation," *IBM J. Res. Develop.*, Vol. 17, pp. 525–532 (1973). 4. C. H. Bennett, E. Bernstein, G. Brassard and U. Vazirani, "Strengths and weaknesses of quantum computing," manuscript (1994). Currently available through the World-Wide Web at URL http://vesta.physics.ucla.edu/7777/.

Note that we now have two moduli to deal with, $p-1$ and $q$. While this makes keeping track of things more confusing, we will still be able to obtain $r$ using an algorithm similar to the easy case. The probability of observing a state $\ket{c,d,y}$ with $y \equiv g^k \pmod{p}$ is, almost as before, \[ \left| \frac{1}{(p-1)q} \sum_{\substack{a,b \\ a - rb \equiv k}} \exp \left( \frac{2\pi i}{q} (ac + bd) \right) \right|^2 \] (7.28) where the sum is over all $(a,b)$ such that $a - rb \equiv k \pmod{p-1}$.
We now use the relation \[ a = br + k - (p-1) \left\lfloor \frac{br+k}{p-1} \right\rfloor \] (7.29) and substitute in the above expression to obtain the amplitude \[ \frac{1}{(p-1)q} \sum_{b=0}^{p-2} \exp \left( \frac{2\pi i}{q} \left(brc + kc + bd - c(p-1) \left\lfloor \frac{br+k}{p-1} \right\rfloor\right) \right). \] (7.30) The absolute value of the square of this amplitude is the probability of observing the state $\ket{c,d,g^k \pmod{p}}$. We will now analyze this expression. First, a factor of $\exp(2\pi i kc/q)$ can be taken out of all the terms and ignored, because it does not change the probability. Next, we split the exponent into two parts and factor out $b$ to obtain \[ \frac{1}{(p-1)q} \sum_{b=0}^{p-2} \exp \left( \frac{2\pi i}{q} U \right) \exp \left( \frac{2\pi i}{q} V \right), \] (7.31) where \[ U = bT, \qquad T = rc + d - \frac{r}{p-1} \{c(p-1)\}_q, \] (7.32) and \[ V = \left( \frac{br}{p-1} - \left\lfloor \frac{br+k}{p-1} \right\rfloor \right) \{c(p-1)\}_q. \] (7.33) Here by $\{z\}_q$ we mean the residue of $z \pmod{q}$ with $-q/2 < \{z\}_q \leq q/2$. We will show that if we get enough "good" outputs, then we can still deduce $r$, and that, furthermore, the chance of getting a "good" output is constant. The idea is that if \[ |\{T\}_q| = \left| rc + d - \frac{r}{p-1} \{c(p-1)\}_q - jq \right| \leq \frac{1}{2}, \] (7.34) where $j$ is the closest integer to $T/q$, then as $b$ varies between 0 and $p-2$, the phase of the first exponential term in Eq. (7.31) only varies over at most half of the unit circle. Further, if \[ |\{c(p-1)\}_q| \leq q/20, \] (7.35) then $|V|$ is always at most $q/20$, so the phase of the second exponential term in Eq. (7.31) is never farther than $\exp(\pi i/10)$ from 1. By combining these two observations, we will show that if both conditions hold, then the contribution to the probability from the corresponding term is significant. Furthermore, both conditions will hold with constant probability, and a reasonable sample of $c$'s for which Condition (7.34) holds will allow us to deduce $r$. We now give a lower bound on the probability of each good output, i.e., an output that satisfies Conditions (7.34) and (7.35). We know that as $b$ ranges from 0 to $p-2$, the phase of $\exp(2\pi i U/q)$ ranges from 0 to $2\pi W$, where \[ W = \frac{p-2}{q} \left( rc + d - \frac{r}{p-1} \{c(p-1)\}_q - jq \right) \] (7.36) and $j$ is as in Eq. (7.34). Thus, the component of the amplitude of the first exponential in Eq. (7.31) in the direction \[ \exp \left( \pi i W \right) \] (7.37) is at least $\cos(2\pi |W/2 - Wb/(p-2)|)$. Now, by Condition (7.35), the phase can vary by at most $\pi/10$ due to the second exponential $\exp(2\pi i V/q)$. Applying this variation in the manner that minimizes the component in the direction (7.37), we get that the component in this direction is at least $\cos(2\pi |W/2 - Wb/(p-2)| + \pi/10)$. Since $p < q$ and, from Condition (7.34), $|W| \leq 1/2$, putting everything together, the probability of arriving at a state $\ket{c,d,y}$ that satisfies both Condition (7.34) and (7.35) is at least \[ \left( \frac{1}{q} \frac{2}{\pi} \int_{\pi/10}^{\pi/2 + \pi/10} \cos t \ dt \right)^2, \] (7.38) or at least $.137/q^2$. We will now count the number of pairs $(c,d)$ satisfying Conditions (7.34) and (7.35). The number of pairs $(c,d)$ such that (7.34) holds is exactly the number of possible $c$'s, since for every $c$ there is exactly one $d$ such that (7.34) holds (round off the fraction to the nearest integer to obtain this $d$).
The number of $c$'s for which (7.35) holds is approximately $q/10$. Thus, there are $q/10$ pairs $(c,d)$ satisfying both conditions. Multiplying by $p-1$, which is the number of possible $y$'s, gives approximately $pq/10$ states $\ket{c,d,y}$. Combining this calculation with the lower bound on the probability of each good state gives us that the probability of obtaining any good state is at least $p/(80q)$, or at least $1/160$ (since $q < 2p$). We now want to recover $r$ from a pair $c,d$ such that \[ -\frac{1}{2q} \leq \frac{d}{q} + \frac{r}{q} \left( c - \frac{\{c(p-1)\}_q}{p-1} \right) \leq \frac{1}{2q} \pmod{1}, \] (7.39) i.e., if there is a $d$ such that $$\frac{-r}{2} \leq rc - dq \leq \frac{r}{2}. \quad (6.24)$$ Dividing by $rq$ and rearranging the terms gives $$\left| \frac{c}{q} - \frac{d}{r} \right| \leq \frac{1}{2q}. \quad (6.25)$$ We know $c$ and $q$. Because $q \geq 2n^2$, there is at most one fraction $d/r$ with $r < n$ that satisfies the above inequality. Thus, we can obtain the fraction $d/r$ in lowest terms by rounding $c/q$ to the nearest fraction having a denominator smaller than $n$. This fraction can be found in polynomial time by using a continued fraction expansion of $c/q$, which finds all the best approximations of $c/q$ by fractions [17, Chapter X]. If we have the fraction $d/r$ in lowest terms, and if $d$ happens to be relatively prime to $r$, this will give us $r$. We will now count the number of states $\ket{c, x^k \pmod{n}}$ which enable us to compute $r$ in this way. There are $\phi(r)$ possible values for $d$ relatively prime to $r$, where $\phi$ is Euler's $\phi$ function. Each of these fractions $d/r$ is close to one fraction $c/q$ with $|c/q - d/r| \leq 1/2q$. There are also $r$ possible values for $x^k$, since $r$ is the order of $x$. Thus, there are $r\phi(r)$ states $\ket{c, x^k \pmod{n}}$ which would enable us to obtain $r$. Since each of these states occurs with probability at least $1/3r^2$, we obtain $r$ with probability at least $\phi(r)/3r$. Using the theorem that $\phi(r)/r > k/\log \log r$ for some fixed $k$ [17, Theorem 328], this shows that we find $r$ at least a $k/\log \log r$ fraction of the time, so by repeating this experiment only $O(\log \log r)$ times, we are assured of a high probability of success. Note that in the algorithm for finding the order of $x$, we did not use many of the properties of multiplication $(\bmod\ n)$. In fact, if we have a permutation $f$ mapping the set $\{0, 1, 2, \ldots, n-1\}$ into itself such that its $k$th iterate, $f^{(k)}(a)$, is computable in time polynomial in $\log n$ and $\log k$, the same algorithm will be able to find the order of an element $a$ under $f$, i.e., the minimum $r$ such that $f^{(r)}(a) = a$.

### 7 Discrete log: the general case

For the general case, we first find a smooth number $q$ such that $q$ is close to $p$, i.e., with $p \leq q \leq 2p$ (see Lemma 3.2). Next, we do the same thing as in the easy case, that is, we choose $a$ and $b$ uniformly $(\bmod\ p - 1)$, and then compute $g^a x^{-b} \pmod{p}$. This leaves our machine in the state $$\frac{1}{p-1} \sum_{a=0}^{p-2} \sum_{b=0}^{p-2} \ket{a, b, g^a x^{-b} \pmod{p}}. \quad (7.26)$$ As before, we use the Fourier transform $A_q$ to send $a \rightarrow c$ and $b \rightarrow d \pmod{q}$, with amplitude $\frac{1}{q} \exp(2\pi i (ac + bd)/q)$, giving us the state $$\frac{1}{(p-1)q} \sum_{a,b=0}^{p-2} \sum_{c,d=0}^{q-1} \exp\left(\frac{2\pi i}{q}(ac+bd)\right) \ket{c, d, g^a x^{-b} \pmod{p}}. \quad (7.27)$$
Although Bernstein and Vazirani [5] show that the number of bits of precision needed is never more than the logarithm of the number of computational steps a quantum computer takes, in some algorithms it could conceivably require less. Interesting open questions are whether it is possible to do discrete logarithms or factoring with less than polynomial precision and whether some tradeoff between precision and time is possible.

### 6 Factoring

The algorithm for factoring is similar to the one for the general case of discrete log, only somewhat simpler. I present this algorithm before the general case of discrete log so as to give the three algorithms in this paper in order of increasing complexity. Readers interested in discrete log may skip to the next section. Instead of giving a quantum computer algorithm to factor $n$, we will give a quantum computer algorithm for finding the order of an element $x$ in the multiplicative group $(\text{mod } n)$; that is, the least integer $r$ such that $x^r \equiv 1 \pmod{n}$. There is a randomized reduction from factoring to finding the order of an element [24]. To factor an odd number $n$, given a method for computing the order of an element, we choose a random $x$, find the order $r_x$ of $x$, and compute $\gcd(x^{r_x/2} - 1, n)$. This fails to give a non-trivial divisor of $n$ only if $r_x$ is odd or if $x^{r_x/2} \equiv -1 \pmod{n}$. Using this criterion, it can be shown that the algorithm finds a factor of $n$ with probability at least $1 - 1/2^{k-1}$, where $k$ is the number of distinct odd prime factors of $n$. This scheme will thus work as long as $n$ is not a prime power; however, factoring prime powers can be done efficiently with classical methods. Given $x$ and $n$, to find $r$ such that $x^r \equiv 1 \pmod{n}$, we do the following. First, we find a smooth $q$ with $2n^2 \leq q < 4n^2$. Next, we put our machine in the uniform superposition of states representing numbers $a \pmod{q}$. This leaves our machine in state $$\frac{1}{q^{1/2}} \sum_{a=0}^{q-1} |a\rangle. \quad (6.16)$$ As in the algorithm for discrete log, we will not write $n$, $x$, or $q$ in the state of our machine, because we never change these values. Next, we compute $x^a \pmod{n}$. Since we keep $x$ and $a$ on the tape, this can be done reversibly. This leaves our machine in the state $$\frac{1}{q^{1/2}} \sum_{a=0}^{q-1} |a, x^a \pmod{n}\rangle. \quad (6.17)$$ We then perform our Fourier transform $A_q$, mapping $a \rightarrow c$ with amplitude $\frac{1}{q^{1/2}} \exp(2\pi iac/q)$. This leaves our machine in state $$\frac{1}{q} \sum_{a=0}^{q-1} \sum_{c=0}^{q-1} \exp(2\pi iac/q)\, |c, x^a \pmod{n}\rangle. \quad (6.18)$$ Finally, we observe the machine. It would be sufficient to observe solely the value of $c$, but for clarity we will assume that we observe both $c$ and $x^a \pmod{n}$. We now compute the probability that our machine ends in a particular state $|c, x^k \pmod{n}\rangle$, where we may assume $0 \leq k < r$. Summing over all possible ways to reach this state, we find that this probability is $$\left| \frac{1}{q} \sum_{a:\, x^a \equiv x^k} \exp(2\pi iac/q) \right|^2, \quad (6.19)$$ where the sum is over all $a$, $0 \leq a < q$, such that $x^a \equiv x^k \pmod{n}$. Because the order of $x$ is $r$, this sum is equivalently over all $a$ satisfying $a \equiv k \pmod{r}$.
Writing $a = br + k$, we find that the above probability is $$\left| \frac{1}{q} \sum_{b=0}^{\lfloor(q-k-1)/r\rfloor} \exp(2\pi i(br + k)c/q) \right|^2. \quad (6.20)$$ We can ignore the term of $\exp(2\pi ikc/q)$, as it can be factored out of the sum and has magnitude 1. We can also replace $rc$ with $\{rc\}_q$, where $\{rc\}_q$ is the residue which is congruent to $rc \pmod{q}$ and is in the range $-q/2 < \{rc\}_q \leq q/2$. This leaves us with the expression $$\left| \frac{1}{q} \sum_{b=0}^{\lfloor(q-k-1)/r\rfloor} \exp(2\pi ib\{rc\}_q/q) \right|^2. \quad (6.21)$$ We will now show that if $\{rc\}_q$ is small enough, all the amplitudes in this sum will be in nearly the same direction, giving a large probability. If $\{rc\}_q$ is small with respect to $q$, we can use the change of variables $t = b/q$ and approximate this sum with the integral $$\left| \int_0^{1/r} \exp(2\pi i\{rc\}_q t)\,dt \right|^2. \quad (6.22)$$ If $|\{rc\}_q| \leq r/2$, this quantity can be shown to be asymptotically bounded below by $4/(r^2\pi^2)$, and thus at least $1/3r^2$. The exact probabilities as given by Equation (6.21) for an example case are plotted in Figure 1. The probability of seeing a given state $|c, x^k \pmod{n}\rangle$ will thus be at least $1/3r^2$ if $$-\frac{r}{2} \leq \{rc\}_q \leq \frac{r}{2}. \quad (6.23)$$

This leaves the machine in state \[ \frac{1}{(p-1)^2} \sum_{a,b,c,d=0}^{p-2} \exp \left( \frac{2\pi i}{p-1}(ac+bd) \right) |c, d, g^a x^{-b} \bmod p \rangle . \] We now compute the probability that the computation ends with the machine in state $|c, d, y\rangle$ with $y \equiv g^k \bmod p$. This probability is the absolute value of the square of the sum over all ways the machine could produce this state, or \[ \left| \frac{1}{(p-1)^2} \sum_{a,b} \exp \left( \frac{2\pi i}{p-1}(ac + bd) \right) \right|^2 , \] where the sum is over all $a, b$ satisfying $a - rb \equiv k \pmod{p-1}$. This condition arises from the fact that computational paths can only interfere when they give the same $y \equiv g^{a-rb} \equiv g^k \pmod p$. We now substitute the equation $a \equiv k + rb \pmod{p-1}$ in the above exponential. The above sum then reduces to \[ \left| \frac{1}{(p-1)^2} \sum_{b=0}^{p-2} \exp \left( \frac{2\pi i}{p-1}(kc + b(d + rc)) \right) \right|^2 . \] However, if $d + rc \not\equiv 0 \pmod{p-1}$, the above sum is over a set of $(p-1)$st roots of unity evenly spaced around the unit circle, and thus the probability is 0. If $d \equiv -rc$, the above sum is over the same root of unity $p-1$ times, giving $(p-1)e^{2\pi i kc/(p-1)}$, so the probability is $1/(p-1)^2$. We can check that these probabilities add up to one by counting that there are $(p-1)^2$ states $|c, -rc, y\rangle$, since there are $p-1$ choices of $c \pmod{p-1}$ and $p-1$ choices of $y \not\equiv 0 \pmod p$. Our computation thus produces a random $c \pmod{p-1}$ and the corresponding $d \equiv -rc \pmod{p-1}$. If $c$ and $p-1$ are relatively prime, we can find $r$ by division. Because we are choosing among all possible $c$'s with equal probability, the chance that $c$ and $p-1$ are relatively prime is $\phi(p-1)/(p-1)$, where $\phi$ is the Euler $\phi$-function. It is easy to check that $\phi(p-1)/(p-1) > 1/\log(p)$. (Actually, from [17, Theorem 328], $\liminf \phi(p-1)/(p-1) \approx e^{-\gamma}/\log \log p$.) Thus we only need a number of experiments that is polynomial in $\log p$ to obtain $r$ with high probability.
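Both analyses above lend themselves to small classical sanity checks. The first sketch below is our own illustration (a brute-force classical simulation, exponential in the input size, not a quantum implementation): for a tiny smooth case it prepares the uniform superposition over $(a,b)$, attaches $g^a x^{-b} \bmod p$, applies the Fourier transform $A_{p-1}$ to both registers by direct summation of path amplitudes, and confirms that all probability mass falls on pairs with $d \equiv -rc \pmod{p-1}$.

```python
import cmath
from collections import defaultdict

p, g, r = 11, 2, 7           # toy instance; x = g^r mod p is what the algorithm sees
x = pow(g, r, p)
m = p - 1                    # amplitudes use (p-1)-st roots of unity

# amp[(c, d, y)] = sum over paths (a, b) of exp(2*pi*i*(a*c + b*d)/(p-1)) / (p-1)^2
amp = defaultdict(complex)
for a in range(m):
    for b in range(m):
        y = (pow(g, a, p) * pow(x, -b, p)) % p    # g^a x^{-b} mod p
        for c in range(m):
            w_ac = cmath.exp(2j * cmath.pi * a * c / m)
            for d in range(m):
                amp[(c, d, y)] += w_ac * cmath.exp(2j * cmath.pi * b * d / m) / m**2

# Every state with nonzero probability should satisfy d = -r*c (mod p-1).
for (c, d, y), z in amp.items():
    if abs(z) ** 2 > 1e-12:
        assert (d + r * c) % m == 0
print("all probability mass lies on d = -r c (mod p-1)")
```

Second, the randomized reduction from factoring to order finding described in Section 6 is purely classical post-processing; in this sketch (again ours) the quantum order-finding subroutine is replaced by brute force, so it only runs for very small $n$.

```python
import math
import random

def order(x, n):
    """Brute-force order of x modulo n; stands in for the quantum subroutine."""
    r, y = 1, x % n
    while y != 1:
        y = (y * x) % n
        r += 1
    return r

def factor_via_order(n, tries=20):
    """Pick random x, find its order r, and try gcd(x^(r/2) - 1, n)."""
    for _ in range(tries):
        x = random.randrange(2, n)
        g = math.gcd(x, n)
        if g > 1:
            return g                   # lucky draw: x already shares a factor with n
        r = order(x, n)
        if r % 2 == 0:
            y = pow(x, r // 2, n)
            if y != n - 1:             # the reduction fails only if x^(r/2) = -1 (mod n)
                g = math.gcd(y - 1, n)
                if 1 < g < n:
                    return g
    return None

print(factor_via_order(15), factor_via_order(21))   # e.g. 3 or 5, 3 or 7
```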
In fact, we can find a set of $c$'s such that at least one is relatively prime to every prime divisor of $p-1$ by repeating the experiment only an expected constant number of times. This would also give us enough information to obtain $r$.

### 5 A note on precision

The number of bits of precision needed in the amplitude of quantum mechanical computers could be a barrier to practicality. The generally accepted theoretical dividing line between feasible and infeasible is that polynomial precision (i.e., a number of bits logarithmic in the problem size) is feasible and that more is infeasible. This is because on a quantum computer the phase angle would need to be obtained through some physical device, and constructing such devices with better than polynomial precision seems unquestionably impractical. In fact, even polynomial precision may prove to be impractical; however, using this as the theoretical dividing line results in nice theoretical properties. We thus need to show that the computations in the previous section need use only polynomial precision in the amplitudes. The very act of writing down the expression $\exp(2\pi i ac/(p-1))$ seems to imply that we need exponential precision, as this phase angle is exponentially precise. Fortunately, this is not the case. Consider the same matrix $A_{p-1}$ with every term $\exp(2\pi i ac/(p-1))$ replaced by $\exp(2\pi i ac/(p-1) \pm \pi i/20)$. Each positive case, i.e., one resulting in $d \equiv -rc$, will still occur with nearly as large a probability as before; instead of adding $p-1$ amplitudes which have exactly the same phase angle, we add $p-1$ amplitudes which have nearly the same phase angle, and thus the size of the sum will only be reduced by a constant factor. The algorithm will thus give a $(c, d)$ with $d \equiv -rc$ with constant probability (instead of probability 1). Recall that we obtain the matrix $A_{p-1}$ by multiplying at most $\log p$ matrices $A_{q_i}$. Further, each entry in $A_{p-1}$ is the product of at most $\log p$ terms. Suppose that each phase angle were off by at most $\epsilon/\log p$ in the $A_{q_i}$'s. Then in the product, each phase angle would be off by at most $\epsilon$, which is enough to perform the computation with constant probability of success. A similar argument shows that the magnitude of the amplitudes in the $A_{q_i}$ can be off by a polynomial fraction. Similar arguments hold for the general case of discrete log and for factoring to show that we need only polynomial precision for the amplitudes in these cases as well. We still need to show how to construct $A_{q_i}$ from constant size unitary matrices having limited precision. The arguments are much the same as above, but we will not give them in this paper because, in fact, Bennett et al. [4] have shown that it is sufficient to use polynomial precision for any computation on a quantum Turing machine to obtain the answer with high probability. Since precision could easily be the limiting factor for practicality of quantum computation, it might be advisable to investigate how much precision is actually needed for quantum algorithms.
We can now define $C$ and $D$: \[ C(a, b) = \begin{cases} 0 & \text{if } \alpha_2 \neq \beta_1 \\ \frac{1}{q_1^{1/2}} \omega^{\alpha_1 \beta_2 q_2 + \beta_1 \beta_2 (u+1)} & \text{otherwise}, \end{cases} \] (3.7) and \[ D(b, c) = \begin{cases} 0 & \text{if } \beta_2 \neq \gamma_2 \\ \frac{1}{q_2^{1/2}} \omega^{\beta_1 \gamma_1 q_1 - \beta_1 \beta_2 u} & \text{otherwise}. \end{cases} \] (3.8) It is easy to see that $CD(a, c) = C(a, b)D(b, c)$ where $b = \alpha_2 q_1 + \gamma_2$, since we need $\alpha_2 = \beta_1$ and $\beta_2 = \gamma_2$ to ensure non-zero entries in $C(a, b)$ and $D(b, c)$. Now, \[ CD(a, c) = \frac{1}{q_1^{1/2} q_2^{1/2}} \omega^{\alpha_1 \beta_2 q_2 + \beta_1 \beta_2 (u+1) + \beta_1 \gamma_1 q_1 - \beta_1 \beta_2 u} \] \[ = \frac{1}{q_1^{1/2} q_2^{1/2}} \omega^{\alpha_1 \gamma_2 q_2 + \alpha_2 \gamma_1 q_1 + \alpha_2 \gamma_2} \] \[ = \frac{1}{q_1^{1/2} q_2^{1/2}} \omega^{(\alpha_1 q_2 + \alpha_2)(\gamma_1 q_1 + \gamma_2)} \] \[ = \frac{1}{q_1^{1/2} q_2^{1/2}} \omega^{ac} \] (3.9) so $CD(a, c) = A_q(a, c)$. We will now sketch how to rearrange the rows and columns of $C$ to get the matrix $\bigoplus_{q_2} A_{q_1}$. The matrix $C$ can be put in block-diagonal form where the blocks are indexed by $\alpha_2 = \beta_1$ (since all entries with $\alpha_2 \neq \beta_1$ are 0). Let $u + 1 \equiv tq_2 \pmod{q}$. Within a given block $\alpha_2 = \beta_1$, the entries look like \[ \sqrt{q_1}\, C(a, b) = \omega^{\alpha_1 \beta_2 q_2 + \beta_1 \beta_2 (u+1)} = \exp(2\pi i (\alpha_1 \beta_2 + \beta_1 \beta_2 t)q_2/q) = \exp(2\pi i (\alpha_1 + \alpha_2 t)\beta_2/q_1). \] (3.10) Thus, if we rearrange the rows within this block so that they are indexed by $\alpha' \equiv \alpha_1 + \alpha_2 t \pmod{q_1}$, we obtain the transformation $\alpha' \rightarrow \beta_2$ with amplitude $\frac{1}{q_1^{1/2}} \exp(2\pi i \alpha' \beta_2/q_1)$; that is, the transformation given by the unitary matrix with the $(\alpha', \beta_2)$ entry equal to $\frac{1}{q_1^{1/2}} \exp(2\pi i \alpha' \beta_2/q_1)$, which is $A_{q_1}$. The matrix $D$ can similarly be rearranged to obtain the matrix $\bigoplus_{q_1} A_{q_2}$. We also need to show how to find a smooth $q$ that lies between $n$ and $2n$ in polynomial time. There are actually smooth $q$ much closer to $n$ than this, but this is all we need. It is not known how to find smooth numbers very close to $n$ in polynomial time.

**Lemma 3.2** Given $n$, there is a polynomial-time algorithm to find a number $q$ with $n \leq q < 2n$ such that no prime power larger than $c \log q$ divides $q$, for some constant $c$ independent of $n$.

**Proof:** To find such a $q$, multiply the primes $2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdots p_k$ until the product is larger than $n$. Now, if this product is larger than $2n$, divide it by the largest prime that keeps the number larger than $n$. This produces the desired $q$. There is always a prime between $m$ and $2m$ [17, Theorem 418], so $n \leq q < 2n$. The prime number theorem [17, Theorem 6] and some calculation show that the largest prime dividing $q$ is of size $O(\log n)$. Note that if we instead use Coppersmith's transformation based on the $2^k$th roots of unity [8], we set $q = 2^k$ where $k = \lceil \log_2 n \rceil + 1$.
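Both constructions in this section are easy to check numerically. The following sketch is our own illustration (the small moduli are arbitrary choices): it builds $C$, $D$, and $A_q$ directly from definitions (3.7) and (3.8) for a small coprime pair $(q_1, q_2)$ and verifies that $A_q = CD$ and that $A_q$ is unitary.

```python
import numpy as np

q1, q2 = 3, 5                       # small coprime pair
q = q1 * q2
w = np.exp(2j * np.pi / q)          # primitive q-th root of unity

# u = 0 (mod q1) and u = -1 (mod q2); found by brute force (Chinese remainder theorem)
u = next(v for v in range(q) if v % q1 == 0 and v % q2 == q2 - 1)

A = np.array([[w ** (a * c) for c in range(q)] for a in range(q)]) / np.sqrt(q)

C = np.zeros((q, q), dtype=complex)
D = np.zeros((q, q), dtype=complex)
for a in range(q):
    a1, a2 = divmod(a, q2)          # a = alpha1*q2 + alpha2
    for b in range(q):
        b1, b2 = divmod(b, q1)      # b = beta1*q1 + beta2
        if a2 == b1:                # C vanishes unless alpha2 = beta1, per (3.7)
            C[a, b] = w ** (a1 * b2 * q2 + b1 * b2 * (u + 1)) / np.sqrt(q1)
for b in range(q):
    b1, b2 = divmod(b, q1)
    for c in range(q):
        c1, c2 = divmod(c, q1)      # c = gamma1*q1 + gamma2
        if b2 == c2:                # D vanishes unless beta2 = gamma2, per (3.8)
            D[b, c] = w ** (b1 * c1 * q1 - b1 * b2 * u) / np.sqrt(q2)

assert np.allclose(C @ D, A)                      # A_q = CD, as in (3.9)
assert np.allclose(A.conj().T @ A, np.eye(q))     # and A_q is unitary
print("A_q = C D verified for q =", q)
```

The proof of Lemma 3.2 is likewise constructive; this is our sketch of it under the stated assumptions (a naive prime generator suffices, since every prime used is $O(\log n)$):

```python
def primes():
    """Naive prime generator; adequate here because all primes used are small."""
    found, p = [], 2
    while True:
        if all(p % f for f in found):
            found.append(p)
            yield p
        p += 1

def smooth_q(n):
    """Find q with n <= q < 2n and only small prime-power divisors, following
    Lemma 3.2: multiply 2*3*5*... until the product exceeds n, then, if the
    product is >= 2n, divide out the largest prime that keeps it at least n."""
    q, used = 1, []
    for p in primes():
        q *= p
        used.append(p)
        if q > n:
            break
    if q >= 2 * n:
        for p in reversed(used):
            if q // p >= n:
                q //= p
                break
    return q

n = 10**6
q = smooth_q(n)
print(n <= q < 2 * n, q)   # True 1385670 (= 2*3*5*11*13*17*19)
```

By Bertrand's postulate (Theorem 418 cited above), some prime divisor always lands the quotient in $[n, 2n)$, which is why a single division suffices.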
### 4 Discrete log: the easy case

The discrete log problem is: given a prime $p$, a generator $g$ of the multiplicative group $(\mathbb{Z}/p\mathbb{Z})^\times$ and an $x \pmod{p}$, find an $r$ such that $g^r \equiv x \pmod{p}$. We will start by giving a polynomial-time algorithm for discrete log on a quantum computer in the case that $p - 1$ is smooth. This algorithm is analogous to the algorithm in Simon's paper [29], with the group $\mathbb{Z}_2^k$ replaced by the group $\mathbb{Z}_{p-1}$. The smooth case is not in itself an interesting accomplishment, since there are already polynomial time algorithms for classical computers in this case [25]; however, explaining this case is easier than explaining either the general case of discrete log or the factoring algorithm, and as the three algorithms are similar, this example will illuminate how the more complicated algorithms work. We will start our algorithm with $x$, $g$ and $p$ on the tape (i.e., in the quantum memory of our machine). We are trying to compute $r$ such that $g^r \equiv x \pmod{p}$. Since we will never delete them, $x$, $g$, and $p$ are constants, and we will specify a state of our machine by the other contents of the tape. The algorithm starts out by "choosing" numbers $a$ and $b \pmod{p-1}$ uniformly, so the state of the machine after this step is \[ \frac{1}{p-1} \sum_{a=0}^{p-2} \sum_{b=0}^{p-2} |a, b\rangle. \] (4.11) The algorithm next computes $g^a x^{-b} \pmod{p}$ reversibly, so we must keep the values $a$ and $b$ on the tape. The state of the machine is now \[ \frac{1}{p-1} \sum_{a=0}^{p-2} \sum_{b=0}^{p-2} |a, b, g^a x^{-b} \pmod{p}\rangle. \] (4.12) What we do now is use the transformation $A_{p-1}$ to map $a \rightarrow c$ with amplitude $\frac{1}{(p-1)^{1/2}} \exp(2\pi i ac/(p-1))$ and $b \rightarrow d$ with amplitude $\frac{1}{(p-1)^{1/2}} \exp(2\pi ibd/(p-1))$. As was discussed in the previous section, this is a unitary transformation, and since $p-1$ is smooth it can be accomplished in polynomial time on a quantum machine.

From results on reversible computation [3, 19, 31], we can compute any polynomial time function $f(a)$ as long as we keep the input, $a$, on the machine. To erase $a$ and replace it with $f(a)$ we need in addition that $f$ is one-to-one and that $a$ is computable in polynomial time from $f(a)$; i.e., that both $f$ and $f^{-1}$ are polynomial-time computable.

Fact 2: Any polynomial size unitary matrix can be approximated using a polynomial number of elementary unitary transformations [10, 5, 33] and thus can be approximated in polynomial time on a quantum computer. Further, this approximation is good enough so as to introduce at most a bounded probability of error into the results of the computation.

### 3 Building unitary transformations

Since quantum computation deals with unitary transformations, it is helpful to be able to build certain useful unitary transformations. In this section we give some techniques for constructing unitary transformations on quantum machines, which will result in our showing how to construct one particular unitary transformation in polynomial time. These transformations will generally be given as matrices, with both rows and columns indexed by states. These states will correspond to representations of integers on the computer; in particular, the rows and columns will be indexed beginning with 0 unless otherwise specified.
A tool we will use repeatedly in this paper is the following unitary transformation, which is a discrete Fourier transform. Consider a number $a$ with $0 \leq a < q$ for some $q$ where the number of bits of $q$ is polynomial. We will perform the transformation that takes the state $|a\rangle$ to the state \[ \frac{1}{q^{1/2}} \sum_{b=0}^{q-1} |b\rangle \exp(2\pi i ab/q). \] That is, we apply the unitary matrix whose $(a, b)$'th entry is $\frac{1}{q^{1/2}} \exp(2\pi i ab/q)$. This transformation is at the heart of our algorithms, and we will call this matrix $A_q$. Since we will use $A_q$ for $q$ of exponential size, we must show how this transformation can be done in polynomial time. In fact, we will only be able to do this for *smooth numbers* $q$, that is, ones with small prime factors. In this paper, we will deal with smooth numbers $q$ which contain no prime power factor that is larger than $(\log q)^c$ for some fixed $c$. It is also possible to do this transformation in polynomial time for all smooth numbers $q$; Coppersmith shows how to do this for $q = 2^k$ using what is essentially the fast Fourier transform, and that this substantially reduces the number of operations required to factor [8]. If we know a factorization $q = q_1 q_2 q_3 \cdots q_k$, where $\gcd(q_i, q_j) = 1$ and where $k$ and all of the $q_i$ are of polynomial size, we will show how to build the transformation $A_q$ in polynomial time by composing the $A_{q_i}$. For this, we first need a lemma on quantum computation.

**Lemma 3.1** Suppose the matrix $B$ is a block-diagonal $mn \times mn$ unitary matrix composed of $n$ identical unitary $m \times m$ matrices $B'$ along the diagonal and 0's everywhere else. Suppose further that the state transformation $B'$ can be performed in time $T(B')$ on a quantum Turing machine. Then the transformation $B$ can be performed in $T(B') + (\log mn)^c$ time on a quantum Turing machine, where $c$ is a constant. We will call this matrix $B$ the direct sum of $n$ copies of $B'$ and use the notation $B = \bigoplus_n B'$. This matrix $B$ is the tensor product of $B'$ and $I_n$, where $I_n$ is the $n \times n$ identity matrix.

Proof: Suppose that we have a number $a$ on our tape. We can reversibly compute $\alpha_1$ and $\alpha_2$ from $a$, where $a = m\alpha_1 + \alpha_2$. This computation erases $a$ from our tape and replaces it with $\alpha_1$ and $\alpha_2$. Now $\alpha_1$ tells in which block the row $a$ is contained, and $\alpha_2$ tells which row of the matrix within that block is the row $a$. We can then apply $B'$ to $\alpha_2$ to obtain $\beta_2$ (erasing $\alpha_2$ in the process). Now, combining $\alpha_1$ and $\beta_2$ to obtain $b = m\alpha_1 + \beta_2$ gives the result of $B$ applied to $a$ (with the right amplitude). The computation of $B'$ takes $T(B')$ time, and the rest of the computation is polynomial in $\log m + \log n$.

We now show how to obtain $A_q$ for smooth $q$. We will decompose $A_q$ into a product of a polynomial number of unitary transformations, all of which are performable in polynomial time; this enables us to construct $A_q$ in polynomial time. Suppose that we have $q = q_1 q_2$ with $\gcd(q_1, q_2) = 1$.
What we will do is represent $A_q = CD$, where by rearranging the rows and columns of $C$ we obtain $\bigoplus_{q_2} A_{q_1}$ and by rearranging the rows and columns of $D$ we obtain $\bigoplus_{q_1} A_{q_2}$. As long as these rearrangements of the rows and columns of $C$ and $D$ are performable in polynomial time (i.e., given row $r$, we can find in polynomial time the row $r'$ to which it is taken) and the inverse operations are also performable in polynomial time, then by using the lemma above and recursion we can obtain a polynomial-time way to perform $A_q$ on a quantum computer. We now need to define $C$ and $D$ and check that $A_q = CD$. To define $C$ and $D$ we need some preliminary definitions. Recall that $q = q_1 q_2$ with $q_1$ and $q_2$ relatively prime. Let $\omega = \exp(2\pi i / q)$. Let $u$ be the number $(\bmod\ q)$ such that $u \equiv 0 \pmod{q_1}$ and $u \equiv -1 \pmod{q_2}$. Such a number exists by the Chinese remainder theorem, and can be computed in polynomial time. We will decompose the row and column indices $a$, $b$ and $c$ as follows: $a = \alpha_1 q_2 + \alpha_2$, $b = \beta_1 q_1 + \beta_2$, and $c = \gamma_1 q_1 + \gamma_2$. Note the asymmetry in the definitions of $a$, $b$, and $c$.

Currently, nobody knows how to build a quantum computer, although it seems as though it could be possible within the laws of quantum mechanics. Some suggestions have been made as to possible designs for such computers [30, 22, 23, 12], but there will be substantial difficulty in building any of these [18, 32]. Even if it is possible to build small quantum computers, scaling up to machines large enough to do interesting computations could present fundamental difficulties. It is hoped that this paper will stimulate research on whether it is feasible to actually construct a quantum computer. Even if no quantum computer is ever built, this research does illuminate the problem of simulating quantum mechanics on a classical computer. Any method of doing this for an arbitrary Hamiltonian would necessarily be able to simulate a quantum computer. Thus, any general method for simulating quantum mechanics with at most a polynomial slowdown would lead to a polynomial algorithm for factoring.

### 2 Quantum computation

In this section we will give a brief introduction to quantum computation, emphasizing the properties that we will use. For a more complete overview I refer the reader to Simon's paper in these proceedings [29] or to earlier papers on quantum computational complexity theory [5, 33]. In quantum physics, an experiment behaves as if it proceeds down all possible paths simultaneously. Each of these paths has a complex probability amplitude determined by the physics of the experiment. The probability of any particular outcome of the experiment is proportional to the square of the absolute value of the sum of the amplitudes of all the paths leading to that outcome. In order to sum over a set of paths, the outcomes have to be identical in all respects, i.e., the universe must be in the same state. A quantum computer behaves in much the same way. The computation proceeds down all possible paths at once, and each path has associated with it a complex amplitude. To determine the probability of any final state of the machine, we add the amplitudes of all the paths which reach that final state, and then square the absolute value of this sum.
An equivalent way of looking at this process is to imagine that the machine is in some superposition of states at every step of the computation. We will represent this superposition of states as $$\sum_i a_i |S_i\rangle,$$ where the amplitudes $a_i$ are complex numbers such that $\sum_i |a_i|^2 = 1$ and each $|S_i\rangle$ is a basis state of the machine; in a quantum Turing machine, a basis state is defined by what is written on the tape and by the position and state of the head. In a quantum circuit, a basis state is defined by the values of the signals on all the wires at some level of the circuit. If the machine is examined at a particular step, the probability of seeing basis state $|S_j\rangle$ is $|a_j|^2$; however, by the Heisenberg uncertainty principle, looking at the machine during the computation will disturb the rest of the computation. The laws of quantum mechanics only permit unitary transformations of the state. A unitary matrix is one whose conjugate transpose is equal to its inverse, and requiring state transformations to be represented by unitary matrices ensures that the probabilities of obtaining all possible outcomes will add up to one. Further, the definitions of quantum Turing machine and quantum circuit only allow local unitary transformations, that is, unitary transformations on a fixed number of bits. Perhaps an example will be informative at this point. Suppose our machine is in the superposition of states $$\frac{1}{\sqrt{2}}\,|000\rangle + \frac{1}{2}\,|100\rangle - \frac{1}{2}\,|110\rangle \quad (2.2)$$ and we apply the unitary transformation $$\begin{array}{c|cccc} & 00 & 01 & 10 & 11 \\ \hline 00 & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ 01 & \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ 10 & \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ 11 & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} \end{array} \quad (2.3)$$ to the last two bits of our state. That is, we multiply the last two bits of the components of the vector (2.2) by the matrix (2.3). The machine will then go to the superposition of states $$\frac{1}{2\sqrt{2}} (|000\rangle + |001\rangle + |010\rangle + |011\rangle) + \frac{1}{2} |101\rangle + \frac{1}{2} |111\rangle. \quad (2.4)$$ Notice that the result would have been different had we started with the superposition of states $$\frac{i}{\sqrt{2}}\,|000\rangle + \frac{1}{2}\,|100\rangle + \frac{1}{2}\,|110\rangle, \quad (2.5)$$ which has the same probabilities of being in any particular configuration if it is observed. We now give certain properties of quantum computation that will be useful. These facts are not immediately apparent from the definition of quantum Turing machine or quantum circuit, and they are very useful for constructing algorithms for quantum machines.

Fact 1: A deterministic computation is performable on a quantum computer if and only if it is reversible.

These are problems which may require the search of an exponential size space to find the solution, but for which the solution, once found, may be verified in polynomial time (possibly with a polynomial amount of additional supporting evidence). We will also discuss two other traditional complexity classes. One is BPP, which are problems which can be solved with high probability in polynomial time, given access to a random number generator.
The other is $P^{\#P}$, which are those problems which could be solved in polynomial time if sums of exponentially many terms could be computed efficiently (where these sums must satisfy the requirement that each term is computable in polynomial time). These classes are related as follows: \[ \mathrm{P} \subseteq \mathrm{BPP}, \qquad \mathrm{NP} \subseteq P^{\#P} \subseteq \mathrm{PSPACE}. \] The relationship of BPP and NP is not known. The question of whether using quantum mechanics in a computer allows one to obtain more computational power has not yet been satisfactorily answered. This question was addressed in [11, 6, 7], but it was not shown how to solve any problem in quantum polynomial time that was not known to be solvable in BPP. Recent work on this problem was stimulated by Bernstein and Vazirani's paper [5], which laid the foundations of quantum computational complexity theory. One of the results contained in that paper was an oracle problem (a problem involving a "black box" subroutine, i.e., a function that the computer is allowed to perform, but for which no code is accessible) which can be done in polynomial time on a quantum Turing machine but requires super-polynomial time on a classical computer. This was the first indication, other than the fact that nobody knew how to simulate a quantum computer on a classical computer without an exponential slowdown, that quantum computation might obtain a greater than polynomial speedup over classical computation augmented with a random number generator. This result was improved by Simon [29], who gave a much simpler construction of an oracle problem which takes polynomial time on a quantum computer and requires exponential time on a classical computer. Indeed, by viewing Simon's oracle as a subroutine, this result becomes a promise problem which takes polynomial time on a quantum computer and looks as if it would be very difficult on a classical computer (a promise problem is one where the input is guaranteed to satisfy some condition). The algorithm for the "easy case" of discrete log given in this paper is directly analogous to Simon's algorithm, with the group $Z_2^k$ replaced by the group $Z_{p-1}$; I was only able to discover this algorithm after seeing Simon's paper. In another result in Bernstein and Vazirani's paper, a particular class of quantum Turing machine was rigorously defined and a universal quantum Turing machine was given which could simulate any other quantum Turing machine of this class. Unfortunately, it was not clear whether these quantum Turing machines could simulate other classes of quantum Turing machines, so this result was not entirely satisfactory. Yao [33] has remedied the situation by showing that quantum Turing machines can simulate, and be simulated by, uniform families of polynomial size quantum circuits, with at most polynomial slowdown. He has further defined quantum Turing machines with $k$ heads and shown that these machines can be simulated with a slowdown of a factor of $2^k$. This seems to show that the class of problems which can be solved in polynomial time on one of these machines, possibly with a bounded probability $\varepsilon < 1/3$ of error, is reasonably robust. This class is called BQP in analogy to the classical complexity class BPP, which contains those problems which can be solved with a bounded probability of error on a probabilistic Turing machine. This class BQP can be considered the class of problems that are efficiently solvable on a quantum Turing machine.
Since \( BQP \subseteq P\#P \subseteq PSPACE \) [5], any non-relativized proof that BQP is strictly larger than BPP would imply the structural complexity result \( BPP \subsetneq PSPACE \), which is not yet proven. In view of this difficulty, several approaches come to mind; one is showing that \( BQP \subseteq BPP \) would lead to a collapse of classical complexity classes which are believed to be different. A second approach is to prove results relative to an oracle. In Bennett et al. [4] it is shown that relative to a random oracle, it is not the case that \( NP \subseteq BQP \). This proof in fact suggests that a quantum computer cannot invert one-way functions, but only proves this for one-way oracles, i.e., "black box" functions given as a subroutine which the quantum computer is not allowed to look inside. Such oracle results have been misleading in the past, most notably in the case of \( IP = PSPACE \) [15, 28]. A third approach, which we take, is to solve in BQP some well-studied problem for which no polynomial time algorithm is known. This shows that the extra power conferred by quantum interference is at least hard to achieve using classical computation.

Both Bernstein and Vazirani [5] and Simon [29] also gave polynomial time algorithms for problems which were not known to be in BPP, but these problems were invented especially for this purpose, although Simon's problem does not appear contrived and could conceivably be useful. Discrete logarithms and integer factoring are two number-theory problems which have been studied extensively but for which no polynomial-time algorithms are known [16, 20, 21, 26]. In fact, these problems are so widely believed to be hard that cryptosystems based on their hardness have been proposed, and the RSA public key cryptosystem [27] is based on the hardness of factoring.

Algorithms for Quantum Computation: Discrete Logarithms and Factoring

Peter W. Shor
AT&T Bell Labs, Room 2D-149
600 Mountain Ave.
Murray Hill, NJ 07974, USA

Abstract

A computer is generally considered to be a universal computational device; i.e., it is believed able to simulate any physical computational device with an increase in computation time of at most a polynomial factor. It is not clear whether this is still true when quantum mechanics is taken into consideration. Several researchers, starting with David Deutsch, have developed models for quantum mechanical computers and have investigated their computational properties. This paper gives Las Vegas algorithms for finding discrete logarithms and factoring integers on a quantum computer that take a number of steps which is polynomial in the input size, e.g., the number of digits of the integer to be factored. These two problems are generally considered hard on a classical computer and have been used as the basis of several proposed cryptosystems. (We thus give the first examples of quantum cryptanalysis.)

1 Introduction

Since the discovery of quantum mechanics, people have found the behavior of the laws of probability in quantum mechanics counterintuitive. Because of this behavior, quantum mechanical phenomena behave quite differently than the phenomena of classical physics that we are used to. Feynman seems to have been the first to ask what effect this has on computation [13, 14]. He gave arguments as to why this behavior might make it intrinsically computationally expensive to simulate quantum mechanics on a classical (or von Neumann) computer.
He also suggested the possibility of using a computer based on quantum mechanical principles to avoid this problem, thus implicitly asking the converse question: by using quantum mechanics in a computer, can you compute more efficiently than on a classical computer? Other early work in the field of quantum mechanics and computing was done by Benioff [1, 2]. Although he did not ask whether quantum mechanics conferred extra power to computation, he did show that a Turing machine could be simulated by the reversible unitary evolution of a quantum process, which is a necessary prerequisite for quantum computation. Deutsch [9, 10] was the first to give an explicit model of quantum computation. He defined both quantum Turing machines and quantum circuits and investigated some of their properties.

The next part of this paper discusses how quantum computation relates to classical complexity classes. We will thus first give a brief intuitive discussion of complexity classes for those readers who do not have this background. There are generally two resources which limit the ability of computers to solve large problems: time and space (i.e., memory). The field of analysis of algorithms considers the asymptotic demands that algorithms make for these resources as a function of the problem size. Theoretical computer scientists generally classify algorithms as efficient when the number of steps of the algorithms grows as a polynomial in the size of the input. The class of problems which can be solved by efficient algorithms is known as P. This classification has several nice properties. For one thing, it does a reasonable job of reflecting the performance of algorithms in practice (although an algorithm whose running time is the tenth power of the input size, say, is not truly efficient). For another, this classification is nice theoretically, as different reasonable machine models produce the same class P. We will see this behavior reappear in quantum computation, where different models for quantum machines will vary in running times by no more than polynomial factors.

There are also other computational complexity classes discussed in this paper. One of these is PSPACE, which are those problems which can be solved with an amount of memory polynomial in the input size. Another important complexity class is NP, which intuitively is the class of exponential search problems.
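The worked two-bit transformation example given earlier is easy to check numerically. The following sketch is our addition, not part of the paper: it stores the amplitude of basis state $|abc\rangle$ at index $4a+2b+c$ and applies the $4\times 4$ matrix to the last two bits via a Kronecker product (NumPy assumed).

```python
import numpy as np

# The 4x4 transformation from the example (rows/columns ordered 00, 01, 10, 11).
M = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]])
assert np.allclose(M @ M.T, np.eye(4))  # unitary, as the text requires

# Initial state (1/sqrt(2))|000> + (1/2)|100> - (1/2)|110>.
psi = np.zeros(8)
psi[0b000] = 1 / np.sqrt(2)
psi[0b100] = 0.5
psi[0b110] = -0.5

# Identity on the first bit, M on the last two bits.
U = np.kron(np.eye(2), M)
out = U @ psi

# The stated result: 1/(2*sqrt(2)) on |000>..|011>, 1/2 on |101> and |111>.
expected = np.zeros(8)
expected[0b000:0b100] = 1 / (2 * np.sqrt(2))
expected[0b101] = 0.5
expected[0b111] = 0.5
assert np.allclose(out, expected)
```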
Intercepting Mobile Communications: The Insecurity of 802.11

Nikita Borisov, UC Berkeley, email@example.com
Ian Goldberg*, Zero-Knowledge Systems, firstname.lastname@example.org
David Wagner, UC Berkeley, email@example.com

ABSTRACT

The 802.11 standard for wireless networks includes a Wired Equivalent Privacy (WEP) protocol, used to protect link-layer communications from eavesdropping and other attacks. We have discovered several serious security flaws in the protocol, stemming from misapplication of cryptographic primitives. The flaws lead to a number of practical attacks that demonstrate that WEP fails to achieve its security goals. In this paper, we discuss in detail each of the flaws, the underlying security principle violations, and the ensuing attacks.

## 1. INTRODUCTION

In recent years, the proliferation of laptop computers and PDA's has caused an increase in the range of places people perform computing. At the same time, network connectivity is becoming an increasingly integral part of computing environments. As a result, wireless networks of various kinds have gained much popularity. But with the added convenience of wireless access come new problems, not the least of which are heightened security concerns. When transmissions are broadcast over radio waves, interception and masquerading become trivial to anyone with a radio, and so there is a need to employ additional mechanisms to protect the communications.

The 802.11 standard [15] for wireless LAN communications introduced the Wired Equivalent Privacy (WEP) protocol in an attempt to address these new problems and bring the security level of wireless systems closer to that of wired ones. The primary goal of WEP is to protect the confidentiality of user data from eavesdropping. WEP is part of an international standard; it has been integrated by manufacturers into their 802.11 hardware and is currently in widespread use.

Unfortunately, WEP falls short of accomplishing its security goals. Despite employing the well-known and believed-secure RC4 [16] cipher, WEP contains several major security flaws. The flaws give rise to a number of attacks, both passive and active, that allow eavesdropping on, and tampering with, wireless transmissions. In this paper, we discuss the flaws that we identified and describe the attacks that ensue.

The following section is devoted to an overview of WEP and the threat models that it is trying to address. Sections 3 and 4 identify particular flaws and the corresponding attacks, and also discuss the security principles that were violated. Section 5 describes potential countermeasures. Section 6 suggests some general lessons that can be derived from the WEP insecurities. Finally, Section 7 offers some conclusions.

## 2. THE WEP PROTOCOL

The Wired Equivalent Privacy protocol is used in 802.11 networks to protect link-level data during wireless transmission. It is described in detail in the 802.11 standard [15]; we reproduce a brief description to enable the following discussion of its properties. WEP relies on a secret key $k$ shared between the communicating parties to protect the body of a transmitted frame of data. Encryption of a frame proceeds as follows:

**Checksumming:** First, we compute an integrity checksum $c(M)$ on the message $M$. We concatenate the two to obtain a plaintext $P = \langle M, c(M) \rangle$, which will be used as input to the second stage. Note that $c(M)$, and thus $P$, does not depend on the key $k$.

**Encryption:** In the second stage, we encrypt the plaintext $P$ derived above using RC4.
We choose an initialization vector (IV) $v$. The RC4 algorithm generates a keystream—i.e., a long sequence of pseudorandom bytes—as a function of the IV $v$ and the key $k$. This keystream is denoted by $\text{RC4}(v, k)$. Then, we exclusive-or (XOR, denoted by $\oplus$) the plaintext with the keystream to obtain the ciphertext: $$C = P \oplus \text{RC4}(v, k).$$

**Transmission:** Finally, we transmit the IV and the ciphertext over the radio link. Symbolically, this may be represented as follows: $$A \rightarrow B : v, (P \oplus \text{RC4}(v, k)) \quad \text{where} \quad P = \langle M, c(M) \rangle.$$ The format of the encrypted frame is also shown pictorially in Figure 1.

---
* The work was done while Ian Goldberg was a student at UC Berkeley.
1 A public description of the alleged RC4 algorithm can be found in [17].

We will consistently use the term *message* (symbolically, $M$) to refer to the initial frame of data to be protected, the term *plaintext* ($P$) to refer to the concatenation of message and checksum as it is presented to the RC4 encryption algorithm, and the term *ciphertext* ($C$) to refer to the encryption of the plaintext as it is transmitted over the radio link.

To decrypt a frame protected by WEP, the recipient simply reverses the encryption process. First, he regenerates the keystream $\text{RC4}(v, k)$ and XORs it against the ciphertext to recover the initial plaintext: $$P' = C \oplus \text{RC4}(v, k)$$ $$= (P \oplus \text{RC4}(v, k)) \oplus \text{RC4}(v, k)$$ $$= P.$$ Next, the recipient verifies the checksum on the decrypted plaintext $P'$ by splitting it into the form $\langle M', c' \rangle$, re-computing the checksum $c(M')$, and checking that it matches the received checksum $c'$. This ensures that only frames with a valid checksum will be accepted by the receiver.

### 2.1 Security Goals

The WEP protocol is intended to enforce three main security goals [15]:

**Confidentiality:** The fundamental goal of WEP is to prevent casual eavesdropping.

**Access control:** A second goal of the protocol is to protect access to a wireless network infrastructure. The 802.11 standard includes an optional feature to discard all packets that are not properly encrypted using WEP, and manufacturers advertise the ability of WEP to provide access control.

**Data integrity:** A related goal is to prevent tampering with transmitted messages; the integrity checksum field is included for this purpose.

In all three cases, the claimed security of the protocol "relies on the difficulty of discovering the secret key through a brute-force attack" [15]. There are actually two classes of WEP implementation: classic WEP, as documented in the standard, and an extended version developed by some vendors to provide larger keys. The WEP standard specifies the use of 40-bit keys, so chosen because of US Government restrictions on the export of technology containing cryptography, which were in effect at the time the protocol was drafted. This key length is short enough to make brute-force attacks practical to individuals and organizations with fairly modest computing resources [3, 8]. However, it is straightforward to extend the protocol to use larger keys, and several equipment manufacturers offer a so-called "128-bit" version (which actually uses 104-bit keys, despite its misleading name). This extension renders brute-force attacks impossible for even the most resourceful of adversaries given today's technology.
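To make the encapsulation concrete, here is a minimal Python sketch of the two stages described above. It is our illustration, not reference code: the RC4 routine is the textbook KSA/PRGA, the per-packet RC4 key is the IV concatenated with the secret key (as WEP does), and the little-endian ICV byte order is an assumption.

```python
import struct
import zlib

def rc4(key: bytes, n: int) -> bytes:
    """Generate n bytes of RC4 keystream for the given key."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(n):                         # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv: bytes, key: bytes, message: bytes) -> bytes:
    """Return the ciphertext C = <M, c(M)> XOR RC4(v, k)."""
    icv = struct.pack("<I", zlib.crc32(message))   # c(M); byte order assumed
    plaintext = message + icv                      # P = <M, c(M)>
    keystream = rc4(iv + key, len(plaintext))      # per-packet RC4 key = v || k
    return bytes(p ^ s for p, s in zip(plaintext, keystream))

# The transmitted frame carries the IV in the clear, then the ciphertext.
iv, key = b"\x00\x00\x01", b"\x0b\xad\xc0\xff\xee"   # 24-bit IV, 40-bit key
frame = iv + wep_encrypt(iv, key, b"hello, wireless world")
```

Decryption is the same XOR with the regenerated keystream, followed by the checksum comparison described above.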
Nonetheless, we will demonstrate that there are shortcut attacks on the system that do not require a brute-force attack on the key, and thus even the 128-bit versions of WEP are not secure.

In the remainder of this paper, we will argue that none of the three security goals are attained. First, we show practical attacks that allow eavesdropping. Then, we show that it is possible to subvert the integrity checksum field and to modify the contents of a transmitted message, violating data integrity. Finally, we demonstrate that our attacks can be extended to inject completely new traffic into the network. A number of these results (particularly the IV reuse weaknesses described in Section 3) have been anticipated in earlier independent work by Simon et al. [19] and by Walker [24]. The serious flaws in the WEP checksum (see Section 4), however, to the best of our knowledge have not been reported before. After our work was completed, Arbaugh et al. found several extensions that may make these weaknesses even more dangerous in practice [2, 1].

### 2.2 Attack Practicality

Before describing the attacks, we would like to discuss the feasibility of mounting them in practice. In addition to the cryptographic considerations discussed in the sections to follow, a common barrier to attacks on communication subsystems is access to the transmitted data. Despite being transmitted over open radio waves, 802.11 traffic requires significant infrastructure to intercept. An attacker needs equipment capable of monitoring 2.4GHz frequencies and understanding the physical layer of the 802.11 protocol; for active attacks, it is also necessary to transmit at the same frequencies. A significant development cost for equipment manufacturers lies in creating technologies that can reliably perform this task.

As such, there might be temptation to dismiss attacks requiring link-layer access as impractical; for instance, this was once established practice in the cellular industry. However, such a position is dangerous. First, it does not safeguard against highly resourceful attackers who have the ability to incur significant time and equipment costs to gain access to data. This limitation is especially dangerous when securing a company's internal wireless network, since corporate espionage can be a highly profitable business. Second, the necessary hardware to monitor and inject 802.11 traffic is readily available to consumers in the form of wireless Ethernet interfaces. All that is needed is to subvert it to monitor and transmit encrypted traffic. We were able to carry out passive attacks using off-the-shelf equipment by modifying driver settings. Active attacks appear to be more difficult, but not beyond reach. The PCMCIA Orinoco cards produced by Lucent allow their firmware to be upgraded; a concerted reverse-engineering effort should be able to produce a modified version that allows injecting arbitrary traffic. The time investment required is non-trivial; however, it is a one-time effort—the rogue firmware can then be posted on a web site or distributed amongst underground circles. Therefore, we believe that it would be prudent to assume that motivated attackers will have full access to the link layer for passive and even active attacks.

Further supporting our position are the WEP documents themselves. They state: "Eavesdropping is a familiar problem to users of other types of wireless technology" [15, p.61].
We will not discuss the difficulties of link layer access further, and focus on cryptographic properties of the attacks.

## 3. THE RISKS OF KEYSTREAM REUSE

WEP provides data confidentiality using a stream cipher called RC4. Stream ciphers operate by expanding a secret key (or, as in the case of WEP, a public IV and a secret key) into an arbitrarily long "keystream" of pseudorandom bits. Encryption is performed by XORing the generated keystream with the plaintext. Decryption consists of generating the identical keystream based on the IV and secret key and XORing it with the ciphertext.

A well-known pitfall of stream ciphers is that encrypting two messages under the same IV and key can reveal information about both messages: If \[ C_1 = P_1 \oplus \text{RC4}(v, k) \] and \[ C_2 = P_2 \oplus \text{RC4}(v, k) \] then \[ C_1 \oplus C_2 = (P_1 \oplus \text{RC4}(v, k)) \oplus (P_2 \oplus \text{RC4}(v, k)) = P_1 \oplus P_2. \] In other words, XORing the two ciphertexts (\( C_1 \) and \( C_2 \)) together causes the keystream to cancel out, and the result is the XOR of the two plaintexts (\( P_1 \oplus P_2 \)).

Thus, keystream reuse can lead to a number of attacks: as a special case, if the plaintext of one of the messages is known, the plaintext of the other is immediately obtainable. More generally, real-world plaintexts often have enough redundancy that one can recover both \( P_1 \) and \( P_2 \) given only \( P_1 \oplus P_2 \); there are known techniques, for example, for solving such plaintext XORs by looking for two English texts that XOR to the given value \( P_1 \oplus P_2 \) [7]. Moreover, if we have \( n \) ciphertexts that all reuse the same keystream, we have what is known as a problem of depth \( n \). Reading traffic in depth becomes easier as \( n \) increases, since the pairwise XOR of every pair of plaintexts can be computed, and many classical techniques are known for solving such problems (e.g., frequency analysis, dragging cribs, and so on) [20, 22].

Note that there are two conditions required for this class of attacks to succeed:

- The availability of ciphertexts where some portion of the keystream is used more than once, and
- Partial knowledge of some of the plaintexts.

To prevent these attacks, WEP uses a per-packet IV to vary the keystream generation process for each frame of data transmitted. WEP generates the keystream \( \text{RC4}(v, k) \) as a function of both the secret key \( k \) (which is the same for all packets) and a public initialization vector \( v \) (which varies for each packet); this way, each packet receives a different keystream. The IV is included in the unencrypted portion of the transmission so that the receiver can know what IV to use when deriving the keystream for decryption. The IV is therefore available to attackers as well\(^2\), but the secret key remains unknown and maintains the security of the keystream. The use of a per-packet IV was intended to prevent keystream reuse attacks. Nonetheless, WEP does not achieve this goal. We describe below several realistic keystream reuse attacks on WEP. First, we discuss how to find instances of keystream reuse; then, we show how to exploit these instances by taking advantage of partial information on how typical plaintexts are expected to be distributed.

**Finding instances of keystream reuse.** One potential cause of keystream reuse comes from improper IV management.
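The cancellation above is easy to reproduce. The following self-contained sketch is ours; any fixed byte string stands in for \( \text{RC4}(v, k) \), since all that matters is that the same keystream encrypts both packets:

```python
# Any repeated keystream plays the role of RC4(v, k) when the IV and key repeat.
ks = bytes(range(16))
P1 = b"attack at dawn!!"
P2 = b"retreat at noon!"
C1 = bytes(p ^ s for p, s in zip(P1, ks))
C2 = bytes(p ^ s for p, s in zip(P2, ks))

# XOR of the ciphertexts: the keystream cancels, leaving P1 XOR P2.
xor_of_ciphertexts = bytes(a ^ b for a, b in zip(C1, C2))
assert xor_of_ciphertexts == bytes(a ^ b for a, b in zip(P1, P2))
```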
Note that, since the shared secret key \( k \) generally changes very rarely, reuse of IV's almost always causes reuse of some of the RC4 keystream. Since IV's are public, duplicate IV's can be easily detected by the attacker. Therefore, any reuse of old IV values exposes the system to keystream reuse attacks. We call such a reuse of an IV value a "collision".

The WEP standard recommends (but does not require) that the IV be changed after every packet. However, it does not say anything else about how to select IV's, and, indeed, some implementations do it poorly. The particular PCMCIA cards that we examined reset the IV to 0 each time they were re-initialized, and then incremented the IV by one for each packet transmitted. These cards re-initialize themselves each time they are inserted into the laptop, which can be expected to happen fairly frequently. Consequently, keystreams corresponding to low-valued IV's were likely to be reused many times during the lifetime of the key.

Even worse, the WEP standard has architectural flaws that expose all WEP implementations — no matter how cautious — to serious risks of keystream reuse. The IV field used by WEP is only 24 bits wide, nearly guaranteeing that the same IV will be reused for multiple messages. A back-of-the-envelope calculation shows that a busy access point sending 1500 byte packets and achieving an average 5Mbps bandwidth (the full transmission rate is 11Mbps) will exhaust the available space in less than half a day. Even for less busy installations, a patient attacker can readily find duplicates. Because the IV length is fixed at 24 bits in the standard, this vulnerability is fundamental: no compliant implementation can avoid it.

Implementation details can make keystream reuse occur even more frequently. An implementation that uses a random 24-bit IV for each packet will be expected to incur collisions after transmitting just 5000 packets\(^3\), which is only a few minutes of transmission. Worse yet, the 802.11 standard does not even require that the IV be changed with every packet, so an implementation could reuse the same IV for all packets without risking non-compliance!

---
\(^2\) Interestingly enough, some marketing literature disregards this fact: one manufacturer advertises 64-bit cipher strength on their products, even though only a 40-bit secret key is used along with a 24-bit public IV.
\(^3\) This is a consequence of the so-called "birthday paradox".

**Exploiting keystream reuse to read encrypted traffic.** Once two encrypted packets that use the same IV are discovered, various methods of attack can be applied to recover the plaintext. If the plaintext of one of the messages is known, it is easy to derive the contents of the other one directly.

There are many ways to obtain plausible candidates for the plaintext. Many fields of IP traffic are predictable, since protocols use well-defined structures in messages, and the contents of messages are frequently predictable. For example, login sequences are quite uniform across many users, and so the contents — for example, the Password: prompt or the welcome message — may be known to the attacker and thus usable in a keystream reuse attack. As another example, it may be possible to recognize a specific shared library being transferred from a networked file system by analyzing traffic patterns and lengths; this would provide a large quantity of known plaintext suitable for use in a keystream reuse attack. There are also other, sneakier, ways to obtain known plaintext.
It is possible to cause known plaintext to be transmitted by, for example, sending IP traffic directly to a mobile host from an Internet host under the attacker's control. The attacker may also send e-mail to users and wait for them to check it over a wireless link. Sending spam e-mail might be a good method of doing this without raising too many alarms.

Sometimes, obtaining known plaintext in this way may be even simpler. One access point we tested would transmit broadcast packets in both encrypted and unencrypted form, when the option to control network access was disabled. In this scenario, an attacker with a conforming 802.11 interface can transmit broadcasts to the access point (they will be accepted, since access control is turned off) and observe their encrypted form as they are re-transmitted. Indeed, this is unavoidable on a subnet that contains a mixture of WEP clients with and without support for encryption: since broadcast packets must be forwarded to all clients, there is no way to avoid this technique for gathering known plaintext.

Finally, we remind the reader that even when known plaintext is not available, some analysis is still possible if an educated guess about the structure of the plaintexts can be made, as noted earlier.

### 3.1 Decryption Dictionaries

Once the plaintext for an intercepted message is obtained, either through analysis of colliding IV's, or through other means, the attacker also learns the value of the keystream used to encrypt the message. It is possible to use this keystream to decrypt any other message that uses the same IV. Over time, the attacker can build a table of the keystreams corresponding to each IV. The full table has modest space requirements—perhaps 1500 bytes for each of the $2^{24}$ possible IV's, or roughly 24 GB—so it is conceivable that a dedicated attacker can, after some amount of effort, accumulate enough data to build a full decryption dictionary, especially when one considers the low frequency with which keys are changed (see Section 3.2). The advantage to the attacker is that, once such a table is available, it becomes possible to immediately decrypt each subsequent ciphertext with very little work.

Of course, the amount of work necessary to build such a dictionary restricts this attack to only the most persistent attackers who are willing to invest time and resources into defeating WEP security. It can be argued that WEP is not designed to protect from such attackers, since a 40-bit key can be discovered through brute-force in a relatively short amount of time with moderate resources [3, 8]. However, manufacturers have already begun to extend WEP to support larger keys, and the dictionary attack is effective regardless of key size. (The size of the dictionary depends not on the size of the key, but only on the size of the IV, which is fixed by the standard at 24 bits.) Further, the dictionary attack can be made more practical by exploiting the behavior of PCMCIA cards that reset the IV to 0 each time they are reinitialized. Since typical use of PCMCIA cards includes reinitialization at least once per day, building a dictionary for only the first few thousand IV's will enable an attacker to decrypt most of the traffic directed towards the access point. In an installation with many 802.11 clients, collisions in the first few thousand IV's will be plentiful.

### 3.2 Key Management

The 802.11 standard does not specify how distribution of keys is to be accomplished.
It relies on an external mechanism to populate a globally-shared array of 4 keys. Each message contains a key identifier field specifying the index in the array of the key being used. The standard also allows for an array that associates a unique key with each mobile station; however, this option is not widely supported. In practice, most installations use a single key for an entire network.

This practice seriously impacts the security of the system, since a secret that is shared among many users cannot stay very well hidden. Some network administrators try to ameliorate this problem by not revealing the secret key to end users, but rather configuring their machines with the key themselves. This, however, yields only a marginal improvement, since the keys are still stored on the users' computers. As anecdotal evidence, we know of a group of graduate students who reverse-engineered the network key merely for the convenience of being able to use unsupported operating systems.

The reuse of a single key by many users also helps make the attacks in this section more practical, since it increases chances of IV collision. The chance of random collisions increases proportionally to the number of users; even worse, PCMCIA cards that reset the IV to 0 each time they are reinitialized will all reuse keystreams corresponding to a small range of low-numbered IV's.

Also, the fact that many users share the same key means that it is difficult to replace compromised key material. Since changing a key requires every single user to reconfigure their wireless network drivers, such updates will be infrequent. In practice, we expect that it may be months, or even longer, between key changes, allowing an attacker more time to analyze the traffic and look for instances of keystream reuse.

### 3.3 Summary

The attacks in this section demonstrate that the use of stream ciphers is dangerous, because the reuse of keystream can have devastating consequences. Any protocol that uses a stream cipher must take special care to ensure that keystream never gets reused. This property can be difficult to enforce. The WEP protocol contains vulnerabilities despite the designers' apparent knowledge of the dangers of keystream reuse attacks. Nor is it the first protocol to fall prey to stream-cipher-based attacks; see, for example, the analysis of an earlier version of the Microsoft PPTP protocol [18]. In light of this, a protocol designer should give careful consideration to the complications that the use of stream ciphers adds to a protocol when choosing an encryption algorithm.

## 4. MESSAGE AUTHENTICATION

The WEP protocol uses an integrity checksum field to ensure that packets do not get modified in transit. The checksum is implemented as a CRC-32 checksum, which is part of the encrypted payload of the packet. We will argue below that a CRC checksum is insufficient to ensure that an attacker cannot tamper with a message: it is not a cryptographically secure authentication code. CRC's are designed to detect random errors in the message; however, they are not resilient against malicious attacks. As we will demonstrate, this vulnerability of CRC is exacerbated by the fact that the message payload is encrypted using a stream cipher.

### 4.1 Message Modification

First, we show that messages may be modified in transit without detection, in violation of the security goals. We use the following property of the WEP checksum:

**PROPERTY 1.** The WEP checksum is a linear function of the message.
By this, we mean that checksumming distributes over the XOR operation, i.e., \( c(x \oplus y) = c(x) \oplus c(y) \) for all choices of \( x \) and \( y \). This is a general property of all CRC checksums.

One consequence of the above property is that it becomes possible to make controlled modifications to a ciphertext without disrupting the checksum. Let's fix our attention on a ciphertext \( C \) which we have intercepted before it could reach its destination: \[ A \rightarrow (B) : \langle v, C \rangle. \] We assume that \( C \) corresponds to some unknown message \( M \), so that \[ C = \text{RC4}(v, k) \oplus \langle M, c(M) \rangle. \tag{1} \] We claim that it is possible to find a new ciphertext \( C' \) that decrypts to \( M' \), where \( M' = M \oplus \Delta \) and \( \Delta \) may be chosen arbitrarily by the attacker. Then, we will be able to replace the original transmission with our new ciphertext by spoofing the source, \[ (A) \rightarrow B : \langle v, C' \rangle, \] and upon decryption, the recipient \( B \) will obtain the modified message \( M' \) with the correct checksum.

All that remains is to describe how to obtain \( C' \) from \( C \) so that \( C' \) decrypts to \( M' \) instead of \( M \). The key observation is to note that stream ciphers, such as RC4, are also linear, so we can reorder many terms. We suggest the following trick: XOR the quantity \( \langle \Delta, c(\Delta) \rangle \) against both sides of Equation 1 above to get a new ciphertext \( C' \): \[ C' = C \oplus \langle \Delta, c(\Delta) \rangle \\ = \text{RC4}(v, k) \oplus \langle M, c(M) \rangle \oplus \langle \Delta, c(\Delta) \rangle \\ = \text{RC4}(v, k) \oplus \langle M \oplus \Delta, c(M) \oplus c(\Delta) \rangle \\ = \text{RC4}(v, k) \oplus \langle M', c(M \oplus \Delta) \rangle \\ = \text{RC4}(v, k) \oplus \langle M', c(M') \rangle. \] In this derivation, we used the fact that the WEP checksum is linear, so that \( c(M) \oplus c(\Delta) = c(M \oplus \Delta) \). As a result, we have shown how to modify \( C \) to obtain a new ciphertext \( C' \) that will decrypt to a plaintext carrying the message \( M' = M \oplus \Delta \) with a matching checksum. This implies that we can make arbitrary modifications to an encrypted message without fear of detection. Thus, the WEP checksum fails to protect data integrity, one of the three main goals of the WEP protocol (see Section 2.1).

Notice that this attack can be applied without full knowledge of \( M \): the attacker only needs to know the original ciphertext \( C \) and the desired plaintext difference \( \Delta \), in order to calculate \( C' = C \oplus \langle \Delta, c(\Delta) \rangle \). For example, to flip the first bit of a message, the attacker can set \( \Delta = 1000 \cdots 0 \). This allows an attacker to modify a packet with only partial knowledge of its contents.

### 4.2 Message Injection

Next, we show that WEP does not provide secure access control. We use the following property of the WEP checksum:

**PROPERTY 2.** The WEP checksum is an unkeyed function of the message.

As a consequence, the checksum field can also be computed by the adversary who knows the message. This property of the WEP integrity checksum allows the circumvention of access control measures. If an attacker can get ahold of an entire plaintext corresponding to some transmitted frame, he will then be able to inject arbitrary traffic into the network. As we saw in Section 3, knowledge of both the plaintext and ciphertext reveals the keystream. This keystream can subsequently be reused to create a new packet, using the same IV.
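Property 1 can be sanity-checked with a stock CRC-32 routine; the sketch below is ours. One practical caveat: common CRC-32 implementations (such as zlib's) include an initial and final XOR, which makes them affine rather than strictly linear, so for equal-length inputs the identity picks up the CRC of the all-zero string. A forger using such a routine therefore XORs in \( c(\Delta) \oplus c(0 \cdots 0) \) rather than \( c(\Delta) \) alone; for the idealized CRC assumed in the text, \( c(0 \cdots 0) = 0 \) and the identity is exactly as stated.

```python
import zlib

M = b"interesting traffic."
delta = bytes(len(M) - 1) + b"\x01"   # flip the low bit of the last byte
zeros = bytes(len(M))                 # all-zero message of the same length

lhs = zlib.crc32(bytes(m ^ d for m, d in zip(M, delta)))
rhs = zlib.crc32(M) ^ zlib.crc32(delta) ^ zlib.crc32(zeros)
assert lhs == rhs                     # CRC-32 is linear up to a length constant
```

With that in hand, the message-injection step continues as follows.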
That is, if the attacker ever learns the complete plaintext \( P \) of any given ciphertext packet \( C \), he can recover the keystream used to encrypt the packet: \[ P \oplus C = P \oplus (P \oplus \text{RC4}(v, k)) = \text{RC4}(v, k). \] He can now construct an encryption of a message \( M' \): \[ (A) \rightarrow B : \langle v, C' \rangle, \] where \[ C' = \langle M', c(M') \rangle \oplus \text{RC4}(v, k). \] Note that the rogue message uses the same IV value as the original one. However, we can appeal to the following behaviour of WEP access points:

**PROPERTY 3.** It is possible to reuse old IV values without triggering any alarms at the receiver.

Therefore, it is not necessary to block the reception of the original message. Once we know an IV \( v \) along with its corresponding keystream sequence \( \text{RC4}(v, k) \), this property allows us to reuse the keystream indefinitely and circumvent the WEP access control mechanism.

A natural defense against this attack would be to disallow the reuse of IV's in multiple packets, and require that all receivers enforce this prohibition.\footnote{There are sophisticated physical layer attacks that may be able to monitor a packet being sent and jam the receiver at the same time; at best such attacks would allow an IV to be reused once.} However, the 802.11 standard does not do this. While the 802.11 standard strongly recommends against IV reuse, it does not require the IV to change with every packet. Hence, every receiver must accept repeated IV's or risk non-interoperability with compliant devices. We consider this a flaw in the 802.11 standard.

In networking one often hears the rule of thumb "be conservative in what you send, and liberal in what you accept." However, when security is a goal, this guideline can be very dangerous: being liberal in what one accepts means that each low-security option offered by the standard must be supported by everyone, and is thus available to the attacker. This situation is analogous to the ciphersuite rollback attacks on SSL [23], which also made use of a standard that included both high-security and low-security options. Consequently, to avoid security at the least-common denominator level, we suggest that the 802.11 standard should be more specific about forbidding IV reuse and other dangerous behavior.

Note that in this attack we do not rely on Property 1 of the WEP checksum (linearity). In fact, substituting any unkeyed function in place of the CRC will have no effect on the viability of the attack. Only a keyed message authentication code (MAC) such as SHA1-HMAC [13] will offer sufficient strength to prevent this attack. Simon et al. had earlier warned in independent work that, given known plaintext for a single packet, one can use Property 2 to forge packets until the IV changes [19], and they too recommended replacing WEP's checksum with a MAC. However, they did not appear to recognize the possibility of replaying old IV values indefinitely (Property 3), which heightens the impact of this attack.

### 4.3 Authentication Spoofing

A special case of the message injection attack can be used to defeat the shared-key authentication mechanism used by WEP. The mechanism is used by access points to authenticate mobile stations before allowing them to form an association. After a mobile station requests shared-key authentication, the access point sends it a \textit{challenge}, a 128-byte random string, in cleartext. The mobile station then needs to respond with the same challenge encrypted using WEP.
The authentication succeeds if the decryption of the response calculated at the access point matches the challenge. The ability to generate an encrypted version of the challenge is considered proof of possession of a key. However, as described in the previous section, it is possible to inject properly encrypted WEP messages without the key. All that is necessary is knowledge of a plaintext/ciphertext pair of the requisite length. It is easy to obtain such a pair by monitoring a legitimate authentication sequence: the attacker learns both the plaintext challenge sent by the access point and the encrypted version sent by the mobile station. From this, it is easy to derive the keystream used to encrypt the response. Since all authentication responses are of the same length, the recovered keystream will be sufficient to create a proper response for a new challenge (received in plaintext). Therefore, after intercepting a single authentication sequence using a particular key, the attacker can authenticate himself with that key indefinitely. This is a particularly serious problem when the same shared key is used by all mobile stations, which is frequently the case in practice. This attack on the authentication protocol was also discovered independently by Arbaugh et al. [2] based on a preliminary version of our results.

### 4.4 Message Decryption

What may be surprising is that the ability to modify encrypted packets without detection can also be leveraged to decrypt messages sent over the air. Consider WEP from the point of view of the adversary. Since WEP uses a stream cipher presumed to be secure (RC4), attacking the cryptography directly is probably hopeless. But if we cannot decrypt the traffic ourselves, there is still someone who can: the access point. In any cryptographic protocol, the legitimate decryptor must always possess the secret key in order to decrypt, by design. The idea, then, is to trick the access point into decrypting some ciphertext for us. As it turns out, the ability to modify transmitted packets provides two easy ways to exploit the access point in this way.

#### 4.4.1 IP redirection

The first way is called an "IP redirection" attack, and can be used when the WEP access point acts as an IP router with Internet connectivity; note that this is a fairly common scenario in practice, because WEP is typically used to provide network access for mobile laptop users and others. In this case, the idea is to sniff an encrypted packet off the air, and use the technique of Section 4.1 to modify it so that it has a new destination address: one the attacker controls. The access point will then decrypt the packet, and send the packet off to its (new) destination, where the attacker can read the packet, now in the clear. Note that our modified packet will be traveling \textit{from} the wireless network \textit{to} the Internet, and so most firewalls will allow it to pass unmolested.

The easiest way to modify the destination IP address is to figure out what the original destination IP address is, and then apply the technique of Section 4.1 to change it to the desired one. Figuring out the original destination IP address is usually not difficult: all of the incoming traffic, for example, will be destined for an IP address on the wireless subnet, which should be easy to determine. Once the incoming traffic is decrypted, the IP addresses of the other ends of the connections will be revealed, and outgoing traffic can then be decrypted in the same manner.
In order for this attack to work, however, we need to not only modify the destination IP address, but also to ensure that the IP checksum in the modified packet is still correct—otherwise, the decrypted packet will be dropped by the access point. Since the modified packet differs from the original packet only in its destination IP address, and since both the old and new values for the destination IP address are known, we can calculate the required change to the IP checksum caused by this change in IP address. Suppose the high and low 16-bit words of the original destination IP address were $D_H$ and $D_L$, and we wish to change them to $D'_H$ and $D'_L$. If the old IP checksum was $\chi$ (which we do not necessarily know, since it is encrypted), the new one should be $$\chi' = \chi + D'_H + D'_L - D_H - D_L$$ (where the additions and subtractions here and below are one's-complement) [5, 14]. The trick is that we only know how to modify a packet by applying an XOR to it, and we don't necessarily know what we need to XOR to $\chi$ to get $\chi'$, even though we do know what we would need to add (namely, $D'_H + D'_L - D_H - D_L$). We now discuss three ways to try to correct the IP checksum of the modified packet:

**The IP checksum for the original packet is known:** If it happens to be the case that we somehow know $\chi$, then we simply calculate $\chi'$ as above, and modify the packet by XORing in $\chi \oplus \chi'$, which will change the IP checksum to the correct value of $\chi'$.

**The original IP checksum is not known:** If $\chi$ is not known, the task is harder. Given $\xi = \chi' - \chi$, we need to calculate $\Delta = \chi' \oplus \chi$. In fact, there is not enough information to calculate $\Delta$ given only $\xi$. For example, if $\xi = \text{0xCAFE}$, it could be that:

- $\chi' = \text{0xCAFE}$, $\chi = \text{0x0000}$, so $\Delta = \text{0xCAFE}$
- $\chi' = \text{0xD00D}$, $\chi = \text{0x050F}$, so $\Delta = \text{0xD502}$
- $\chi' = \text{0x1EE7}$, $\chi = \text{0x53E8}$, so $\Delta = \text{0x4D0F}$
- ...

However, not all $2^{16}$ values for $\Delta$ are possible, and some are much more likely than others. In the above example, there are four values for $\Delta$ ($\text{0x3501}$, $\text{0x4B01}$, $\text{0x4D01}$, $\text{0x5501}$) which occur more than 3% of the time each. Further, we are free to make multiple attempts—any incorrect guesses will be silently ignored by the access point. Depending on the value of $\xi$, a small number of attempts can succeed with high probability. Finally, a successful decryption of one packet can be used to bootstrap the decryption of others; for example, in a stream of communication between two hosts, the only field in the IP header that changes is the identification field. Thus, knowledge of the full IP header of one packet can be used to predict the full header of the surrounding packets, or narrow it down to a small number of possibilities.

**Arrange that $\chi = \chi'$:** Another possibility is to compensate for the change in the destination field by a change in another field, such that the checksum of the packet remains the same (details of this variant continue below). Any header field that is known to us and does not affect packet delivery is suitable, for example, the source IP address.
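The one's-complement bookkeeping used by all three variants is mechanical; the following sketch is ours (helper names are illustrative), in the spirit of the incremental update of RFC 1141. Note that the IP header actually stores the complement of the one's-complement sum; the sketch tracks the running sum, as the text's $\chi$ does.

```python
def ones_add(a: int, b: int) -> int:
    """One's-complement addition of 16-bit words (end-around carry)."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

def ones_sub(a: int, b: int) -> int:
    """One's-complement subtraction: add the complement."""
    return ones_add(a, (~b) & 0xFFFF)

def updated_checksum(chi: int, dh: int, dl: int, dh_new: int, dl_new: int) -> int:
    """chi' = chi + D'_H + D'_L - D_H - D_L, all one's-complement."""
    chi = ones_add(chi, dh_new)
    chi = ones_add(chi, dl_new)
    chi = ones_sub(chi, dh)
    return ones_sub(chi, dl)

# The class-B example from the text: rewrite 10.20.30.40 so the sum is unchanged.
dh, dl, dh_new = 0x0A14, 0x1E28, 0xC0A8        # 10.20, 30.40, 192.168
dl_new = ones_sub(ones_add(dh, dl), dh_new)    # 0x6793, i.e. 103.147
assert ones_add(dh_new, dl_new) == ones_add(dh, dl)
```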
Assuming the source IP address of the packet to be decrypted is also known (we can obtain it, for example, by performing the attack in the previous item on one packet to decrypt it completely, and then using this simpler attack on subsequent packets once we read the source address from the first one), we simply subtract $\xi$ from the low 16-bit word of the source IP address, and the resulting packet will have the same IP checksum as the original. However, it is possible that modifying the source address in this way will cause a packet to be dropped based on egress filtering rules; other header fields could potentially be used instead. Highly resourceful attackers with monitoring access to an entire class B network can even perform the necessary adjustments in the destination field alone, by choosing $D'_L = D_H + D_L - D'_H$. For example, if the original destination address in a packet is 10.20.30.40 and the attacker holds control over the 192.168.0.0/16 subnet, selecting the address 192.168.103.147 results in identical IP header checksum values, and the packet will be delivered to an address he controls.

#### 4.4.2 Reaction attacks

There is another way to manipulate the access point and break WEP-encrypted traffic that is applicable whenever WEP is used to protect TCP/IP traffic. This attack does not require connectivity to the Internet, so it may apply even when IP redirection attacks are impossible. However, it is effective only against TCP traffic; other IP protocols cannot be decrypted using this attack.

In our attack, we monitor the reaction of a recipient of a TCP packet and use what we observe to infer information about the unknown plaintext. Our attack relies on the fact that a TCP packet is accepted only if the TCP checksum is correct, and when it is accepted, an acknowledgement packet is sent in response. Note that acknowledgement packets are easily identified by their size, without requiring decryption. Thus, the reaction of the recipient will disclose whether the TCP checksum was valid when the packet was decrypted.

The attack, then, proceeds as follows. We intercept a ciphertext $\langle v, C \rangle$ with unknown decryption $P$: $$A \rightarrow (B) : \langle v, C \rangle.$$ We flip a few bits in $C$ and adjust the encrypted CRC accordingly to obtain a new ciphertext $C'$ with valid WEP checksum. We transmit $C'$ in a forged packet to the access point: $$(A) \rightarrow B : \langle v, C' \rangle.$$ Finally, we watch to see whether the eventual recipient sends back a TCP ACK (acknowledgement) packet; this will allow us to tell whether the modified text passed the TCP checksum and was accepted by the recipient. Note that we may choose which bits of $C$ to flip in any way we like, using techniques from Section 4.1. The key technical observation is as follows: By a clever choice of bit positions to flip, we can ensure that the TCP checksum remains undisturbed exactly when the one-bit condition $P_i \oplus P_{i+16} = 1$ on the plaintext holds. Thus, the presence or absence of an ACK packet will reveal one bit of information on the unknown plaintext $P$. By repeating the attack for many choices of $i$, we can learn almost all of the plaintext $P$, and then deducing the few remaining unknown bits will be easy using classical techniques. We explain later precisely how to choose which bits to flip. For now, the details are not terribly important.
Instead, the main point is that we have exploited the receiver's willingness to decrypt arbitrary ciphertexts and feed them to another component of the system that leaks a tiny bit of information about its inputs. The recipient's reaction to our forged packet—either acknowledging or ignoring it—can be viewed as a side channel, similar to those exploited in timing and power consumption attacks [11, 12], that allows us to learn information about the unknown plaintext. Thus, we have used the recipient as an oracle to unknowingly decrypt the intercepted ciphertext for us. This is known as a *reaction attack*, as it works by monitoring the recipient's reaction to our forgeries.

Reaction attacks were initially discovered by Bellovin and Wagner in the context of the IP Security protocol, where their existence was blamed on the use of encryption without also using a MAC for message authentication [4]. As a result, Bellovin proposed a design principle for IP Security: all encryption modes of operation should also use a MAC. It seems that the same rule of thumb applies to the WEP protocol as well, for the presence of a secure MAC (rather than the insecure CRC checksum) would have prevented these attacks.

**The technical details.** We have deferred until now the technical details on how to choose new forged packets $C'$ to trick the recipient into revealing information about the unknown plaintext $P$. Recall that the TCP checksum is the one's-complement addition of the 16-bit words of the message $M$. Moreover, one's-complement addition behaves roughly equivalently to addition modulo $2^{16} - 1$. Hence, roughly speaking, the TCP checksum on a plaintext $P$ is valid only when $P \equiv 0 \mod 2^{16} - 1$. We let $C' = C \oplus \Delta$, so that $\Delta$ specifies which bit positions to flip, and we choose $\Delta$ as follows: pick $i$ arbitrarily, set bit positions $i$ and $i + 16$ of $\Delta$ to one, and let $\Delta$ be zero elsewhere. It is a convenient property of addition modulo $2^{16} - 1$ that $P \oplus \Delta \equiv P \mod 2^{16} - 1$ holds exactly when $P_i \oplus P_{i+16} = 1$. Since we assume that the TCP checksum is valid for the original packet (i.e., $P \equiv 0 \mod 2^{16} - 1$), this means that the TCP checksum will be valid for the new packet (i.e., $P \oplus \Delta \equiv 0 \mod 2^{16} - 1$) just when $P_i \oplus P_{i+16} = 1$. This gives us our one bit of information on the plaintext, as claimed.

### 4.5 Summary

In this section, we have shown the importance of using a cryptographically secure message authentication code, such as SHA1-HMAC [13], to protect integrity of transmissions. The use of CRC is wholly inappropriate for this purpose, and in fact any unkeyed function falls short of defending against all of the attacks in this section. A secure MAC is particularly important in view of composition of protocols, since the lack of message integrity in one layer of the system can lead to breach of secrecy in the larger system.

## 5. COUNTERMEASURES

There are configuration options available to a network administrator that can reduce the viability of the attacks we described. The best alternative is to place the wireless network outside of the organization firewall. Instead of trying to secure the wireless infrastructure, it is simpler to consider it to be as much of a threat as other hosts on the Internet.
The typical clients of a wireless network are portable computers that are mobile by their nature, and will frequently employ a Virtual Private Network (VPN) solution to access hosts inside the firewall when accessing via dial-up or from a remote site. Requiring that the same VPN be used to access the internal network when connected over 802.11 obviates the need for link-layer security, and reuses a well-studied mechanism. To provide access control, the network can be configured such that no routes to the outside Internet exist from the wireless network. This prevents people within radio range of the wireless infrastructure from usurping potentially costly Internet connection bandwidth, requiring VPN use for any outside access. (However, it may be desirable to allow visitors to access the Internet wirelessly without additional administrative setup.)

A useful additional measure is to improve the key management of a wireless installation. If possible, every host should have its own encryption key, and keys should be changed with high frequency. The design of a secure and easy-to-use mechanism for automated key distribution to all users is a good subject for further research. Note, though, that good key management alone cannot solve all of the problems described in this paper; in particular, the attacks from Section 4 remain applicable.

## 6. LESSONS

The attacks in this paper serve to demonstrate a fact that has been well-known in the cryptography community: design of secure protocols is difficult, and fraught with many complications. It requires special expertise beyond that acquired in engineering network protocols. A good understanding of cryptographic primitives and their properties is critical. From a purely engineering perspective, the use of CRC-32 and RC4 can be justified by their speed and ease of implementation. However, many of the attacks we have described rely on the properties of stream ciphers and CRC's, and would be rendered ineffective, or at least more difficult, by the use of other algorithms.

There are also more subtle interactions of engineering decisions that are not directly related to the use of cryptography. For example, being stateless and being liberal in what a protocol accepts are well-established principles in network engineering. But from a security standpoint, both of these principles are dangerous, since they give an attacker more freedom to operate, and indeed, the traffic injection attacks capitalize on this freedom. Security is a property of an entire system, and every decision must be examined with security in mind.

The setting of WEP makes a secure design particularly difficult. A link-layer protocol must take into account interactions with many different entities at the same time. The IP redirection attack relies on collaboration between an agent injecting messages at the link layer and a host somewhere on the Internet. The complex functionality of an 802.11 access point makes it susceptible to such attacks from all sides. Faced with such difficulties, even the most experienced of security professionals can make serious errors. Recognizing this fact, the accepted practice is to rely on the expertise of others to improve the security of protocols. Two important ways to do this are to reuse past designs and to offer new designs for public review.

Past designs should be reused whenever possible. A common tenet of protocol design is "don't do it." WEP could have benefitted from the experience gained in the design of the IP Security Protocol (IPSEC) [10].
Although the goals of IPSEC are somewhat different, it also aims to provide link-layer security, and as such needs to deal with many of the same issues as WEP. Even if the protocol could not be reused as-is, a review of its design and past analysis would have been very instructive. Some of the previously published problems in IPSEC [4] share many similarities with the attacks presented in this paper. Public review is also of great importance. If WEP had been examined by the cryptographic community before it was enacted into an international standard, many of the flaws would have been almost surely eliminated. (For example, the dangers of using a CRC to ensure message integrity are well-known [9, 21, 6].) While we applaud the fact that the standard is open, there are still barriers to public review. A security researcher is faced with a financial burden to even attempt to examine the standard—the cost of the document is in the hundreds of dollars. This is the opposite of what should be—a working group developing a new security protocol should proactively invite the security community to analyze it. 7. CONCLUSIONS In this paper, we have demonstrated major security flaws in the WEP protocol and described several practical attacks that result. Consequently, we recommend that WEP should not be counted on to provide strong link-level security, and that additional precautions be taken to protect network traffic. We hope that our discoveries will motivate a redesign of the WEP protocol to address the vulnerabilities that we found. Our further hope is that this paper will expose important security principles and design practices to a wide audience, and that the lessons we identify will benefit future designers of both WEP and other mobile communications security protocols. 8. ACKNOWLEDGEMENTS We would like to thank Mike Chen and Anthony Joseph for helping us get access to the 802.11 standard; Matt Welsh and Alec Woo for providing some of the testing equipment; Bernard Aboba and Jesse Walker for keeping us apprised of 802.11 standards body activity; and Adam Shostack and the anonymous referees for their helpful comments on earlier versions of this paper. 9. REFERENCES [1] W. A. Arbaugh. An inductive chosen plaintext attack against WEP/WEP2. IEEE Document 802.11-01/230, May 2001. [2] W. A. Arbaugh, N. Shankar, and Y. J. Wan. Your 802.11 wireless network has no clothes. http://www.cs.umd.edu/~waa/wireless.pdf, Mar. 2001. [3] A. Beck. Netscape’s export SSL broken by 120 workstations and one student. HPCwire, Aug. 22 1995. [4] S. M. Bellovin. Problem areas for the IP security protocols. In 6th USENIX Security Symposium, San Jose, California, July 1996. USENIX. [5] B. Braden, D. Borman, and C. Partridge. Computing the internet checksum. Internet Request for Comments RFC 1071, Internet Engineering Task Force, Sept. 1988. [6] Core SDI. crc32 compensation attack against ssh-1.5. http://www.core-sdi.com/soft/ssh/attack.txt, July 1998. [7] E. Dawson and L. Nielsen. Automated cryptanalysis of XOR plaintext strings. Cryptologia, (2):165–181, Apr. 1996. [8] D. Doligez. SSL challenge virtual press conference. http://pauillac.inria.fr/~doligez/ssl/press-conf.html, 1995. [9] R. Jueneman, S. Matyas, and C. Meyer. Message authentication. IEEE Communications Magazine, 23(9):29–40, Sept. 1985. [10] S. Kent and R. Atkinson. Security architecture for the Internet Protocol. Internet Request for Comment RFC 2401, Internet Engineering Task Force, Nov. 1998. [11] P. Kocher. 
Cryptanalysis of Diffie-Hellman, RSA, DSS, and other cryptosystems using timing attacks. In D. Coppersmith, editor, Advances in Cryptology – CRYPTO '95: 15th Annual International Cryptology Conference, Santa Barbara, California, USA, August 27–31, 1995: proceedings, pages 171–183. Springer-Verlag, 1995.
[12] P. Kocher, J. Jaffe, and B. Jun. Differential power analysis. In Proc. 19th International Advances in Cryptology Conference – CRYPTO '99, pages 388–397, 1999.
[13] H. Krawczyk, M. Bellare, and R. Canetti. HMAC: Keyed-hashing for message authentication. RFC 2104, Feb. 1997.
[14] T. Mallory and A. Kullberg. Incremental updating of the internet checksum. Internet Request for Comments RFC 1141, Internet Engineering Task Force, Jan. 1990.
[15] LAN/MAN Standards Committee of the IEEE Computer Society. Wireless LAN medium access control (MAC) and physical layer (PHY) specifications. IEEE Standard 802.11, 1999 Edition, 1999.
[16] R. L. Rivest. The RC4 Encryption Algorithm. RSA Data Security, Inc., Mar. 12, 1992. (Proprietary).
[17] B. Schneier. Applied Cryptography: Protocols, Algorithms and Source Code in C. John Wiley and Sons, Inc., New York, NY, USA, second edition, 1996.
[18] B. Schneier and Mudge. Cryptanalysis of Microsoft's Point-to-Point Tunneling Protocol (PPTP). In 5th ACM Conference on Computer and Communications Security, pages 132–140, San Francisco, California, Nov. 1998. ACM Press.
[19] D. Simon, B. Aboba, and T. Moore. IEEE 802.11 security and 802.1X. IEEE Document 802.11-00/034r1, Mar. 2000.
[20] S. Singh. The code book: the evolution of secrecy from Mary, Queen of Scots, to quantum cryptography. Doubleday, New York, NY, USA, 1999.
[21] S. G. Stubblebine and V. D. Gligor. On message integrity in cryptographic protocols. In Proc. IEEE Symposium on Research in Security and Privacy, pages 85–105, 1992.
[22] W. Tutte. FISH and I, 1998. A transcript of Tutte's June 19, 1998 lecture at the University of Waterloo.
[23] D. Wagner and B. Schneier. Analysis of the SSL 3.0 protocol. In Proceedings of the 2nd USENIX Workshop on Electronic Commerce (EC-96), pages 29–40, Berkeley, Nov. 18–21 1996. USENIX Association.
[24] J. R. Walker. Unsafe at any key size; an analysis of the WEP encapsulation. IEEE Document 802.11-00/362, Oct. 2000.
Defining the Gesticon: Language and Gesture Coordination for Interacting Embodied Agents

Brigitte Krenn, Hannes Pirker *

Austrian Research Institute for Artificial Intelligence (ÖFAI) Freyung 6, A-1010 Vienna, Austria {brigitte,hannes}@example.com

Abstract In this paper we address problems of the automatic assignment of speech-accompanying gestures and present solutions we have developed and are still developing in the IST project NECA. Special emphasis is put on the presentation of the central repository of information necessary for this assignment: the so-called gesticon.

1 Introduction The task of automatic gesture assignment discussed in this paper can be described as follows: Given a dialogue between two or more embodied agents, specify their non-verbal behavior by automatically selecting "appropriate" gestures and facial expressions from a given set, take care of the temporal alignment of gestures with the spoken utterance, and provide the information in a way that it can subsequently be used as input for an animation engine. The crucial task here is to design a gesture repository with representations general enough to be reusable in different multimodal generation systems and to be applicable in combination with different animation engines. What is required from the generation system is the availability of information on the dialogue structure, the dialogue-related emotion, and the prosody and timing of speech. Our discussion will be centered around a) the design and representation structure of a central gesture repository, which we call gesticon in analogy to lexicon,\footnote{Other terms in use for a repository of gesture definitions are \textit{gestuary}, a term coined by (deRuiter, 1998) and subsequently employed by (Kopp and Wachsmuth, 2000), and \textit{gestionary}, a term used by Isabella Poggi (Poggi, 2002a) to refer to a dictionary of symbolic gestures, or \textit{dictionary of gestures} such as 'The Berlin dictionary of Everyday Gestures' (Posner et al., 2002) or 'The Nonverbal Dictionary of Gestures, Signs & Body Language Cues' (Givens, 2002).} and b) the methods and strategies employed in gesture generation and the alignment of gestures with speech. As regards a), we discuss what information shall be represented in the gesticon, and how this information shall be structured and represented. As regards b), we make proposals for gesticon-based gesture generation and gesture timing in collaboration with multimodal natural language and speech generation. In order to do so, we introduce a general-purpose multimodal representation language, the RRL (Rich Representation Language, see \url{http://www.oefai.at/NECA/RRL}), which is the backbone of the whole gesture-assignment process and functions as an interface to the individual system components. Both gesticon and RRL are represented in XML format, thus ensuring compatibility with a variety of existing representation and standardisation efforts for multimodal information. For an overview see (Pirker and Krenn, 2002). Coupling gesticon and RRL also allows us to design a component which makes the gesture representations and the gesture generation strategies and methods independent of implementation details of individual system modules. Thus, even though we describe gesture representation and gesture assignment in the context of the NECA system\footnote{\url{http://www.oefai.at/NECA/}}, our proposals are general in nature and not restricted to NECA.
The paper is organized as follows: To set the context, we briefly introduce the NECA project (section 2.1) and the architecture of the NECA system (section 2.2). In section 2.3 we give an outline of gesture encoding in the RRL (Rich Representation Language), a general-purpose multimodal representation and scripting language which has been developed in the NECA project. In sections 3.1 to 3.6 the overall gesticon structure is defined and the organization of gesture-relevant information is discussed. Gesticon entries are exemplified in section 3.7.

### 2 The NECA System

### 2.1 Outline

NECA ("Net Environment for Embodied Emotional Conversational Agents") aims at the development of a toolkit that allows for time- and cost-efficient implementation and adaptation of Web applications for the following scenario: Animated scenes are generated where two or more virtual human-like characters communicate with each other using expressive (emotionally rich) speech, gesture and facial expression. Due to bandwidth restrictions, the use of lean player technologies is necessary. For various reasons it is important that the system can easily be adapted to different player technologies: different applications prefer different animation styles, the state of the art in player technology is rapidly changing, and improvements in bandwidth capacities increase the choice of web-compatible player technologies. Thus special emphasis needs to be put on keeping the influence of player-specific aspects as small as possible. In the two NECA demonstrators we currently work with two fairly different animation/player technologies, namely Charamel (\url{http://www.charamel.de}) and Macromedia Flash (\url{http://www.macromedia.com}).

### 2.2 Architecture

Because in NECA whole dialogues are planned in advance, in the way a playwright designs a scene, a strict pipeline architecture as depicted in Figure 1 can be employed. The information between modules is passed on using NECA's XML-compliant Rich Representation Language (cf. \url{http://www.oefai.at/NECA/RRL}, (Piwek et al., 2002)). First, the scene generation component and an affective reasoning component (Gebhard et al., 2003) specify the dialogue acts to be produced and feed into the multi-modal natural language generator (M-NLG) (Piwek, 2003). M-NLG is responsible for the generation of the textual representation of the agents' utterances as well as the selection of semantically motivated gestures and emotion-driven facial expressions. Relevant information is encoded in the `<function>`-element of a gesticon entry. The following concept-to-speech synthesis module (Schröder and Trouvain, 2003) not only produces speech files containing emotional speech, but also provides full timing information, i.e., the exact position and duration of all phonemes, syllables, words, phrase boundaries and tonal accents.\footnote{The speech synthesis system MARY can be tested online at \url{http://mary.dfki.de}} This information is crucial for the Gesture Assignment module (GA). Here the final selection of gestures takes place and the animation is timed. Phonemes are mapped to visemes, tonal accents are aligned with eyebrow raises, and selected parts of an intonation phrase are aligned with specific components of a gesture. The GA module makes use of information encoded in the `<form>`-element of a gesticon entry. Both M-NLG and GA make use of constraints encoded in the `<restrictions>`-element.
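To make the GA module's timing step concrete, the following minimal sketch (hypothetical Python function and parameter names; the paper does not show the NECA implementation) resolves the simpler symbolic alignment types into concrete start times once speech synthesis has delivered exact durations. The aligntypes that need more context (par_adjust_to_fit, atstress) are omitted:

```python
# Minimal sketch (hypothetical names, not the NECA code) of how the GA module
# can turn symbolic alignment types into concrete start times, given the
# timing information delivered by speech synthesis (all times in ms).
def resolve_begin(aligntype: str, anchor_begin: int, anchor_dur: int,
                  gesture_dur: int) -> int:
    if aligntype == "par":          # gesture starts exactly when the anchor starts
        return anchor_begin
    if aligntype == "par_end":      # gesture stops exactly when the anchor stops
        return anchor_begin + anchor_dur - gesture_dur
    if aligntype == "seq_before":   # gesture precedes the anchor
        return anchor_begin - gesture_dur
    if aligntype == "seq_after":    # gesture succeeds the anchor
        return anchor_begin + anchor_dur
    raise ValueError(f"unsupported aligntype: {aligntype}")

# The "wave" gesture of the worked example in section 2.3 below: the sentence
# soundfile lasts 1459 ms, the gesture 1200 ms, so par_end yields begin = 259.
print(resolve_begin("par_end", 0, 1459, 1200))  # 259
```

The par_end case reproduces the begin="259" offset that appears in the animationSpec example in section 2.3.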
After GA, a further component, the Animation Generator, produces the player-specific animation instructions. Player-specific information can be accessed via the `<playercode>`-element of a gesticon entry. While the input to this component is an RRL document, the output is code which can be directly rendered by the player employed.

### 2.3 Gesture Encoding in the RRL

Generally speaking, dialogue-accompanying gesture generation is a two-step process. 1. During multimodal language generation, gestures are selected on the basis of the semantic and pragmatic content of the utterances, and are symbolically linked to whatever entity is appropriate, e.g. a word or a sentence. 2. Based on the prosodic and temporal information produced by a speech synthesis component, a fine-grained alignment between the verbal and nonverbal communication systems is performed. The relevant information is encoded by means of the RRL. The interplay of the different aspects of multimodal information is exemplified in the following. The RRL snippet below illustrates the result of step 1, multimodal generation.

```xml
<gesture identifier="hipshift" id="g001" aligntype="seq_before" alignto="s001"/>
<gesture identifier="wave" id="g002" aligntype="par_end" alignto="s001"/>
<sentence id="s001">
  Hello, how are you?
</sentence>
```

Two classes of gestures – identifier="hipshift" and identifier="wave" – have been selected to accompany the sentence "Hello, how are you?". This is specified via the value of the `alignto` attribute, i.e., the unique "id" of the sentence. The *aligntype* attribute designates the temporal relationship between gesture and anchor element. In this case the gesture "hipshift" would be realised before sentence "s001" starts and the gesture "wave" should end when the sentence stops. The speech synthesis system produces the corresponding soundfile for the sentence, and also provides information on its internal structure (syllables and phonemes) as well as information on the location and type of tonal accents and prosodic phrase boundaries, represented in ToBI format (Baumann et al., 2001). See the RRL representation below.

```xml
<sentence id="s001" src="s001.mp3">
  <word id="w_1" accent="H*" pos="UH" sampa="h@l-'@U"> Hello
    <syllable id="syl_1" sampa="h@l">
      <ph dur="75" p="h"/>
      <ph dur="48" p="@"/>
      <ph dur="100" p="l"/>
    </syllable>
    <syllable id="syl_2" sampa="'@U" stress="1" accent="H*">
      <ph dur="230" p="@U"/>
    </syllable>
  </word>
  <prosBoundary breakindex="4" dur="200" p="_" tone="H-L%"/>
  <word id="w_2" ... />
  ...
</sentence>
```

With the availability of exact phoneme durations, the alignment specifications produced by multimodal generation can now – in step 2 of the gesture assignment process – be transformed into concrete time measures. More sophisticated alignto-types can be processed, such as the alignment of a certain gesture component to the syllable which bears the nuclear accent of a phrase, information not available at step 1 of gesture processing. The output then is an unambiguous specification of the animation stream, which is expressed by means of a subset of W3C's Synchronized Multimedia Integration Language (SMIL 2.0, http://www.w3.org/TR/smil20/), i.e., via a collection of `<seq>` and `<par>` elements. At this step all linguistic information is discarded and replaced by an `<audio>`-element which holds the name and duration of the speech soundfile. The symbolic alignment between gestures and language-related entities (e.g.
sentences, words, syllables) is replaced by the specification of the exact temporal alignment between this `<audio>`-element and the corresponding `<gesture>`-objects. The example from above would render to:

```xml
<animationSpec>
  <seq>
    <gesture key="g023" identifier="hipshift" id="g001" dur="1650"/>
    <par>
      <audio src="s001.mp3" dur="1459"/>
      <seq> <!-- visemes -->
        <viseme identifier="v_h" dur="75"/>
        <viseme identifier="v_@" dur="48"/>
        <viseme identifier="v_l" dur="100"/>
        <viseme identifier="v_@U" dur="230"/>
        ...
      </seq>
      <gesture key="g012" identifier="wave" id="g002" begin="259" dur="1200"/>
    </par>
  </seq>
</animationSpec>
```

It can be seen that the `<sentence>`-element of the input is now replaced by an `<audio>`-element, which refers to the soundfile to be played. The sequence of visemes is of course parallel to the audio element, and the aligntype "par_end" for the "wave" gesture is reflected by the temporal offset specified in its `begin`-attribute. The `id` attributes used as unique identifiers throughout the processing are redundant at this stage, and are kept for debugging purposes only.

### 3 The Gesticon

As already indicated, the gesticon is designed as a general repository of meaningful bits and pieces of animation descriptions which are relevant for the generation of dialogue-accompanying nonverbal behaviour. In other words, the gesticon is the direct equivalent of the lexicon in language-processing systems. As the latter is a mapping from phonetic form to the meaning of words, the gesticon represents the mapping between the form and the semantics of a gesture. In analogy to words in a dictionary, gesticon entries store information about the form (phonology), the meaning (semantics), the combinatory properties (syntax) and the pragmatics of gestures. Thus our conception of the gesticon corresponds to Poggi's notion of a (gesture) 'lexicon'. In (Poggi, 2002b) it reads:

In a "codified" communication system, the signal-meaning link is shared and coded in the memory of both a Sender and an Addressee (as it is the case, for example, with words or symbolic gestures) and a whole set of these links makes a "lexicon".

Note, though, that Poggi's work focuses mainly on a verbal description of symbolic (emblematic) gestures, i.e., gestures with a conventionalized meaning within a certain community, such as 'thumbs up' meaning 'o.k.' in many western countries. In contrast, we aim at a machine-readable gesture repository which functions as the basic resource for the automatic generation of all different types of gestures. With the gesticon we propose the foundations of a framework for the uniform symbolic representation of different nonverbal communication systems such as gesture and facial expression. Without doubt, descriptive work such as that by Poggi or the descriptions available in the Berlin dictionary of Everyday Gestures (Posner et al., 2002) will be valuable resources for instantiating the gesticon structure. As a precondition, however, these works need to be made machine-readable. Another open question is how effectively the textual descriptions can be transformed into appropriate entries for automatic gesture generation. In the following we present the general structure of a gesticon entry and discuss the representational details of entries for facial expression and gesture. An illustrative example is provided in section 3.7. The gesticon is represented in XML format. Each entry comprises a form, a function and a restrictions element, and pointers to player-specific representations.
The fact that currently only information on facial expressions and hand-arm gestures is represented in the gesticon results from the NECA context, where animated characters do not move within the scene.

### 3.1 Overall Structure of a Gesticon Entry

We propose the following overall structure for a gesticon entry.

```xml
<gesticonEntry>
  <verbatim/>
  <function/>
  <form/>
  <restrictions/>
  <playercode/>
</gesticonEntry>
```

The attributes `key` and `identifier` in the `gesticonEntry` are both used for naming the entry. The first is the entry's unique key, while the identifier is used as a common name for gestures that share the same meaning, i.e., there can be numerous gestures with the identifier "greeting". Gesticon entries are classified according to the main modality expressed. This information is specified via the `modality` attribute. In our examples the value is either "arms", which means the entry is a representation of a gesture, or "face", which indicates that the entry is a representation of a facial expression. In the context of NECA, a further modality is "body", which stands for posture such as relaxed versus upright, etc. In the long run, however, the modality "body" needs to be further subclassified, for example into posture, movement, and spatial location.

### 3.2 The `<verbatim>`-element

In the verbatim element, a verbal description of the gesticon entry is stored. This is information for the human reader.

### 3.3 The `<function>`-element

The function element contains information about the meaning and type of an entry, what the entry is attached to, and which type of temporal alignment is to be used (before, after, parallel, etc.). The `type` attribute is not defined for facial expressions. As regards gestures, we distinguish between the following types:

- deictic (indicative or pointing gesture)
- beat (repetitive or rhythmic movement mainly coordinated with speech prosody)
- iconic (a gesture which "bears a close formal relationship to the semantic content of speech" (McNeill, 1992) quoted after (Serenari, 2002), p. 57, e.g. the hands forming a box in order to depict a container)
- emblematic ("gestures that have a specific social code of their own" (McNeill, 1985) quoted after (Serenari, 2002), p. 57, e.g. a nod meaning 'yes')
- illustrator (e.g. a wave accompanying or substituting a greeting act; in our use illustrators are similar to emblems, but are less strict as regards their social or cultural norms than emblems)
- metaphoric ("similar to iconics in that they present imagery, but present an image of an abstract concept" (McNeill, 1992) quoted after (Serenari, 2002), p. 57)
- adaptor ("part of adaptive efforts to satisfy self or bodily needs, or to perform bodily actions, or to manage emotions, or to develop or maintain prototypic interpersonal contacts, or to learn instrumental activities" (Ekman and Friesen, 1968) quoted after (Serenari, 2002), p. 59)
- idle (we have introduced a number of idle gestures which are selected when the animated characters "do nothing", i.e., they are not engaged in a dialogue or are waiting until data transmission is completed)

Summing up, we have drawn the values for our *type* attribute mainly from work by Ekman/Friesen and McNeill, cf. (Ekman and Friesen, 1968), (McNeill, 1985), (McNeill, 1992); they are collected in the sketch below. The selection was guided by practical decisions, i.e., which classification is useful in the context of the NECA demonstrators.
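For illustration, these type values could be captured in code as a simple enumeration (a sketch with assumed names; in the gesticon itself they are stored as XML attribute values):

```python
from enum import Enum

# Sketch (assumed names): the values of the gesticon's type attribute,
# drawn from the Ekman/Friesen and McNeill classifications cited above.
class GestureType(Enum):
    DEICTIC = "deictic"          # indicative or pointing gesture
    BEAT = "beat"                # rhythmic movement coordinated with prosody
    ICONIC = "iconic"            # depicts the semantic content of speech
    EMBLEMATIC = "emblematic"    # conventionalized social code, e.g. a nod for 'yes'
    ILLUSTRATOR = "illustrator"  # e.g. a wave accompanying a greeting act
    METAPHORIC = "metaphoric"    # presents an image of an abstract concept
    ADAPTOR = "adaptor"          # self- or body-directed adaptive movement
    IDLE = "idle"                # selected when a character "does nothing"
```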
In general the classification of gestures is somewhat controversial in the literature; see for instance (Krauss et al., 2000) or (Serenari, 2002) for an overview of gesture classifications. At the current stage of development, the values for the *meaning* attribute are simple atomic labels. Of course this is a shortcoming and reflects a rudimentary semantic classification of gestures and facial expressions. This approach, however, is sufficient for the current stage of development of the NECA system. Especially for the generation of metaphoric gestures, however, the encoding of meaning via a symbolic label is inappropriate. Instead, a more complex representation structure for the meaning and the pragmatics of gestures needs to be developed. Currently this is approached from different angles, such as descriptive work as represented in (Poggi, 2002a) or work on coupling gesture recognition and gesture generation such as (Kopp et al., 2004). Meaning in gesticon entries for facial expression refers to the six basic emotions (happiness, sadness, anger, fear, disgust, surprise) known from (Ekman, 1993) and a few other labels which are appropriate in the context of the demonstrators, such as 'neutral', 'false laugh', 'melancholy', 'reproach' etc., which are inspired by (Faigin, 1990). The *alignto* attribute is mainly used for gestures and specifies the type of entity the particular gesture shall be aligned to. This can be a sentence, a word, an accented syllable, etc. At the current stage of development of the NECA system, facial expressions are by default aligned at sentence level. In the *aligntype* attribute it is specified how a gesture G and an entity X from the verbal communication system are coupled together.

| aligntype value | Description |
|---------------|-----------------------------------------------------------------------------|
| par | G starts exactly when X starts |
| par_end | G stops exactly when X stops |
| par_adjust_to_fit | G's duration is forced to be the same as X's, i.e., they start and stop at the same time |
| atstress | G is aligned to the STRESSED position of X |
| seq_before | G is performed before X, i.e., G precedes X |
| seq_after | G is performed after X, i.e., G succeeds X |

### 3.4 The `<form>`-element

In the form element, information on the basic physical properties of a gesture or facial expression is specified. The form element comprises two sub-elements: the `<position>`-element, providing information on static (spatial) aspects of a gesture or facial expression, and the `<components>`-element, encoding information about the dynamics (the sub-parts and temporal properties) of a gesture or facial expression. As we treat facial entries as snapshots of facial expressions, the components element is reduced to the specification of a duration range and a default duration. The position element in facial entries specifies eyebrows (up, relaxed, center down, ...), eyes (relaxed, open wide, open narrow, ...) and mouth shapes (open smile, closed relaxed, pursed, ...). These values are inspired by (Faigin, 1990). An alternative, more fine-grained representation of form information for the face are the Face Animation Parameters (FAPs) used in MPEG-4; see for instance (Tekalp and Ostermann, 2000). This information can be used as an extra filter for selecting appropriate facial expressions during multimodal generation. In the NECA system, however, facial expressions are currently selected according to the emotion specified for the individual dialogue acts by the affective reasoning component.
Regarding gestures, the availability of information on the basic physical properties as encoded in the position element is a prerequisite for performing basic reasoning on the well-formedness of combinations of gestures. Minimal positional information is required to decide whether two gestures can be directly concatenated or whether the combination of two gestures requires an intermediate gesture for the sequence to look natural. Information on gesture dynamics as encoded in the components element is required for the calculation of the temporal alignment of gestures to speech, as well as for modulating the expressivity of a gesture. In the position element, spatial information of gestures is encoded very coarsely, specifying the position of the left and right wrists at the very beginning and end of a gesture. This is encoded by a two-dimensional grid (top, mid, down) × (center, outwards), distinguishing 6 possible positions per wrist. This information is required for reasoning on the time necessary for moving from the end position of one gesture to the start position of its successor. Depending on the available time and on the interpolation capabilities of the animation technology used, the information in the position element is employed to decide on either ruling out a particular gesture, directly interpolating between two gestures, or inserting movements to neutral (idle) positions in between the gestures to be concatenated. The mechanism can also be extended in order to cope with gestures that rely on the existence of specific predecessors, e.g. return movements from special gestures. For these, an attribute *special* is added to the `<start>` or `<end>`-element, and it is enforced that only gestures which share the same *special*-value can be combined. As already mentioned, our proposal for the positional encoding of gesture information is a minimal approach. An example of a much more detailed encoding is MURML (Kransted et al., 2002). As both our gesticon structure and MURML are XML-compliant, an enhancement of the proposed gesticon entries by MURML representations is straightforward. For the components element of gestures the following sub-elements are defined: prepare, stroke, hold, retract (cf. (McNeill, 1992)). Each of these elements has its duration element `<dur>`, where an appropriate range and a default for the duration of the respective phase is specified in milliseconds. Note that a majority of our gesticon entries are gesture fragments which only comprise stroke and hold phases, whereas the prepare and retract phases result from player-specific interpolation between adjacent gestures. In general, stroke and hold are the most important phases for aligning gesture and speech. The stroke phase, for instance, is employed to fine-tune the timing of gesture and speech; it is typically aligned with a particular (accented) syllable. In cases where a gesture needs to be elongated, the hold phase is of importance, as it will be disproportionately more affected than any other phase of a gesture.

### 3.5 The `<restrictions>`-element

While in the function and form elements semantic and structural aspects of a gesture or facial expression are described, the restrictions element serves as a repository for all kinds of additional constraints that specify the applicability of a particular gesticon entry in the context of a specific system.
For instance, in the NECA system an emotion category is calculated for each dialogue act by an affective reasoning component (Gebhard et al., 2003) implementing the OCC model (Ortony et al., 1988). These emotion categories need to be related to emotion-expressing entries in the gesticon, such as facial expressions and adaptor gestures, so that appropriate nonverbal behaviours can be selected from the gesticon. This is reflected in the constraint element `<constraint name="occ_emotion" val="..."/>`. Another example is the activation constraint `<constraint name="activation" val="..."/>`, by means of which we specify for which affective activation level or range a particular gesticon entry is applicable. The structure of the restrictions element is defined as follows: it holds a set of `<constraint>`-elements, which can be logically combined by bracketing `<and>`, `<or>` and `<not>`-elements (i.e. conjunction, disjunction and negation). In the current form, each constraint element just contains an attribute name, which holds the name of the constraint, and an attribute value or range, which is to be used as the argument of that test. In order to facilitate the processing of the different constraints used under `<restrictions>`, and to ensure consistency, maintainability and readability of the gesticon, a macro mechanism is offered in the gesticon: for the most common type of constraints, namely the lookup of a certain value already stored in the RRL, the semantics of that constraint can be specified within the gesticon itself, using a separate `<constraintCode>`-section. The example in section 3.7 shows such a `<constraintCode>`-entry for the constraint "occ_emotion". It defines what a program really has to do in order to test whether `<constraint name="occ_emotion" val="anger"/>` is fulfilled: under the current dialogueAct (this is the scope), look for the element `<emotionExpressed>` and test whether the value of its type attribute equals "anger". For the constraint with the name "gender" it states that the information on the speaker has to be dereferenced and that the gender value is to be found under the element `<gender>`, more precisely in the attribute type. This facilitates the authoring of individual gesticon entries and helps to keep constraint entries consistent. The inclusion of novel constraints or changes in the structure of the RRL thus do not necessarily require changes in the code of the interpreting programs.

### 3.6 The `<playercode>`-element

Finally, the necessary mapping to player-specific gesture code is defined in the playercode element. For the players currently used in NECA, this element is very simple. For Charamel the playercode directly points at an animation file; for Flash it contains the key to entries in an external gesture repository. This playercode information is embedded in the SMIL-based timing specification and forms the output to the player-specific Animation Generator.
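To make the evaluation semantics of the restrictions element concrete, here is a minimal sketch (a hypothetical helper in Python, not the NECA code) that walks a parsed `<restrictions>` subtree; the values in `context` stand in for what the affective reasoning component and the RRL lookup defined by `<constraintCode>` would supply:

```python
import xml.etree.ElementTree as ET

def satisfied(node: ET.Element, context: dict) -> bool:
    """Evaluate a <restrictions> subtree against the current dialogue-act data."""
    if node.tag == "constraint":
        name = node.get("name")
        if node.get("val") is not None:           # attributeEquals-style test
            return str(context.get(name)) == node.get("val")
        lo, hi = sorted(float(x) for x in node.get("range").split(":"))
        return lo <= float(context.get(name, 0.0)) <= hi
    results = [satisfied(child, context) for child in node]
    if node.tag in ("restrictions", "and"):       # conjunction
        return all(results)
    if node.tag == "or":                          # disjunction
        return any(results)
    if node.tag == "not":                         # negation
        return not results[0]
    raise ValueError(f"unexpected element <{node.tag}>")

# The restrictions of the facial entry in section 3.7:
# (occ_emotion is joy OR liking) AND activation within 0.2..1.0.
entry = ET.fromstring("""<restrictions><and><or>
    <constraint name="occ_emotion" val="joy"/>
    <constraint name="occ_emotion" val="liking"/>
  </or>
  <constraint name="activation" range="1.0:0.2"/>
</and></restrictions>""")
print(satisfied(entry, {"occ_emotion": "joy", "activation": 0.6}))  # True
```

Under the macro mechanism described above, the name attribute would be dereferenced via the matching `<constraintCode>` entry rather than via a hard-coded dictionary lookup as in this sketch.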
### 3.7 Example Gesticon Entries

Gesture Entry

```xml
<gesticonEntry key="g001" identifier="Thinking" modality="arms">
  <verbatim>
    Thinking: adaptor: Tina: moves right hand to chin,
    but in addition left hand moves to shoulder height + palm up
  </verbatim>
  <function type="adaptor" alignto="sentence" aligntype="par"
            start="-200" meaning="think"/>
  <form>
    <position>
      <!-- starts with D(own) O(ut) -->
      <start left="DO" right="DO"/>
      <!-- ends with T(op) C(enter); the <end> element is completed here
           from the position grid described in section 3.4 -->
      <end left="TC" right="TC"/>
    </position>
  </form>
</gesticonEntry>
```

Facial Expression Entry

```xml
<gesticonEntry identifier="happy" key="18" modality="face">
  <verbatim>
    flash eager_smile applicable to John and Vanessa
  </verbatim>
  <function attach_to="sentence" aligntype="unknown" meaning="happy"/>
  <form>
    <position>
      <eyebrows/>
      <eyes/>
      <mouth type="smile_open"/>
    </position>
    <components>
      <hold>
        <dur min="50" default="400" max="5000"/>
      </hold>
    </components>
  </form>
  <restrictions>
    <and>
      <or>
        <constraint name="occ_emotion" val="joy"/>
        <constraint name="occ_emotion" val="liking"/>
      </or>
      <constraint name="activation" range="1.0:0.2"/>
    </and>
  </restrictions>
</gesticonEntry>
```

Constraint Code

```xml
<constraintCodes mapgoal="neca_rrl.0.4">
  <constraintCode name="gender" typ="attributeEquals"
                  scope="speakerInfo" element="gender" attribute="type">
    <verbatim>
      this specifies that the gender of the SPEAKER has to have a certain value
    </verbatim>
  </constraintCode>
  <constraintCode name="occ_emotion" typ="attributeEquals"
                  scope="dialogueAct" element="emotionExpressed" attribute="type">
    <verbatim>
      for constraint "occ_emotion": look under emotionExpressed
    </verbatim>
  </constraintCode>
  ...
</constraintCodes>
```

### 4 Conclusion

Summing up, we have outlined an overall structure for a gesticon, a reusable, system-independent repository of gesture snippets and facial expressions relevant for the generation of dialogue-accompanying nonverbal behavior. To achieve a seamless integration of gesture and language we rely on XML-based gesture representations (the gesticon) that closely interact with the RRL, a multimodal representation structure/language used as an interface to the individual system components of a multimodal generation system for spoken dialogue. Both RRL and gesticon have been developed in the context of the NECA project, but are designed to be system-independent. As regards the representation of the physical properties of gestures, our work draws upon MURML but, for practical reasons, does not implement a similar level of detail. The general approach taken, however, allows for an extension to MURML. In contrast to MURML, which concentrates on the representation of gestures, we aim at defining a uniform representation for gestures as well as facial expressions. Moreover, by interlinking the gesticon and the RRL, we have defined a clear-cut interface to the individual processing components. The linking between gesture descriptions and an XML-compliant multimodal representation language relates our work to the work described in (Ruttkay et al., 2003). Here the scripting language STEP is used to define and process gestures for h-anim\footnote{See www.hanim.org. H-anim agents are built in VRML (www.web3d.org/vrml/vrml.htm).} agents. While our aim is to separate the representation structure of a gesture repository from the processing and animation components, STEP representations are a genuine part of the STEP animation engine. Nevertheless it would be a beneficial exercise to separate out the STEP representations for gestures and incorporate the knowledge into the gesticon.
On the one hand, this would enhance the gesticon entries with the joint information available in h-anim and the gesture dynamics encoded in STEP. On the other hand, it would foster the understanding of which information shall be represented in a gesticon and which information belongs to a rule system for gesture generation. As regards language and gesture coordination, the approach presented in this paper is comparable to the one pursued in the BEAT system (Cassell et al., 2001). However, unlike BEAT, where thematic structure is widely used for fine-tuning gesture assignment, we strongly rely on the prosodic information (intonation phrases, accents) directly available from speech synthesis. Another recent system for dialogue-related gesture animation utilizing an XML-based framework is presented in (Hartmann et al., 2002). This work also comes with its own gesture repository. All in all, a number of gesture repositories exist, typically closely tied to specific gesture animation systems. In part these repositories encode similar information; in part the information differs regarding the dimensions and the granularity of the representations. In the current situation, it would be an advantage for the work on ECAs if the community could agree on common representation structures for gesticons, to decouple the gesture repositories from the individual gesture generation systems and thus enable the exchange of data sets. We hope that with the presented work we have made a small contribution to a common structure for gesticons which comprise definitions of elements of nonverbal communication systems (gestures, facial expressions, etc.), rather than encode concrete body-specific or animation-system-specific instances of such communication elements.

Acknowledgments

The Austrian Research Institute for Artificial Intelligence is supported by the Austrian Federal Ministry of Education, Science and Culture and the Federal Ministry for Transport, Innovation and Technology. The work reported in this paper is supported by the EC Project NECA IST-2000-28580. The information in this document is provided as is and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability.

References

Stefan Baumann, Martine Grice, and Ralf Benzmüller. GToBI – a phonological system for the transcription of German intonation. In \textit{Proceedings of Prosody 2000. Speech Recognition and Synthesis}, pages 21–28, Poznan: Adam Mickiewicz University, Faculty of Modern Languages and Literature, 2001.

Justine Cassell, Hannes Vilhjálmsson, and Timothy Bickmore. BEAT: The Behaviour Expression Animation Toolkit. In \textit{Proceedings of SIGGRAPH '01}, pages 477–486, 2001.

Jan-Peter deRuiter. Gesture and Speech Production. MPI Series in Psycholinguistics. Ph.D. dissertation, University of Nijmegen, 1998.

Paul Ekman. Facial expression of emotion. \textit{American Psychologist}, 48:384–392, 1993.

Paul Ekman and Wallace V. Friesen. Nonverbal behavior in psychotherapy research. In John M. Shlien, editor, \textit{Research in Psychotherapy: Vol. 3}, pages 179–216. American Psychological Association, 1968.

Gary Faigin. \textit{The artist's complete guide to facial expression}. Watson-Guptill Publications, 1990.

Patrick Gebhard, Michael Kipp, Martin Klesen, and Thomas Rist. Adding the emotional dimension to scripting character dialogues. In \textit{Proceedings of IVA'03}, Kloster Irsee, Germany, 2003.
David B. Givens. \textit{The Nonverbal Dictionary of Gestures, Signs & Body Language Cues}. Center for Nonverbal Studies Press, Spokane, Washington, 2002.

Björn Hartmann, Maurizio Mancini, and Catherine Pelachaud. Formational parameters and adaptive prototype instantiation for MPEG-4 compliant gesture synthesis. In \textit{Proceedings of Computer Animation}, pages 111–119, 2002.

Stefan Kopp, Timo Sowa, and Ipke Wachsmuth. Imitation games with an artificial agent: From mimicking to understanding shape-related iconic gestures. In Antonio Camurri and Gualtiero Volpe, editors, \textit{Gesture-Based Communication in Human-Computer Interaction, 5th International Gesture Workshop, Genova, Italy, April 15-17, 2003, Selected Revised Papers}, volume 2915 of \textit{Lecture Notes in Computer Science}, pages 436–447. Springer, 2004.

Stefan Kopp and Ipke Wachsmuth. A knowledge-based approach for lifelike gesture animation. In Werner Horn, editor, *Proceedings of the 14th European Conference on Artificial Intelligence*, pages 663–667, Berlin, Germany, 2000.

Alfred Kransted, Stefan Kopp, and Ipke Wachsmuth. MURML: A Multimodal Utterance Representation Markup Language for Conversational Agents. In Andrew Marriott et al., editors, *Embodied Conversational Agents: Let's Specify and Compare Them!*, Workshop Notes AAMAS, Bologna, Italy, 2002.

Robert M. Krauss, Yihsiu Chen, and Rebecca F. Gottesman. Lexical gestures and lexical access: A process model. In David McNeill, editor, *Language and gesture: Window into thought and action*, pages 261–283. Cambridge University Press, 2000.

David McNeill. So you think gestures are nonverbal? *Psychological Review*, 92:350–371, 1985.

David McNeill. *Hand and mind: What gestures reveal about thought*. University of Chicago Press, 1992.

Andrew Ortony, Gerald L. Clore, and Allan Collins. *The Cognitive Structure of Emotions*. Cambridge University Press, 1988.

Hannes Pirker and Brigitte Krenn. Assessment of markup languages for avatars, multimedia and multimodal systems. Technical report, Austrian Research Institute for Artificial Intelligence, Vienna, 2002.

Paul Piwek. A flexible pragmatics-driven language generator for animated agents. In *Proceedings of ACL-2003*, pages 151–154, East Stroudsburg, PA, 2003.

Paul Piwek, Brigitte Krenn, Marc Schröder, Martine Grice, Stefan Baumann, and Hannes Pirker. RRL: A rich representation language for the description of agent behaviour in NECA. In Andrew Marriott et al., editors, *Embodied Conversational Agents: Let's Specify and Compare Them!*, Workshop Notes AAMAS, Bologna, Italy, 2002.

Isabella Poggi. Symbolic gestures: The case of the Italian gestionary. *Gesture*, 2(1):71–98, 2002a.

Isabella Poggi. Towards the alphabet and the lexicon of gestures, gaze and touch. In *Multimodality of Human Communication. Theories, problems and applications. Virtual symposium edited by P. Bouissac* (http://www.semioticon.com/virtuals/index.html), University of Toronto, Victoria College, 2002b.

Roland Posner, Reinhard Krüger, Thomas Noll, and Massimo Serenari. The Berlin Dictionary of Everyday Gestures, Version 9. Technical report, Research Center for Semiotics, TU Berlin, 2002.

Zsófia Ruttkay, Zhisheng Huang, and Anton Eliens. Reusable gestures for interactive web agents. In Thomas Rist, Ruth Aylett, Daniel Ballin, and Jeff Rickel, editors, *Intelligent Virtual Agents, Proceedings of IVA 2003, LNAI 2792*, pages 80–87. Springer, 2003.
Marc Schröder and Jürgen Trouvain. The German Text-to-Speech Synthesis System MARY: A Tool for Research, Development and Teaching. *International Journal of Speech Technology*, 6:365–377, 2003.

Massimo Serenari. Survey of existing gesture, facial expression, and cross-modality coding schemes. Technical report of the NITE project IST-2000-26095, TU Berlin, 2002.

Murat Tekalp and Joern Ostermann. Face and 2-D Mesh Animation in MPEG-4. *Signal Processing: Image Communication*, 15(4-5):387–421, 2000.
Abstract

Solar neutrinos, generated abundantly by thermonuclear reactions in the solar interior, offer a unique tool for studying astrophysics and particle physics. The observation of solar neutrinos has led to the discovery of neutrino oscillation, a topic currently under active research, and it has been recognized by two Nobel Prizes. In this pedagogical introduction to solar neutrino physics, we will guide readers through several key questions: How are solar neutrinos produced? How are they detected? What is the solar neutrino problem, and how is it resolved by neutrino oscillation? This article also presents a brief overview of the theory of solar neutrino oscillation, the experimental achievements, new physics relevant to solar neutrinos, and the prospects in this field.

Keywords: Solar neutrinos, Neutrino oscillation, MSW effect, Neutrino detection, Solar neutrino problem

Objectives

- Section 2 introduces the standard solar model, explains how neutrinos are produced in the Sun via thermonuclear reactions (pp chain and CNO cycles), and summarizes the predicted fluxes and energy spectra of solar neutrinos.
- Section 3 introduces the detection of solar neutrinos, the history of solar neutrino experiments, and the well-known solar neutrino problem.
- Section 4 introduces the standard theory of solar neutrino oscillation, including the MSW effect and the day-night difference.

1 Introduction

Solar neutrinos represent one of the most enduring and exciting fields of research in astrophysics and particle physics. From the astrophysical perspective, they offer a unique tool that enables us to gain direct insights into the solar interior. From the perspective of particle physics, the Sun serves as an intense source of neutrinos, allowing us to explore their fundamental properties and deepen our understanding of the underlying theories that govern these elusive particles. Pioneered by Raymond Davis Jr. in the 1960s, the first solar neutrino experiment observed a deficit of solar neutrinos compared to theoretical predictions, providing the first hint of neutrino oscillation; among the various explanations proposed for the deficit at that time, oscillation was ultimately confirmed by subsequent experiments as the only resolution of the solar neutrino problem. The confirmation of neutrino oscillation has a profound impact on particle physics, as it implies that neutrinos, contrary to predictions made by the Standard Model (SM), have nonzero masses. This discovery opens new avenues for exploring new physics beyond the SM. Thus, the study of solar neutrinos stands as one of the most successful examples demonstrating how new observations of the sky can lead to groundbreaking discoveries in fundamental physics. In this article, we aim to present a pedagogical introduction to solar neutrino physics. In Secs. 2 and 3, we will guide readers through a few questions, including how neutrinos are produced in the Sun, how they are detected, why a deficit was observed, and how it was resolved by neutrino oscillation. We further present a brief yet self-contained formalism to facilitate a quick understanding of the calculation of solar neutrino oscillation (Sec. 4), and an overview of experimental achievements and progress (Sec. 5). We also briefly comment on open issues and new physics beyond the established framework. For a more comprehensive review, we refer to Ref. [1].

2 Standard solar model and solar neutrinos

The Sun is a powerful source of neutrinos, which are abundantly produced by the thermonuclear reactions inside it.
The production of solar neutrinos is predicted by the Standard Solar Model (SSM), a theoretical framework crucial for understanding the structure and behavior of the Sun. Table 1 lists a few well-known quantities of the solar profile. Among them, some are determined by observations (e.g. luminosity, radius, mass, surface temperature) while others are predictions of the SSM, including solar neutrino fluxes.

Table 1 Basic observed and predicted parameters of the Sun.

| Parameter (observed) | Value | Parameter (predicted) | Value |
|---------------------|---------------|-----------------------|---------------|
| Luminosity | $3.828 \times 10^{26}$ W | Central temperature | $1.54 \times 10^7$ K |
| Radius | $6.961 \times 10^5$ km | Central density | $149$ g cm$^{-3}$ |
| Mass | $1.988 \times 10^{30}$ kg | Central pressure | $2.3 \times 10^{16}$ Pa |
| Surface temperature | $5.78 \times 10^3$ K | Neutrino fluxes | see Tab. 2 |

The SSM is constructed upon principles of hydrostatic equilibrium, energy transport mechanisms, and the nuclear reactions that power the Sun's energy output. By integrating these principles with observational constraints such as the solar radius, luminosity, age, elemental composition, and radiative opacity, detailed predictions can be made about the internal solar structure, including density, temperature, pressure, and neutrino fluxes. Combined with modern knowledge of neutrinos, the SSM is rather successful in predicting and correlating various observables of the Sun. Historically, the SSM had a long-standing problem: the observed solar neutrino flux was significantly lower than the prediction. This problem, known as the *solar neutrino problem*, was eventually resolved by neutrino oscillations. In what follows, we will explain how solar neutrinos are produced in the Sun.

2.1 Thermonuclear reactions in the Sun

The Sun is mainly made up of hydrogen (about 74% by mass), helium (about 24%), and small amounts (less than 2%) of heavier elements like oxygen, carbon, neon, and iron. At the core of the Sun, the density reaches around 150 grams per cubic centimeter, with temperatures soaring to about 15 million Kelvin. These extreme conditions allow the penetration of the Coulomb barrier between ions through the quantum tunneling effect, enabling thermonuclear reactions that convert hydrogen into helium through the proton-proton (pp) chain and the carbon-nitrogen-oxygen (CNO) cycle. In the Sun, the pp chain is responsible for 99% of the total solar energy production, while the remaining $\sim 1\%$ is produced by the CNO cycle\footnote{Despite its sub-dominance in solar energy production, the CNO cycle plays a more significant role in massive stars. For stars with masses above 1.3 times the solar mass, the CNO cycle dominates the energy production.}. As is shown in Fig. 1, the pp chain starts with proton-proton fusion ($p + p \rightarrow {}^2\text{H} + e^+ + \nu_e$) or, at a much lower rate, proton-electron-proton fusion ($p + e^- + p \rightarrow {}^2\text{H} + \nu_e$), which is possible because the Coulomb potential of the protons can capture an electron. Both processes produce electron neutrinos ($\nu_e$), but the $\nu_e$ produced from the latter are monochromatic and more energetic. After the initial step of fusion, subsequent nuclear reactions proceed, eventually ending up in four possible sub-chains, denoted by pp-I to pp-IV in Fig. 1. Except for pp-I, each of these sub-chains contains a reaction that can produce neutrinos.
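Since Fig. 1 is not reproduced here, the five neutrino-producing reactions behind the flux labels used below can be summarized in standard notation (the pp and pep reactions are the ones just quoted):

$$
\begin{aligned}
\text{pp:}\quad & p + p \rightarrow {}^{2}\text{H} + e^{+} + \nu_e \\
\text{pep:}\quad & p + e^{-} + p \rightarrow {}^{2}\text{H} + \nu_e \\
{}^{7}\text{Be:}\quad & {}^{7}\text{Be} + e^{-} \rightarrow {}^{7}\text{Li} + \nu_e \\
{}^{8}\text{B:}\quad & {}^{8}\text{B} \rightarrow {}^{8}\text{Be}^{*} + e^{+} + \nu_e \\
\text{hep:}\quad & {}^{3}\text{He} + p \rightarrow {}^{4}\text{He} + e^{+} + \nu_e
\end{aligned}
$$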
Solar neutrinos produced from these reactions in the pp chain are usually referred to as pp, pep, hep, $^7\text{Be}$, and $^8\text{B}$ neutrinos. The CNO cycle, as the name suggests, involves carbon, nitrogen, and oxygen participating in a cycle of nuclear reactions; see the right panel of Fig. 1. Strictly speaking, it is not just one completely closed cycle. Instead, it contains multiple cycles coupled together, allowing some of the nuclear elements to leave the dominant cycle and join another cycle involving heavier elements. For instance, $^{15}\text{N}$ produced in Cycle-I of Fig. 1 via $^{15}\text{O} \rightarrow {}^{15}\text{N} + e^+ + \nu_e$ subsequently captures a proton; in 99% of cases the result is $^{12}\text{C} + {}^4\text{He}$, while in around 1% it is $^{16}\text{O} + \gamma$. So $^{15}\text{N}$ at this stage has a small probability of leaving Cycle-I and joining Cycle-II. Nevertheless, those cycles involving heavier nuclear elements are less important to the production of solar energy and neutrinos. If we only consider Cycle-I and Cycle-II, there are three reactions producing neutrinos: $^{13}\text{N} \rightarrow {}^{13}\text{C} + e^+ + \nu_e$, $^{15}\text{O} \rightarrow {}^{15}\text{N} + e^+ + \nu_e$, and $^{17}\text{F} \rightarrow {}^{17}\text{O} + e^+ + \nu_e$. The corresponding neutrinos are referred to as $^{13}\text{N}$, $^{15}\text{O}$, and $^{17}\text{F}$ neutrinos, respectively. Note that in the pp chain, no stable elements heavier than $^4\text{He}$ are produced. Although elements such as $^7\text{Li}$, $^7\text{Be}$, and $^8\text{B}$ appear as intermediates, they are quickly destroyed by decay or proton capture, so the pp chain effectively burns hydrogen only into helium. In the CNO cycle, heavy elements like carbon and nitrogen remain almost unchanged after a complete cycle of reactions, implying that they participate in the reactions as catalysts. The net effect of the CNO cycle is also to burn hydrogen into helium. Since both series of reactions only produce $^4\text{He}$, the abundance of elements heavier than $^4\text{He}$, known as the metallicity (in astrophysics, "metals" refers to elements heavier than helium), remains rather stable in the Sun. These "metals" not only play the role of catalysts in thermonuclear reactions, but also affect the opacity of the Sun. Therefore, the metallicity of the Sun is one of the key parameters of the SSM.

### 2.2 Solar neutrino fluxes and spectra

By evaluating the thermonuclear reaction rates in the Sun, one can calculate solar neutrino fluxes. Starting from John N. Bahcall's pioneering work [2], the calculation of solar neutrino fluxes has been continuously revised and improved, not only because the quality and quantity of input data for the SSM have increased, but also because increasing computational power allows more sophisticated simulations (e.g., switching from 1D to 3D, using non-local thermodynamic equilibrium, etc.) to be incorporated into the SSM. Table 2 presents solar neutrino fluxes obtained by the Bahcall-Serenelli-Basu (BSB) calculation [3] and the Barcelona-2016 (B16) calculation [4], based on different SSM data sets named GS98, AGS05, and AGSS09 after the initials of the authors conducting the calculations and the years of the publications. In general, solar neutrino fluxes computed by different groups based on different SSM data sets differ from each other. The differences for pp neutrinos are small, only at the percent level or less, as can be seen from Tab. 2.
But for $^8\text{B}$ and CNO neutrinos, the differences are significantly larger.

Table 2 Calculated solar neutrino fluxes at the Earth.

| Flux [cm$^{-2}$s$^{-1}$] | BSB05-GS98 | BSB05-AGS05 | B16-GS98 | B16-AGSS09 |
|--------------------------|------------|-------------|----------|------------|
| $\Phi_{\text{pp}}/10^{10}$ | $5.99(1 \pm 0.009)$ | $6.06(1 \pm 0.007)$ | $5.98(1 \pm 0.006)$ | $6.03(1 \pm 0.005)$ |
| $\Phi_{\text{pep}}/10^8$ | $1.42(1 \pm 0.015)$ | $1.45(1 \pm 0.011)$ | $1.44(1 \pm 0.01)$ | $1.46(1 \pm 0.009)$ |
| $\Phi_{\text{hep}}/10^3$ | $7.93(1 \pm 0.155)$ | $8.25(1 \pm 0.155)$ | $7.98(1 \pm 0.30)$ | $8.25(1 \pm 0.30)$ |
| $\Phi_{\text{Be}}/10^9$ | $4.84(1 \pm 0.105)$ | $4.34(1 \pm 0.093)$ | $4.93(1 \pm 0.06)$ | $4.50(1 \pm 0.06)$ |
| $\Phi_{\text{B}}/10^6$ | $5.69(1^{+0.173}_{-0.147})$ | $4.51(1^{+0.127}_{-0.113})$ | $5.46(1 \pm 0.12)$ | $4.50(1 \pm 0.12)$ |
| $\Phi_{\text{N}}/10^8$ | $3.05(1^{+0.366}_{-0.266})$ | $2.00(1^{+0.14}_{-0.12})$ | $2.78(1 \pm 0.15)$ | $2.04(1 \pm 0.14)$ |
| $\Phi_{\text{O}}/10^8$ | $2.31(1^{+0.374}_{-0.272})$ | $1.44(1^{+0.165}_{-0.142})$ | $2.05(1 \pm 0.17)$ | $1.44(1 \pm 0.16)$ |
| $\Phi_{\text{F}}/10^6$ | $5.83(1^{+0.724}_{-0.420})$ | $3.25(1^{+0.166}_{-0.142})$ | $5.29(1 \pm 0.20)$ | $3.26(1 \pm 0.18)$ |

Fig. 2 Left: The energy spectra of solar neutrino fluxes. Note that monochromatic spectra are in units of cm$^{-2}$s$^{-1}$. Right: The production rates of solar neutrinos as a function of the radius $r$.

The most important factor behind these differences is solar metallicity. Currently, there are two competing classes of solar models: high-metallicity and low-metallicity models. High-metallicity models (such as GS98) predict higher $^8$B and CNO neutrino fluxes than low-metallicity models (such as AGS05 and AGSS09), not only because the abundance of the catalysts mentioned above is higher but also due to the higher radiative opacity caused by the heavy elements. The opacity inhibits heat transfer via radiation and thus increases the core temperature, which in turn raises the nuclear reaction rates and the corresponding neutrino fluxes. The two classes of models have their respective problems, so which one more accurately predicts the solar neutrino fluxes is still unresolved. Recent calculations of solar neutrino fluxes usually consider both of them. Generally speaking, high-metallicity models are in better agreement with helioseismological observations\footnote{Helioseismology studies the interior of the Sun based on vibrations of the solar surface, similar to seismology for the Earth.}, while low-metallicity models are favored by more advanced simulations but are in tension with helioseismological data. This unresolved issue is known as the solar metallicity problem. We do not intend to expand further in this pedagogical article and refer interested readers to recent reviews [1, 5]. The shapes of the solar neutrino energy spectra are mainly determined by the kinematics of the corresponding reaction processes and are almost independent of solar models. Unlike the total fluxes, which are affected by the uncertainties of solar models, the spectral shapes are insensitive to potential variations in the core profile (e.g. variations in temperature and density). This is because the energy released by a nuclear reaction is much higher than the kinetic energy of the initial-state particles in the reaction. The former is typically above the MeV scale, while the latter is around the core temperature ($\sim$keV).
Therefore, the spectral shapes in Bahcall's calculation [6] are still widely used in modern solar neutrino calculations. Figure 2 (left panel) shows the solar neutrino energy spectra obtained using Bahcall's spectral shapes and the total flux data of B16-GS98 in Tab. 2. Also shown is the radial distribution of neutrino production in the Sun as a function of the radial distance $r$ divided by the solar radius $R_\odot$. As is shown in Fig. 2, some of the solar neutrino energy spectra (such as pep, $^7$Be, etc.) are monochromatic. This is because the final states of these reactions contain only two particles. When the total energy of the initial state is fixed (which is approximately the case, as all initial particles are non-relativistic), two-body kinematics dictates that the neutrinos are monochromatic. For $^7$Be neutrinos produced from $^7\text{Be} + e^- \rightarrow {}^7\text{Li} + \nu_e$, the $^7$Li nucleus may be in the ground state or an excited state, causing two monochromatic lines at 0.861 MeV (with a branching ratio of 90%) and 0.383 MeV (10%). In addition to the reactions in the pp chain, some reactions in the CNO cycle are accompanied by electron-capture processes, which also produce monochromatic neutrinos. For example, the occurrence of $^{13}\text{N} \rightarrow {}^{13}\text{C} + e^+ + \nu_e$ implies that $^{13}\text{N} + e^- \rightarrow {}^{13}\text{C} + \nu_e$ is also possible, though at a much lower reaction rate. These fluxes are denoted by e$^{13}$N, e$^{15}$O, and e$^{17}$F in Fig. 2.

3 Solar neutrino detection: principles and methodologies

The first solar neutrino experiment was conducted by Davis at Homestake in the 1960s, using the radiochemical method to detect solar neutrinos. Since then, many experiments have been carried out based on diverse detection technologies, including radiochemical, Cherenkov, and liquid-scintillator detectors. In the design of a solar neutrino experiment, two crucial factors must be considered. First, the selection of the experimental site is critical. Solar neutrinos are at the same energy scale as various radioactive decays of unstable nuclear isotopes, which could be continuously produced by cosmic rays interacting with the detector if it is not well shielded. These radioactive decays of cosmogenic unstable isotopes could mimic solar neutrino events, creating a troublesome background for detection. To tackle this problem, solar neutrino experiments are usually conducted deep underground to minimize such interference. Second, the choice of reaction processes and detection techniques is also important. Neutrino-capture processes can have low reaction thresholds, but so far they have only been successfully applied to radiochemical detectors, which cannot provide real-time measurements. Neutrino-electron scattering is currently the most important detection process for Cherenkov detectors. These detectors feature real-time measurements but require relatively high detection thresholds, because fewer Cherenkov photons are emitted as the electron energy decreases. Liquid-scintillator detectors can have lower thresholds but usually lose directionality.

3.1 Radiochemical experiments

Radiochemical experiments are based on neutrino-capture processes: \( \nu_e + {}^{A}_{Z}X \rightarrow {}^{A}_{Z+1}Y + e^- \), where a solar neutrino \( \nu_e \) is captured by a nucleus \( {}^{A}_{Z}X \) (here \( A \) and \( Z \) denote the numbers of nucleons and protons in the nucleus \( X \)), resulting in the formation of a daughter nucleus \( {}^{A}_{Z+1}Y \) and the emission of an electron.
To identify the process in the experiment, the daughter nuclide needs to be radioactive, which means it is unstable and decays with a sufficiently long lifetime. After the target material has been exposed to solar neutrinos for a certain period, the produced daughter isotope is extracted via chemical methods. Then the number of these radioactive nuclei can be counted using proportional counters when they decay. Therefore, the lifetime of the daughter isotope needs to be reasonably long, such that it does not decay instantly during the exposure step but can still decay effectively during the second step, after being chemically extracted. Note that such experiments cannot perform real-time measurements due to the two-step procedure mentioned above. They merely measure the rates of the neutrino-capture reactions over a given exposure period.

The first solar neutrino experiment, Davis’s Homestake experiment, is based on this detection principle. The experiment utilized 615 tons of liquid perchloroethylene \( \text{C}_2\text{Cl}_4 \), which contains both \( ^{35}\text{Cl} \) (76%) and \( ^{37}\text{Cl} \) (24%) atoms, but only the latter is responsible for neutrino capture: \( \nu_e + {}^{37}\text{Cl} \rightarrow {}^{37}\text{Ar} + e^- \). This reaction has an energy threshold of 814 keV. The daughter nucleus \( ^{37}\text{Ar} \) has a half-life of 35 days. As depicted in Fig. 2, this experiment ought to be capable of detecting the line-spectrum neutrinos from both the pep reaction and the high-energy \( ^7\text{Be} \) line, as well as the continuous-spectrum neutrinos from \( ^8\text{B} \) decay and the hep reaction. However, since the \( ^{37}\text{Cl} \) ground state prefers to transition to the \( ^{37}\text{Ar} \) excited states at an energy level of about 5 MeV above it, the Homestake experiment mainly detected the last two components, among which the \( ^8\text{B} \) neutrinos were predominant.

Another neutrino-capture process, \( \nu_e + {}^{71}\text{Ga} \rightarrow {}^{71}\text{Ge} + e^- \), has also been utilized by radiochemical experiments such as GALLEX/GNO and SAGE. The energy threshold of this process is 233 keV, lower than the end-point of 420 keV for the pp reaction, enabling these experiments to access solar neutrinos from all sources. The daughter nucleus \( ^{71}\text{Ge} \) has a half-life of 11.4 days, also reasonably long for chemical extraction. In addition, other possible nuclear isotopes have received attention, including \( ^{98}\text{Mo} \), \( ^{203}\text{Tl} \), \( ^{7}\text{Li} \), \( ^{81}\text{Br} \), and \( ^{127}\text{I} \). The chemical or geochemical experiments based on \( ^{98}\text{Mo} \) and \( ^{203}\text{Tl} \) are sensitive to solar neutrino fluxes averaged over millions of years. If the SSM is correct, the average fluxes should be the same as the contemporary ones, providing a test of the SSM. Detectors based on \( ^{7}\text{Li} \), \( ^{81}\text{Br} \), and \( ^{127}\text{I} \) are similar to the \( ^{37}\text{Cl} \) detector but with different energy thresholds, which are 862, 470, and 789 keV, respectively.

It is worth mentioning that experiments based on neutrino-capture reactions can provide direct measurements of the neutrino energy spectrum if the energy of the final-state electron \( E_e \) can be measured. The neutrino energy can be determined via simple kinematics: \( E_\nu = E_e + M_d - M_p \) (neglecting nuclear recoil), where \( M_p \) and \( M_d \) are the masses of the parent and daughter nuclei, respectively. If the daughter nucleus can provide a detectable delayed signal correlating with the prompt electron signal in space and time, then the captures can be identified without traditional radiochemical means and the background can also be significantly suppressed.
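The interplay between exposure time and daughter lifetime in the two-step procedure above can be made concrete with the production-decay balance $dN/dt = p - N/\tau$, whose solution is $N(t) = p\,\tau\,(1 - e^{-t/\tau})$. The Python sketch below evaluates this relation; the capture rate of 0.5 atoms per day is an illustrative number of the order implied by the Homestake rate (see Sec. 3.4), while the 35-day half-life of $^{37}$Ar is taken from the text.

```python
import math

# Buildup of a radioactive daughter isotope during exposure:
# dN/dt = p - N/tau  =>  N(t) = p * tau * (1 - exp(-t/tau)),
# where p is the (assumed constant) capture rate and tau the mean life.

def daughter_atoms(p_per_day, t_half_days, t_exposure_days):
    tau = t_half_days / math.log(2.0)  # mean life from half-life
    return p_per_day * tau * (1.0 - math.exp(-t_exposure_days / tau))

P_RATE = 0.5       # illustrative capture rate [atoms/day], not a measured value
T_HALF = 35.0      # 37Ar half-life [days], quoted in the text
saturation = P_RATE * T_HALF / math.log(2.0)

for t in (35, 70, 105, 350):
    n = daughter_atoms(P_RATE, T_HALF, t)
    print(f"after {t:3d} days of exposure: {n:5.1f} atoms "
          f"(saturation = {saturation:.1f})")
```

After roughly three half-lives the daughter population saturates, so longer exposures gain little; this is why the extraction schedule of a radiochemical experiment matters.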
A few isotopes, including \( ^{176}\text{Yb} \), \( ^{160}\text{Gd} \), and \( ^{82}\text{Se} \), have been proposed for solar neutrino detection with real-time signatures for discriminating solar neutrino signals from the potential radioactive background [7].

3.2 Cherenkov experiments

Cherenkov detectors for solar neutrino detection use water (\( \text{H}_2\text{O} \)) or heavy water (\( \text{D}_2\text{O} \)) as the target material. In water-based Cherenkov detectors (such as Kamiokande and Super-Kamiokande), solar neutrinos are detected via elastic neutrino-electron scattering, \( \nu_\alpha + e^- \rightarrow \nu_\alpha + e^- \), where the flavor index \( \alpha \) can be any of \( e \), \( \mu \), and \( \tau \). The final-state \( e^- \) emits Cherenkov light in the detector if its velocity is higher than the speed of light in the medium. This makes both the angle and the energy of the electron detectable. It is noteworthy that the cross section of elastic \( \nu_e + e^- \) scattering is significantly larger than that of \( \nu_\mu + e^- \) or \( \nu_\tau + e^- \). Due to the lightness of the electron, the angle between the final-state electron and the incoming neutrino is generally small (for instance, a 6 MeV electron generated by a 10 MeV neutrino is scattered at an angle of \( \approx 14^\circ \)), implying that the recoil electron is strongly forward-peaked. This characteristic is exploited to determine the direction of incoming solar neutrinos (see Fig. 3), effectively differentiating them from background sources such as environmental radiation and cosmogenic background. One of the advantages of water-based Cherenkov detectors is that they can be easily scaled up as real-time detectors with a comparatively low cost of the target material (pure water). Therefore, water-based Cherenkov detectors typically have very high fiducial masses, ranging from kiloton to sub-megaton scales.

Heavy-water-based Cherenkov detectors can detect neutrinos via \( \nu_e + d \rightarrow p + p + e^- \) and \( \nu_\alpha + d \rightarrow \nu_\alpha + p + n \) (where \( d \) denotes the deuteron in heavy water), in addition to elastic neutrino-electron scattering. The neutral-current process \( \nu_\alpha + d \rightarrow \nu_\alpha + p + n \) renders heavy-water-based experiments equally sensitive to all neutrino flavors. Historically, it played a crucial role in resolving the solar neutrino problem, as solar neutrinos that have changed flavors can be equally detected by this process.

Since water (and heavy water) contains a large number of oxygen nuclei, the interactions between atmospheric neutrinos and oxygen nuclei, via \( \nu(\bar{\nu}) + {}^{16}\text{O} \rightarrow \nu(\bar{\nu}) + n + {}^{15}\text{O}^* \) or \( \nu(\bar{\nu}) + {}^{16}\text{O} \rightarrow \nu(\bar{\nu}) + p + {}^{15}\text{N}^* \), engender an unavoidable background for studying solar neutrinos. The subsequent de-excitation of the produced excited nuclei causes a gamma-ray background, which is indistinguishable from electron signals in Cherenkov detectors. Since this background is typically accompanied by the production of neutrons, neutron tagging is important for reducing it. The Super-Kamiokande experiment has developed a technique to dissolve a gadolinium compound in the water, notably enhancing the neutron detection efficiency. This advancement will potentially improve the rare-signal searches for hep neutrinos and solar antineutrinos.
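The forward-peaked kinematics quoted above can be checked numerically. For elastic neutrino-electron scattering, standard two-body kinematics (not derived in this article) gives \( \cos\theta = (1 + m_e/E_\nu)\sqrt{T_e/(T_e + 2m_e)} \), where \( T_e \) is the electron kinetic energy. A minimal sketch, reproducing the \( \approx 14^\circ \) example from the text:

```python
import math

M_E = 0.511  # electron mass [MeV]

def scattering_angle_deg(e_nu_mev, t_e_mev):
    """Angle between the recoil electron and the incoming neutrino,
    from standard two-body kinematics of elastic nu-e scattering."""
    cos_theta = (1.0 + M_E / e_nu_mev) * math.sqrt(t_e_mev / (t_e_mev + 2.0 * M_E))
    return math.degrees(math.acos(min(cos_theta, 1.0)))

# The example quoted in the text: a 6 MeV electron from a 10 MeV neutrino.
print(f"{scattering_angle_deg(10.0, 6.0):.1f} deg")  # ~14 deg, strongly forward
```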
3.3 Liquid scintillator experiments

Liquid scintillator (LS) experiments also employ elastic neutrino-electron scattering as the primary detection process for solar neutrinos, but have much better energy resolution than water- or heavy-water-based Cherenkov experiments. In LS, neutrino-electron scattering is detected via the ionization energy deposited by the recoil electron. Since the electron deposits most of its kinetic energy in the form of ionization instead of Cherenkov radiation, the light yield in LS is much higher than that in water. LS is usually composed of an organic solvent doped with fluorophores. The ionization caused by a charged particle excites the aromatic solvent molecules, which then transfer their excitation energy to fluorophore molecules. Due to a phenomenon called the Stokes shift, the fluorophore, when it de-excites, emits photons with significantly longer wavelengths than those of the solvent, falling into the sensitive range of photon sensors (phototubes). As a result, LS can precisely measure the electron kinetic energy deposited as ionization by converting it to optical photons. This enhanced capability is particularly important for detecting solar neutrinos, which have energies on the MeV scale. As a nonpolar medium, LS is easier to purify than water, making it more suitable for detecting pp, pep, \( ^7\text{Be} \), and CNO neutrinos. The Borexino experiment is a successful example of utilizing LS to detect solar neutrinos, including these low-energy components. LS is relatively cost-effective and can be scaled up as real-time detectors.

Recently, techniques for directional reconstruction utilizing the faint Cherenkov light emitted by electrons in liquid scintillators have been developed. One method involves extending the fluorescence decay time to give prominence to the prompt Cherenkov light. This method has been successfully demonstrated in the SNO+ experiment. The other method utilizes a correlated and integrated directionality approach. By analyzing the detected phototube hit pattern in relation to the known position of the Sun, and integrating over a large sample of events, it is possible to generate a distribution of the angle between the hit phototubes and the interaction position of a solar neutrino with the matter. Electrons scattered by solar neutrinos create a distinct signature in this angular distribution compared to the isotropic radioactive background, which is unrelated to the Sun. This signature allows for the separation and measurement of the solar neutrino signal. This technique has been effectively employed in the Borexino experiment, significantly enhancing its sensitivity in measuring the pep, \( ^7\text{Be} \), and CNO fluxes, and especially in discovering CNO neutrinos.

3.4 The solar neutrino problem

The first measurement of solar neutrinos came out in 1968 [8] from the Homestake experiment led by Davis. The result was that the solar neutrino flux was less than 3 SNU (Solar Neutrino Unit; one SNU corresponds to one capture per second per $10^{36}$ target nuclei). This result was significantly lower than the theoretical values calculated by John Bahcall [2] (21, 11, 7.7, 4.4, and 11 SNU for the five models considered in that paper). Further data taking at the Homestake experiment led to a more precise measurement, $2.56 \pm 0.16 \pm 0.16$ SNU [9], which was only one-third of Bahcall’s refined prediction [10].
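As a back-of-the-envelope check of what 2.56 SNU means in practice, one can combine the SNU definition above with the 615-ton target and the 24% \( ^{37}\text{Cl} \) abundance quoted in Sec. 3.1. The only number assumed beyond the text is the molar mass of \( \text{C}_2\text{Cl}_4 \) (about 165.8 g/mol):

```python
N_A = 6.022e23          # Avogadro's number [1/mol]
mass_g = 615e6          # 615 tons of C2Cl4 (from the text)
molar_mass = 165.8      # g/mol for C2Cl4 (assumed standard value)
cl_per_molecule = 4     # four chlorine atoms per molecule
f_37 = 0.24             # 37Cl isotopic abundance quoted in the text

n_targets = mass_g / molar_mass * cl_per_molecule * f_37 * N_A
rate_per_s = 2.56e-36 * n_targets  # 2.56 SNU = 2.56 captures/s per 1e36 targets
print(f"{n_targets:.2e} 37Cl atoms -> {rate_per_s * 86400:.2f} captures/day")
```

The result, roughly one captured neutrino every two days in a 615-ton target, illustrates why radiochemical solar neutrino experiments are so challenging.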
Other neutrino experiments based on $^{71}$Ga and water targets also found deficits, though they varied from half to two-thirds. The discrepancy is known as the solar neutrino problem [11]. For a decade or more after the Homestake experiment published its first result, the most common explanation for the problem was that something was incorrect in the solar model. Theorists endeavored to amend solar models by incorporating additional effects into the Sun, such as magnetic fields, rapid rotation, and atypical metal abundances. However, as helioseismology was established, it became evident that there was little wrong with the solar model. Thus, the solution to the solar neutrino problem had to lie in some new physics of elementary particles.

Although nowadays we know the fundamental cause of the solar neutrino problem is neutrino oscillation, it is nevertheless worth mentioning a few historical attempts to address this problem. In an article titled “What cooks with solar neutrinos?” [12], Fowler proposed two explanations, involving revised experimental nuclear physics and changes to the theoretical description of solar structure and evolution. Cisneros, following a discussion with Wentz, investigated the extreme assumption that a neutrino could alter its helicity state when passing through the strong magnetic field inside the Sun and change to an antineutrino [13]. Bahcall, Cabibbo, and Yahil raised the question of whether neutrinos were stable particles and discussed the consequences of neutrinos having finite masses [14]. Freedman et al. suggested using a mass-spectrometric assay of the tiny induced concentration ($\sim 1.6 \times 10^{-7}$) of $^{205}$Pb in old thallium minerals to examine the dominant low-energy component of the solar neutrino flux [15]. Bahcall et al. proposed to use $^{71}$Ga to trap lower-energy solar neutrinos [16]. Haxton and Cowan proposed studying long-lived isotopes produced by solar neutrinos in the Earth’s crust to probe secular variations in the rate of energy production in the Sun’s core [17]. Faulkner and Gilliland hypothesized that a small mass fraction of weakly interacting massive particles (WIMPs) lurking in the core of the Sun could serve as very efficient energy conductors, which would change the core temperature and affect solar neutrino production [18]. Eventually, neutrino oscillation turned out to be the right answer to the solar neutrino problem. The theory of neutrino oscillation is introduced in the next section.

4 Solar neutrino oscillation

The idea of neutrino oscillation was first proposed by Pontecorvo in the 1950s, with the original consideration of $\nu \leftrightarrow \bar{\nu}$ oscillation, and was later developed to incorporate flavor mixing and the matter effect. The modern theory of neutrino oscillation is formulated for the purpose of computing flavor transitions among three neutrino flavors ($\nu_e$, $\nu_\mu$, $\nu_\tau$), applicable to neutrino oscillation in various circumstances including the Sun. In this section, we briefly review the modern theory of neutrino oscillation and a few well-known phenomena in solar neutrino oscillation.

4.1 The general formalism for neutrino oscillation

If neutrinos have masses, their mass eigenstates ($\nu_i$ with $i = 1, 2, 3, \cdots$) are not necessarily aligned with the flavor eigenstates ($\nu_\alpha$ with $\alpha = e, \mu, \tau, \cdots$) participating in weak interactions.
The two sets of eigenstates may differ by a unitary transformation, $\nu_\alpha = \sum_i U_{\alpha i} \nu_i$, where $U$ is the so-called PMNS matrix. Neutrino oscillation refers to the phenomenon that neutrinos in a specific flavor eigenstate, which is a quantum superposition of mass eigenstates, may change to another flavor eigenstate during propagation, due to the different dispersion relations of the mass eigenstates. Quantitatively, the evolution of neutrino flavors in the three-neutrino framework is governed by the following equation:

$$i \frac{d}{dL} \begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \end{pmatrix} = \left[ \frac{1}{2E_\nu} U \begin{pmatrix} m_1^2 & & \\ & m_2^2 & \\ & & m_3^2 \end{pmatrix} U^\dagger + \begin{pmatrix} V_e & & \\ & 0 & \\ & & 0 \end{pmatrix} \right] \begin{pmatrix} \nu_e \\ \nu_\mu \\ \nu_\tau \end{pmatrix}, \tag{1}$$

where $L$ denotes the propagation distance, $E_\nu$ is the neutrino energy, $m_{1,2,3}$ are the masses of $\nu_{1,2,3}$, and $V_e = \sqrt{2} G_F n_e$ is an effective potential with $G_F$ the Fermi constant and $n_e$ the electron number density of the medium. The effective potential $V_e$ accounts for the matter effect, also known as the Mikheyev-Smirnov-Wolfenstein (MSW) effect [19–21], on neutrino oscillation. It is caused by coherent forward scattering of neutrinos off medium particles. In principle, both electrons and nuclei contribute to the effective potential, but the contribution of the latter is flavor independent because it is caused by flavor-blind neutral-current interactions. Such a flavor-independent contribution does not affect oscillation and can be neglected.

4.2 The survival probability of solar electron neutrinos

Eq. (1) can be straightforwardly applied to solar neutrino oscillation. Both the solar and terrestrial matter effects can be readily taken into account by including $L$-dependent contributions to $V_e$. In practice, we are mainly concerned with the survival probability, $P_{ee} \equiv |\langle \nu_e(L) | \nu_e(0) \rangle|^2$, which is the probability that an electron neutrino produced at the source (denoted by $|\nu_e(0)\rangle$) still retains the original flavor (i.e. is found in the state $|\nu_e(L)\rangle$) after traveling through the distance $L$. Under certain approximations, one can compute \(P_{ee}\) analytically without numerically solving Eq. (1). Assuming that the evolution is adiabatic (which means \(V_e\) varies sufficiently slowly in the Sun compared to the oscillation wavelength) and that \(\nu_3\) is not involved in the oscillation (\(\nu_e\) mainly consists of \(\nu_1\) and \(\nu_2\)), \(P_{ee}\) is given by

\[ P_{ee} \approx \frac{1}{2} + \frac{1}{2} \cos 2\theta^m_{12} \cos 2\theta_{12}, \tag{2} \]

with

\[ \cos 2\theta^m_{12} \approx \frac{\cos 2\theta_{12} - \beta_{12}}{\sqrt{(\cos 2\theta_{12} - \beta_{12})^2 + \sin^2 2\theta_{12}}}, \quad \beta_{12} \equiv \frac{2V^0_e E_\nu}{\Delta m^2_{21}}, \tag{3} \]

where \(\theta_{12}\) is an angle quantifying the composition of \(\nu_e\) (\(\nu_e \approx \cos \theta_{12}\, \nu_1 + \sin \theta_{12}\, \nu_2\)), \(\Delta m^2_{21} \equiv m^2_2 - m^2_1\), and \(V^0_e\) denotes the value of \(V_e\) at the solar center. Taking specific values of \(V^0_e\) and \(\Delta m^2_{21}\), one finds \(\beta_{12} \approx E_\nu/(3\ \text{MeV})\). Note that \(P_{ee}\) is energy dependent. Figure 4 shows the variation of \(P_{ee}\) as a function of \(E_\nu\).
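A minimal numerical sketch of Eqs. (2)–(3) is given below. The input $\sin^2\theta_{12} \approx 0.31$ is an assumption of the sketch (roughly the fitted values quoted later in Sec. 5.2), and $\beta_{12} \approx E_\nu/(3\ \text{MeV})$ is taken from the text; the terrestrial matter effect is ignored.

```python
import math

SIN2_THETA12 = 0.31   # sin^2(theta_12); assumed value, roughly the fit in Sec. 5.2

def p_ee(e_nu_mev):
    """Adiabatic two-flavor survival probability, Eqs. (2)-(3)."""
    cos2t = 1.0 - 2.0 * SIN2_THETA12             # cos(2 theta_12)
    sin2t_sq = 1.0 - cos2t**2                    # sin^2(2 theta_12)
    beta = e_nu_mev / 3.0                        # beta_12 ~ E_nu / (3 MeV)
    cos2t_m = (cos2t - beta) / math.sqrt((cos2t - beta) ** 2 + sin2t_sq)
    return 0.5 + 0.5 * cos2t_m * cos2t

for e in (0.1, 0.4, 1.0, 3.0, 10.0, 15.0):
    print(f"E = {e:5.1f} MeV:  P_ee = {p_ee(e):.3f}")
# Approaches (1 + cos^2 2theta_12)/2 ~ 0.57 at low E
# and sin^2 theta_12 ~ 0.31 at high E, as discussed below.
```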
There are two interesting limits of the survival probability. At low energies (\(E_\nu \ll 3\ \text{MeV}\)), \(\beta_{12}\) in Eq. (3) can be neglected, leading to \(\theta^m_{12} \to \theta_{12}\) and \(P_{ee} \approx (1 + \cos^2 2\theta_{12})/2 \approx 0.57\). This is known as the vacuum limit, since \(P_{ee}\) in this limit is almost unaffected by the MSW effect. At high energies (\(E_\nu \gg 3\ \text{MeV}\)), Eq. (3) gives \(\cos 2\theta^m_{12} \approx -1\) and \(\theta^m_{12} \approx 90^\circ\), implying that the neutrino coming out of the Sun would be almost purely \(\nu_2\). In this high-energy limit, the survival probability is given by \(P_{ee} \approx (1 - \cos 2\theta_{12})/2 = \sin^2 \theta_{12} \approx 0.3\), corresponding to the gray dashed curve in Fig. 4. The transition between the two limits, occurring at around a few MeV, is referred to as the *upturn* in the literature.

When solar neutrinos arrive at a detector on the Earth at night, the neutrinos have also traversed a significant length of terrestrial matter. This causes the survival probability in the high-energy limit to be slightly higher than that without the Earth matter effect. Hence it is fair to say that “the Sun at night is brighter than in the day,” if the brightness refers to the luminosity of \(\nu_e\). More specifically, the day-night difference of \(P_{ee}\) can be estimated by

\[ \delta P_{ee} \equiv P_{ee}^{(\text{day})} - P_{ee}^{(\text{night})} \approx \frac{1}{2} \frac{\cos 2\theta^m_{12} \sin^2 2\theta_{12}\, \beta_\oplus}{\beta^2_\oplus - 2\beta_\oplus \cos 2\theta_{12} + 1}, \tag{4} \]

where \(\beta_\oplus\) is similar to \(\beta_{12}\) in Eq. (3) except that \(V^0_e\) is replaced by the average value of \(V_e\) in the Earth. In Fig. 4, the small difference between the orange dashed and blue solid lines represents this correction due to the Earth matter effect.

5 Experimental achievements and progress

Since the first solar neutrino experiment, many experiments have made important progress, advancing our understanding of the Sun and of neutrinos. In this section, we briefly review the achievements in measuring solar neutrino fluxes.

5.1 Measurements of solar neutrino fluxes

According to the standard solar model, solar neutrinos contain $^8$B, $^7$Be, pep, pp, CNO, and hep components. All of these components except the hep neutrinos have been observed. Below we briefly review the measurement of each component.

5.1.1 $^8$B neutrinos

Solar $^8$B neutrinos are produced from the decay of $^8$B, which is a product of the pp-chain reactions, as shown in Fig. 1. The maximum energy of $^8$B neutrinos is around 15 MeV. From Fig. 2 one can see that $^8$B neutrinos have the highest flux in the energy range from 3 to 15 MeV, which makes them the most successfully measured component. Currently, the Super-Kamiokande experiment provides the most precise flux measurement of $^8$B neutrinos using neutrino-electron scattering. Assuming all $^8$B neutrinos are in the $\nu_e$ state, the measured flux is $(2.336 \pm 0.011 \pm 0.043) \times 10^6$ cm$^{-2}$s$^{-1}$, which significantly deviates from the SSM prediction. According to the survival probability in Fig. 4, this deficit is expected from neutrino oscillation. Note that the above measurement is sensitive to the neutrino oscillation parameters. It is possible to measure the $^8$B neutrino flux without being affected by neutrino oscillation. This is achieved by the SNO experiment using heavy water as the target material.
As mentioned in Sec. 3, heavy water allows the pure NC process $\nu_\alpha + d \rightarrow \nu_\alpha + p + n$ to be exploited for neutrino detection. This process is flavor independent, so the result is insensitive to the uncertainties of the oscillation parameters. In addition, the CC process $\nu_e + d \rightarrow p + p + e^-$ also offers a $\nu_e$-only measurement of the flux. From 1999 to 2006, the SNO experiment underwent three phases, using distinct neutron detection methods to improve the flux measurement via the NC process and the $\nu_e$ flux measurement via the CC process. A combination of all three phases of data gives the final result $(5.25 \pm 0.16^{+0.11}_{-0.13}) \times 10^6$ cm$^{-2}$s$^{-1}$, with an uncertainty of around 3.8%. JUNO, an upcoming 20-kiloton liquid scintillator experiment, will explore the feasibility of using the isotope $^{13}$C, with a natural abundance of 1.1%, to detect solar neutrinos, and may provide another model-independent flux measurement.

5.1.2 $^7$Be neutrinos

$^7$Be neutrinos are produced via the electron-capture process $^7$Be + $e^- \rightarrow {}^7$Li + $\nu_e$. The energy spectrum contains two monochromatic lines, one at 0.861 MeV (90%) and the other at 0.383 MeV (10%). The experimental study of sub-MeV solar neutrinos is challenging. It requires both a clean environment with extremely low radioactivity and an excellent detector with high energy resolution. The Borexino experiment was designed to detect the 0.861 MeV line in a real-time detector with a low natural radioactive background and high energy resolution. So far, only the Borexino experiment has successfully measured the $^7$Be flux. The result is $\Phi(^7\text{Be}) = (4.99 \pm 0.11^{+0.06}_{-0.08}) \times 10^9$ cm$^{-2}$s$^{-1}$, in agreement with the expected value given in Table 2. The Borexino experiment employed neutrino-electron scattering as the detection process. This only allows the electron energy, instead of the neutrino energy, to be measured. In the sub-MeV energy range, the electron recoil spectrum of $^7$Be neutrinos overlaps with those of other solar neutrino components, so one has to perform a spectral fit to extract the $^7$Be component, assuming standard oscillation with known oscillation parameters. Note that the energy of $^7$Be neutrinos lies in the transition phase between vacuum oscillation and the matter effect—see Fig. 4. Since this part is known to be sensitive to new physics effects, it would be desirable for future solar neutrino experiments to measure the neutrino energy directly (i.e. to serve as neutrino energy spectrometers).

5.1.3 pp neutrinos

The fundamental thermonuclear reaction producing solar energy is the proton-proton reaction $p + p \rightarrow {}^2\text{H} + e^+ + \nu_e$, with a maximum neutrino energy of 0.42 MeV. The pp neutrino flux constitutes 91% of the total solar neutrino flux, making it the dominant component. However, it also has the lowest energy, which makes its detection particularly challenging, similar to the situation of $^7$Be neutrinos. The Borexino experiment has, thanks to the excellent performance of its liquid scintillator and the low background, successfully measured the flux of pp neutrinos: $\Phi(\text{pp}) = (6.1 \pm 0.5^{+0.3}_{-0.2}) \times 10^{10}$ cm$^{-2}$s$^{-1}$, which is consistent with the SSM predictions in Table 2. The measurement of pp neutrinos at Borexino is achieved by collecting neutrino-electron scattering data with a very low detection threshold, down to 0.165 MeV.
Due to the overlap with the other solar neutrino components, the pp measurement is likewise obtained via a spectral fit. So far, the precision of the pp neutrino measurement still cannot discriminate between the solar models listed in Table 2.

5.1.4 pep neutrinos

In proton-proton fusion, there is a small probability (0.24%) that an electron is captured by a proton before the fusion, leading to the pep process: $p + e^- + p \rightarrow {}^2\text{H} + \nu_e$. The two-body final state makes the neutrino monoenergetic, with its energy at 1.44 MeV. This energy lies in the transition phase from vacuum oscillation to the matter effect and, therefore, is important for testing various oscillation models. The first direct evidence of pep solar neutrinos was obtained by the Borexino experiment. Even though the pep line is monoenergetic, extracting its component from the electron energy spectrum is complicated, since its energy distribution overlaps with the one for the CNO neutrinos. In addition to assuming the Mikheyev-Smirnov-Wolfenstein large-mixing-angle solution to solar neutrino oscillations, one has to fix the CNO contribution, which depends on whether the high-metallicity (High-Z) or low-metallicity (Low-Z) SSM is used for the CNO neutrinos. Consequently, the flux is determined under the two models: $\Phi(\text{pep}) = (1.27 \pm 0.19^{+0.03}_{-0.2}) \times 10^8\ \text{cm}^{-2}\text{s}^{-1}$ for High-Z SSM metallicity and $\Phi(\text{pep}) = (1.39 \pm 0.19^{+0.08}_{-0.13}) \times 10^8\ \text{cm}^{-2}\text{s}^{-1}$ for Low-Z SSM metallicity, respectively. Both results are consistent with the expected values in Table 2.

5.1.5 CNO neutrinos

The sub-dominant CNO cycle involves fusion facilitated by the presence of carbon, nitrogen, and oxygen. The rates of these processes are dependent on temperature. The CNO cycle comprises two sub-cycles, CN and NO. At the relatively low temperature of the solar core, the CN sub-cycle is the primary process, accounting for about 99% and producing neutrinos from the beta decays of $^{13}$N and $^{15}$O. The fusion facilitated by carbon, nitrogen, and oxygen provides valuable information regarding the metallicity of the Sun’s core, specifically its abundance of elements heavier than helium. The solar metallicity assumed in standard solar models leads to significantly different predictions for the CNO neutrino flux, so a precise measurement serves as a crucial test of these models. The measurement was challenging due to the energy lying in the range of a few MeV, where radioactive and cosmogenic backgrounds are dominant. Thanks to the successful development of the correlated and integrated directionality technique for sub-MeV solar neutrinos, Borexino ultimately discovered the CNO neutrinos and measured the flux to be $\Phi(\text{CNO}) = (6.7^{+1.2}_{-0.8}) \times 10^8\ \text{cm}^{-2}\text{s}^{-1}$. This measurement aligns with the expected value provided in Table 2 and is consistent with high-metallicity standard solar models. When combined with the flux measurements of $^8\text{B}$ and $^7\text{Be}$, the low-metallicity SSM is disfavored, offering direct experimental access to the study of the primary mechanism for the conversion of hydrogen into helium in the Universe.

5.1.6 hep neutrinos

The last and most difficult process through which neutrinos are produced in the Sun is the fusion of protons and helium nuclei: $^3\text{He} + p \rightarrow {}^4\text{He} + e^+ + \nu_e$. This process is known as the hep branch of proton-proton fusion.
The neutrinos created by this process are called hep neutrinos. They have the highest energy (18.77 MeV) among all the solar neutrinos. Due to their tiny flux and an end-point energy only slightly above that of the $^8\text{B}$ neutrinos, the search for hep neutrinos is difficult. They are now the only solar neutrino component that experiments have not yet detected. Attempts have been made in both the Super-Kamiokande and the SNO experiments, but no evidence has been found. The current strictest limit is from the SNO experiment: $\Phi(\text{hep}) < 2.3 \times 10^4\ \text{cm}^{-2}\text{s}^{-1}$ at 90% C.L. Based on a comparison with the expected value provided in Table 2, it is evident that significant improvements are necessary to detect them. Discovering the hep neutrinos would greatly impact astroparticle physics, particularly our comprehension of stellar evolution and the physics of massive neutrinos, as this process produces the highest-energy neutrinos of any reaction in the Sun’s fusion core.

5.2 Solar neutrino oscillation parameters

Two types of solutions exist for the observed neutrino deficit. Either the solar neutrino oscillates in vacuum during its journey to the Earth, or it has already been converted within the Sun due to matter effects. In the vacuum oscillation solution, as previously mentioned, considering the observed ratio of the $^8\text{B}$ neutrino flux to the SSM expectation, which is about 0.45, one can use the vacuum oscillation formula to estimate the required neutrino mass-squared difference, around $10^{-10}\ \text{eV}^2$. However, assuming CPT invariance, this value is inconsistent with the observation of the KamLAND experiment using $\bar{\nu}_e$’s from commercial nuclear reactors in a similar energy range but with a shorter distance, about 180 km—so short that the oscillation can be treated as in vacuum. From a study of the energy spectrum, the KamLAND experiment yields $\sin^2\theta_{12} = 0.325^{+0.062}_{-0.054}$ and $\Delta m^2_{21} = (7.54^{+0.19}_{-0.18}) \times 10^{-5}\ \text{eV}^2$. Consequently, vacuum oscillation can be excluded as the explanation of the deficit observed in the high-energy $^8\text{B}$ neutrino flux.

The other appealing solution is conversion in matter via the MSW effect. In this scenario, coherent forward scattering of electron neutrinos off electrons in the high-density solar interior can cause the almost complete conversion of $\nu_e$’s to $\nu_\mu$’s and $\nu_\tau$’s. The Super-Kamiokande and the SNO experiments use solar neutrino data to provide a combined fit to the three-flavor neutrino oscillation parameters, which gives $\sin^2\theta_{12} = 0.305 \pm 0.014$ and $\Delta m^2_{21} = (6.10^{+1.04}_{-0.75}) \times 10^{-5}\ \text{eV}^2$. These results can be compared with those from the KamLAND experiment. Both results are consistent, though the solar experiments have a larger error in the mass-squared difference due to the lack of data covering the few-MeV region, where solar neutrinos undergo the transition from vacuum oscillation to the matter effect. It is expected that the JUNO experiment will significantly improve the precision using reactor antineutrinos.

5.3 Terrestrial matter effects

The cleanest and most direct test of terrestrial matter effects on neutrino oscillation lies in the comparison of the daytime and the nighttime solar neutrino interaction rates.
In this comparison, the solar zenith angle, defined as the angle between the vector from the solar position to the solar neutrino event position and the vertical detector ($z$) axis, governs the path length and density of terrestrial matter through which the neutrinos pass and, thereby, the oscillation probability and the observed interaction rate. An increase in the nighttime interaction rate implies a regeneration of electron-flavor neutrinos. A day/night asymmetry parameter is defined as

$$A_{D/N} = 2\, \frac{\Phi_{\text{day}} - \Phi_{\text{night}}}{\Phi_{\text{day}} + \Phi_{\text{night}}}.$$

The expected asymmetry also depends on the oscillation parameters and the energy range within which the flux is measured. With the current oscillation parameters, the day/night asymmetry is anticipated to be at the few-percent level in the MeV-and-above region, whereas essentially no asymmetry is expected in the sub-MeV region. The SNO experiment employs the $^8$B neutrino events from the ES, CC, and NC channels to measure the day/night asymmetry, with a result consistent with zero. The Borexino experiment utilizes the $^7$Be neutrino events to measure the asymmetry, and its result is also consistent with zero. By contrast, the Super-Kamiokande experiment uses its large dataset of $^8$B neutrino events to determine the day/night asymmetry parameter, giving $A_{D/N}^{SK} = -0.0286 \pm 0.0085 \pm 0.0032$, which deviates from zero by 3.2σ, providing evidence for the existence of Earth matter effects on solar neutrino oscillation, namely the terrestrial matter effects. It is expected that the successor of the Super-Kamiokande experiment, the Hyper-Kamiokande experiment, will be capable of increasing the sensitivity up to 5σ and will eventually discover the terrestrial matter effects.

5.4 Solar activity and seasonal effect

Owing to the high energies of the $^8$B neutrino events, experiments can offer a clean, real-time, high-statistics flux measurement for studying the potential correlation between the annual sunspot activity and the $^8$B neutrino flux. The measurements reveal a constant solar neutrino flux emitted by the Sun, at least for $^8$B neutrinos. This implies that the periodic activities of the Sun, such as the rotation within the Sun or the fluctuation of the sunspot numbers, have no impact on the thermonuclear reactions at the core of the Sun. The seasonal variation of the $^8$B flux has also been utilized to test the inverse-square law in the Sun-Earth distance, and no deviation is observed.

6 Beyond the standard framework

When the solar neutrino problem appeared, many hypotheses were proposed to resolve it. After tremendous experimental efforts, the standard framework of neutrino oscillation has been established and accepted as the only successful resolution. Nevertheless, new physics beyond the standard framework might lurk in solar neutrinos and could be revealed by precision measurements of solar neutrino fluxes. Below, we briefly mention a few such possibilities.

6.1 Sterile neutrinos

Sterile neutrinos, as suggested by the name, are neutrino-like particles possessing two features: (i) they do not participate in SM gauge interactions, and (ii) they have mass mixing with the SM neutrinos. By contrast, the SM neutrinos in this context are often referred to as active neutrinos.
In the presence of light sterile neutrinos, solar neutrino oscillation may contain new oscillation modes, and the corresponding phenomenology has been widely discussed in the literature. In particular, it has been shown that sterile neutrinos could modify the vacuum-matter transition (known as the upturn) of the survival probability in Fig. 4, causing a significant dip in the curve at a few MeV. Therefore, precision measurements of the solar neutrino spectrum in this energy range are crucial to sterile neutrino searches.

6.2 Non-standard interactions

In addition to the weak interactions predicted by the SM, many models proposed for massive neutrinos also suggest the existence of new neutrino interactions. A large collection of them can be parametrized in the framework of Non-Standard Interactions (NSIs). NSIs are four-fermion effective interactions similar to the Fermi interactions but with more general flavor structures. NSIs have two effects on solar neutrinos: (i) they can affect neutrino propagation in the solar medium, and (ii) they may modify the cross section of neutrino scattering at detection. The first effect was already considered by Wolfenstein in his seminal paper on the MSW effect, where it was suggested that even if neutrinos were massless, neutrino oscillation could still be induced by flavor off-diagonal NSIs. The second effect directly alters event rates in detectors. Therefore, when the relevant oscillation parameters are well determined, precision measurements of solar neutrino event rates can be used to set stringent constraints on NSI parameters.

6.3 Other new physics scenarios

In addition, many other new physics scenarios, such as neutrino magnetic moments, spin-flavor precession, neutrino decay, light mediators, and WIMPs, could potentially influence solar neutrino observations. Interested readers are referred to Ref. [1] for a more comprehensive review of this subject.

7 Summary and outlook

The study of solar neutrinos began with the unexpected deficit in observations known as the solar neutrino problem. This issue prompted numerous hypotheses and tests, ultimately culminating in the groundbreaking discovery of neutrino oscillation. After more than half a century of research, the major solar neutrino fluxes have been accurately measured and are in good agreement with the predictions of the standard solar model combined with the interpretation of neutrino oscillation. However, solar neutrino physics is far from over. The standard solar models still face unresolved challenges, such as the metallicity problem, and new physics may lurk within the properties of neutrinos. With the advent of new experiments, solar neutrino research is entering an exciting new era. Precision measurements of the solar neutrino energy spectrum, fluxes, and neutrino-mixing parameters will provide deeper insights not only into solar physics itself but also into particle physics. This field continues to serve as a unique probe for new physics. In the foreseeable future, next-generation experiments such as Hyper-Kamiokande, DUNE, and JUNO are anticipated to significantly enhance current measurements and present exciting opportunities for groundbreaking discoveries. As history has demonstrated, new observations of our nearest star may lead to profound advancements in our understanding of the fundamental laws of nature.

Acknowledgments

This work is supported in part by the National Natural Science Foundation of China under grant No. 12141501 and No.
12127808, and also by the CAS Project for Young Scientists in Basic Research (YSBR-099).

References

[1] Xun-Jie Xu, Zhe Wang, Shaomin Chen, Solar neutrino physics, Prog. Part. Nucl. Phys. 131 (2023) 104043, doi:10.1016/j.ppnp.2023.104043, 2209.14832.
[2] John N. Bahcall, Neta A. Bahcall, G. Shaviv, Present status of the theoretical predictions for the Cl-37 solar neutrino experiment, Phys. Rev. Lett. 20 (1968) 1209–1212, doi:10.1103/PhysRevLett.20.1209.
[3] John N. Bahcall, Aldo M. Serenelli, Sarbani Basu, 10,000 standard solar models: a Monte Carlo simulation, Astrophys. J. Suppl. 165 (2006) 400–431, doi:10.1086/504043, astro-ph/0511337.
[4] Núria Vinyoles, Aldo M. Serenelli, Francesco L. Villante, Sarbani Basu, Johannes Bergström, M. C. Gonzalez-Garcia, Michele Maltoni, Carlos Peña Garay, Ningqiang Song, A New Generation of Standard Solar Models, Astrophys. J. 835 (2) (2017) 202, doi:10.3847/1538-4357/835/2/202, 1611.09867.
[5] Gabriel D. Orebi Gann, Kai Zuber, Daniel Bemmerer, Aldo Serenelli, The Future of Solar Neutrinos, Ann. Rev. Nucl. Part. Sci. 71 (2021) 491–528, doi:10.1146/annurev-nucl-011921-061243, 2107.08613.
[6] John N. Bahcall, Neutrino Astrophysics, Cambridge University Press, 1989, ISBN 0-521-37975-X.
[7] R. S. Raghavan, New prospects for real time spectroscopy of low-energy electron neutrinos from the sun, Phys. Rev. Lett. 78 (1997) 3618–3621, doi:10.1103/PhysRevLett.78.3618.
[8] Raymond Davis, Jr., Don S. Harmer, Kenneth C. Hoffman, Search for neutrinos from the sun, Phys. Rev. Lett. 20 (1968) 1205–1209, doi:10.1103/PhysRevLett.20.1205.
[9] B. T. Cleveland, Timothy Daily, Raymond Davis, Jr., James R. Distel, Kenneth Lande, C. K. Lee, Paul S. Wildenhain, Jack Ullman, Measurement of the solar electron neutrino flux with the Homestake chlorine detector, Astrophys. J. 496 (1998) 505–526, doi:10.1086/305343.
[10] John N. Bahcall, Roger K. Ulrich, Solar Models, Neutrino Experiments and Helioseismology, Rev. Mod. Phys. 60 (1988) 297–372, doi:10.1103/RevModPhys.60.297.
[11] John N. Bahcall, R. Davis, Solar Neutrinos - a Scientific Puzzle, Science 191 (1976) 264–267, doi:10.1126/science.191.4224.264.
[12] William A. Fowler, What Cooks with Solar Neutrinos?, Nature 238 (1972) 24–26, doi:10.1038/238024a0.
[13] Arturo Cisneros, Effect of neutrino magnetic moment on solar neutrino observations, Astrophys. Space Sci. 10 (1971) 87–92, doi:10.1007/BF00654607.
[14] John N. Bahcall, N. Cabibbo, A. Yahil, Are neutrinos stable particles?, Phys. Rev. Lett. 28 (1972) 316–318, doi:10.1103/PhysRevLett.28.316.
[15] M. S. Freedman, C. M. Stevens, E. P. Horwitz, L. H. Fuchs, J. L. Lerner, L. S. Goodman, W. J. Childs, J. Hessler, Solar neutrinos: Proposal for a new test, Science 193 (1976) 1117–1118.
[16] John N. Bahcall, et al., Proposed Solar Neutrino Experiment Using Ga-71, Phys. Rev. Lett. 40 (1978) 1351–1354, doi:10.1103/PhysRevLett.40.1351.
[17] W. C. Haxton, G. A. Cowan, Solar Neutrino Production of Long-Lived Isotopes and Secular Variations in the Sun, Science 210 (1980) 897–899, doi:10.1126/science.210.4472.897.
[18] John Faulkner, Ronald L. Gilliland, Weakly interacting, massive particles and the solar neutrino flux, Astrophys. J. 299 (1985) 994–1000, doi:10.1086/163766.
[19] L. Wolfenstein, Neutrino Oscillations in Matter, Phys. Rev. D17 (1978) 2369–2374, doi:10.1103/PhysRevD.17.2369.
[20] S. P. Mikheev, A. Yu. Smirnov, Resonance Amplification of Oscillations in Matter and Spectroscopy of Solar Neutrinos, Sov. J. Nucl. Phys. 42 (1985) 913–917, [Yad. Fiz. 42, 1441 (1985)].
[21] S.
P. Mikheev, A. Yu. Smirnov, Resonant amplification of neutrino oscillations in matter and solar neutrino spectroscopy, Nuovo Cim. C9 (1986) 17–26, doi:10.1007/BF02508049.
Is It Time to Draw the Line?: The Impact of Redistricting on Competition in State House Elections

DAVID LUBLIN and MICHAEL P. McDONALD

Observers of elections to the House of Representatives have decried the decline of competition for U.S. House seats. Sam Hirsch notes that the number of incumbents reelected by over 20 points in the post-reapportionment election of 2002 was much higher than the average of other recent post-reapportionment elections.\(^1\) Noted political scientists Bernard Grofman and Gary Jacobson agree with this assessment. They show that the number of competitive seats has declined over the past 40 years when measured by the number of seats won by less than either ten or twenty percent. Like Hirsch, they note that congressional competition in 2002 was exceptionally low for a post-reapportionment election. Grofman and Jacobson suggest competition will reach record lows later in the decade if the pattern of declining competition following redistricting in the 1980s and 1990s is repeated.\(^2\)

Much of the blame for the decline in congressional competition has been attached to the partisan and incumbent-protection redistricting processes and to racial redistricting.\(^3\) The focus on congressional elections is natural due to the importance of the federal legislature, but scholars ought to study competition in state legislative elections more closely. Partisan and incumbent-protection gerrymandering and racial redistricting also occur during the redrawing of state legislative maps. If these factors explain declining competition in congressional elections, their presence should also be associated with lower levels of competition in state legislative elections. Moreover, redistricting arguably has even greater consequences in state legislative elections. Congressional redistricting occurs on a state-by-state basis, so no single redistricting authority controls the national process. But power over redistricting at the state level influences the shape of the entire state legislature. The potential impact of partisan gerrymandering is therefore much greater at the state than at the federal level.

This paper takes a first cut at examining competition outside of the congressional election arena by exploring the aggregate level of competition in lower-chamber state legislative elections in 37 states in 2000 and 2004. While this study is only a first step toward exploring competition in state legislative races, it should illuminate the level of competition in these elections and show whether theories of the impact of redistricting on electoral competition hold water outside of the congressional arena from which they were abstracted. Before turning to the statistical

---

\(^1\) Sam Hirsch, “The United States House of Unrepresentatives: What Went Wrong in the Latest Round of Congressional Redistricting,” *Election Law Journal* 2:2 (2003): 182–84.

\(^2\) Bernard Grofman and Gary Jacobson, *Vieth v. Jubelirer*, Brief as *Amici Curiae* in Support of Neither Party, No. 02-1580 (August 2003).

\(^3\) The claim that redistricting has reduced competitive elections appears in editorial pages across the political spectrum (e.g., *The Wall Street Journal*, “No Contest,” Nov. 12, 2004; *The Washington Post*, “The Partisan Fix,” September 14, 2003: B06.)
analysis of the level of competition in state legislative elections and explanations for variations among the states, the article briefly explores how partisan gerrymandering and racial redistricting may undermine competition.

PARTISAN GERRYMANDERING, RACIAL REDISTRICTING, AND ELECTORAL COMPETITION

Much of the blame for the decline in competition in congressional elections has been placed on the highly political process used to conduct redistricting in most states. Partisan gerrymanders, such as those enacted by Democrats in Maryland and Republicans in Pennsylvania, attempt to pack many minority party voters into a few districts and to limit concentrations of minority party voters outside these districts to make it possible for the majority party to win the remaining districts as easily as possible. The ideal partisan gerrymander packs as many minority party voters into as few districts as possible while guaranteeing the majority party a solid, but not overwhelming, majority in the other districts. The majority party must be careful to balance the distribution of voters most efficiently against the desire to assure that their candidate can carry the district solidly even if there is an electoral swing against the party.\(^4\) If the majority party spreads its voters too thinly, overestimates its support, or if there is an electoral swing against it, the plan may not work as intended.

For example, Indiana Republicans crafted a plan designed to give their party a majority of the state’s congressional seats throughout the 1980s. The plan worked reasonably well at first. The Indiana congressional delegation went from 6-5 Democratic in 1980 to 6-4 Republican in 1982. However, Indiana Democrats ran strong candidates and gained support over the decade. By 1990, the delegation was 8-2 Democratic—hardly the intent of the Republicans who pushed the plan. The failure of the partisan gerrymander had positive consequences for electoral competition. Indiana saw a number of congressional seats change hands during the 1980s despite the efforts of GOP mapmakers to secure a safe majority of seats for their party. Forty percent of Indiana’s fifty congressional races between 1982 and 1990 were won with less than 60 percent of the vote and 26 percent were won by less than 55 percent.\(^5\)

Democrats may miscalculate when they draw redistricting plans, too. Prior to the 2002 elections, Democrats hoped to win seven out of thirteen seats in Georgia’s newly expanded congressional delegation. However, Georgia Republicans increased their support in 2002, winning the governor’s mansion for the first time since Reconstruction. Republicans managed to edge out Democrats in two of the “Democratic” seats. Democrats won only five seats and one of their five winners gained his seat by only a one percent margin over his Republican opponent.\(^6\)

Due to failed partisan gerrymanders, like those in Indiana and Georgia, some believe partisan gerrymandering increases electoral competition. Burnham argues that: “... partisan gerrymandering is the best producer of competitive districts”\(^7\) because parties may act sub-optimally in their quest to maximize seats, inadvertently shave their margins too close, and thereby create competitive districts. Burnham offers the unsuccessful Republican gerrymander of New York’s congressional seats in 1961 as an example. However, Burnham notes that “...
comparable efforts have been resoundingly successful,”\(^8\) and provides analysis of successful partisan gerrymanders in eight other states.

---

\(^4\) Guillermo Owen and Bernard Grofman, “Optimal Partisan Gerrymandering,” *Political Geography Quarterly*, 7: 1(1988) 5–22; Bruce E. Cain, *The Reapportionment Puzzle* (Berkeley, CA: University of California Press, 1984).

\(^5\) *Congressional Elections: 1946–1998* (Washington: Congressional Quarterly, 1998): 283, 288, 293, 298, 304.

\(^6\) Democrats defeated one Republican incumbent in 2004. See Brian Nutting and H. Amy Stern, eds., *CQ’s Politics in America 2002: The 107th Congress* (Washington: Congressional Quarterly, 2001): 254–81; David Hawkings and Brian Nutting, eds., *CQ’s Politics in America 2004: The 108th Congress* (Washington: Congressional Quarterly, 2003): 265–92.

\(^7\) Walter Dean Burnham, “Congressional Representation: Theory and Practice of Drawing the Districts” in *Reapportionment in the 1970s*, ed. Nelson W. Polsby (Berkeley, CA: University of California Press, 1971): 277.

\(^8\) *Id.* at 276.

Both Democrats and Republicans have often augmented their number of seats through partisan gerrymandering and limited electoral competition in the process. Arizona Republicans corralled Democrats into a single district in the 1980s by linking the Democratic portions of Phoenix and Tucson with Yuma across hundreds of miles of empty desert. Except for one narrow victory in 1982, Democrats failed to win any congressional elections outside of Arizona’s Second District from 1982 through 1990. In all but three of the 25 elections held during this period, the winner’s margin of victory exceeded 20 percent.\(^9\)

Maryland Democrats managed to shift the partisan makeup of their state’s congressional delegation from a 4-4 even split in 2000 to 6-2 Democratic in 2002 by manipulating the boundaries of the state’s districts. The new map added many more Democrats to the Second District even as it removed the home of incumbent Republican Rep. Robert Ehrlich, spurring him to run successfully for governor but leaving the district open for a Democratic victory in 2002.\(^{10}\) Elsewhere, the new map removed favorable Republican territory and added more Democratic bastions to the already strongly Democratic Eighth District in a successful effort to defeat incumbent Republican Connie Morella. Competition in Maryland’s congressional elections was quite weak in 2004. Both new Democratic incumbents had no problem winning reelection. Indeed, no Maryland congressional incumbent won by less than 30 percent of the major-party vote.

Competition may be reduced to a minimum when the two parties reach a bipartisan agreement to divide seats, drawing a plan designed to provide electoral safety for incumbents of both parties. California and Illinois adopted bipartisan gerrymanders for their congressional districts during the post-2000 Census round of congressional redistricting. As a result, the percentage of California congressional districts won by less than 20 points dropped from 27 percent in 2000 to 6 percent in 2002 and 4 percent in 2004.\(^{11}\) The share of Illinois congressional races won by less than 20 percent also declined, though less spectacularly, from 25 percent in 2000 to 11 percent in 2002 and 16 percent in 2004. California also adopted incumbent protection gerrymanders for the state legislature.\(^{12}\) The share of competitive seats dropped, though not by as much as the share of marginal congressional seats.
In 2000, 20 percent of the 80 Assembly seats were won by less than 20 points, and 10 percent were won by less than 10 points.\(^{13}\) The share of seats won by less than 20 points fell to 11 percent in 2002 before rising again to 16 percent in 2004. The share of districts where the winner had a 10-point margin of victory dropped to 5 percent in 2002 and 6 percent in 2004.

The presence of two chambers in all state legislatures, except Nebraska, makes possible the adoption of a different sort of incumbent protection gerrymander, where a party cedes control over redistricting in one chamber in exchange for control over the other. These trades tend to occur when each party has a pre-redistricting majority in a chamber, thus splitting control of the redistricting process. The majority party leadership and membership in each chamber may prefer maintaining their majority in one chamber of the legislature to the uncertain chance of gaining a majority in both through court action resulting from gridlock. The leadership in each chamber has a strong incentive to make a deal with its opposite-party counterpart in the other chamber as legislative leaders in both chambers stand to lose majority status and powerful institutional positions such as committee chairs. The level of control ceded to the other party can vary from state to state based upon factors other than split control. For example, governors who have veto power over redistricting plans can gain leverage even if the leaders of each house of the divided legislature have agreed to split the redistricting spoils through a cross-chamber logroll. In Indiana, Kentucky, Nevada, and New York, lower house Democrats drew maps for their chamber while upper house Republicans drew maps for theirs. In New York, decades of cross-chamber deals between the powerful Democratic House Speaker and Republican Senate President have almost become an Empire State tradition. Dividing the spoils over redistricting has likely aided the successful efforts by House Democrats and Senate Republicans to maintain control of their respective chambers over the past several decades. Not all such situations resulted in bipartisan logrolls. New Mexico legislators could not broker a cross-chamber compromise for the lower house and redistricting fell to a court. Not all states use the legislative process for state legislative redistricting.

---

\(^9\) The winner’s margin of victory was greater than 10 percent in all but two elections. See *Congressional Elections: 1946–1996*: 282, 287, 292, 297, 303.

\(^{10}\) As it existed at the time of the 2000 election, 36 percent of the residents of Maryland’s Eighth District voted for President Bush. The new map adopted before the 2002 election dropped the percentage of Bush supporters within the Eighth District to 31 percent. The share of Bush voters within Maryland’s Second District similarly fell from 55 to 41 percent. See Brian Nutting and H. Amy Stern *supra* note 6: 448; David Hawkings and Brian Nutting *supra* note 6: 454.

\(^{11}\) In 2000, the percentage of districts won by 20 points or less of the major-party vote, rather than the total vote figures used in the text, was 21 percent.

\(^{12}\) In Table I, the California plan is labeled a Democratic plan because it also entrenched the legislature’s Democratic majority.

\(^{13}\) The share of Assembly seats won by less than 20 percent of the major-party vote, rather than the total vote used in the text, was 18 percent.
Nineteen states use a commission at some stage of the redistricting process, either as a primary authority or as a backup to the legislative process if stalemate occurs. McDonald broadly characterizes these institutions based on their membership and rules as either producing partisan or incumbent protection gerrymanders. In either case, the result may be reduced electoral competition. Two exceptions are Arizona and Iowa, which use a primarily nonpartisan process for redistricting. Racial redistricting may also undercut electoral competition. The creation or protection of new African-American or Latino majority districts may aid, intentionally or not, the adoption of an anti-competitive plan favorable to Republicans. Since most African Americans and Latinos vote heavily Democratic, majority-minority districts are usually uncompetitive, heavily Democratic bailiwicks. The placement of so many Democratic voters into a few majority-minority districts may greatly aid Republican efforts elsewhere. In short, racial redistricting has the potential to force the creation of greater numbers of safe minority Democratic districts and safe white Republican districts than would otherwise exist. THE DATASET Our examination of competition in state legislative elections includes data from State House elections from 2000 and 2004 in 37 states. Almost all of the 37 states use single-member districts in order to elect members of the State House. Washington State utilizes two-member districts with a numbered post system for each seat, so its elections are easily compared with those in states with single-member districts. During the 2000 election, Arkansas utilized single-member districts to elect the State House except for one multimember district with three representatives. These representatives were also elected by a numbered-post system so the 2000 election results can be compared to --- 14 Michael P. McDonald, “A Comparative Analysis of Redistricting Institutions in the United States, 2001–02,” *State Politics and Policy Quarterly* 4: 4 (Winter 2004): 371–95. 15 One important exception is the Latino population of Florida. Florida Latinos, especially Cuban Americans, appear more likely to vote Republican than Latinos elsewhere in the country. 16 David Lublin, *The Paradox of Representation: Racial Gerrymandering and Minority Interests in Congress* (Princeton University Press, 1997): 103–19; David Lublin, *The Republican South: Democratization and Partisan Change* (Princeton University Press, 2004): 99–115; David Lublin and D. Stephen Voss, “The Partisan Impact of Voting Rights Law,” *Stanford Law Review* 50 (February 1998): 765–77; David Lublin and D. Stephen Voss, “Racial Redistricting and Realignment in Southern State Legislatures,” *American Journal of Political Science* 4 (October 2000): 792–810; Carol M. Swain, *Black Faces, Black Interests: The Representation of African Americans in Congress*, Enlarged Edition (Harvard University Press, 1995): 197–206; Charles Cameron, David Epstein, and Sharyn O’Halloran, “Do Majority-Minority Districts Maximize Substantive Representation?,” *American Political Science Review* 90: 4 (December 1996): 794–812; David Lublin and D. Stephen Voss, “The Missing Middle: Why Median-Voter Theory Can’t Save Democrats from Singing the Boll-Weevil Blues,” *Journal of Politics* 65: 1 (March 2003). But see John R. Petrocik and Scott W. 
Thirteen states are not included in the analysis. Nebraska’s unicameral legislature, called the Senate, is elected on a nonpartisan basis, so its electoral contests are quite different from the partisan contests held in other states. Arizona, Maryland, New Hampshire, North Dakota, South Dakota, Vermont, and West Virginia used multimember districts or a mixture of single-member and multimember districts for State House elections in 2000 and 2004. Alabama, Louisiana, Mississippi, New Jersey, and Virginia did not hold state legislative elections in 2004 and are excluded from the analysis. Election results from Wisconsin were not available for 2000, so it is excluded from discussions of the 2000 elections and from comparisons of results from 2000 and 2004. North Carolina used multimember districts in 2000 but was forced by a state court decision to switch to single-member districts before the 2004 election,\textsuperscript{18} so it is likewise excluded from analyses involving the 2000 elections. The process of redistricting may spur greater competition over the short term even if the partisan composition of a district remains unchanged. Redistricting frequently disrupts existing links between incumbent representatives and their constituents. Representatives may gain unfamiliar constituents from outside their districts and lose familiar constituents to other districts. Much of the incumbency advantage in congressional elections may result from voters’ greater knowledge of incumbents than of challengers.\textsuperscript{19} As a result, redistricting may weaken incumbents by reducing the share of constituents who are familiar with them. Following a redistricting, strong challengers to incumbents are more likely to emerge.\textsuperscript{20} However, the weakening of the incumbency advantage will likely be temporary, as the representative becomes more familiar to his or her new constituents. If one seeks to discern the potential impacts of partisan and incumbent protection gerrymandering on competition and electoral outcomes, one must allow for the possibility that electoral competition increased in the election held immediately after redistricting relative to the one prior to redistricting. Competition may have increased simply due to the severing of established ties between representatives and their constituents. Elections held after the first post-redistricting elections are less likely to exhibit this effect, as new incumbents establish ties within their new districts. While the effect of scrambling constituents on the incumbency advantage may not completely dissipate by the time of the second post-redistricting election, it is likely reduced, as incumbents have had greater opportunity to build name recognition in the new portions of their districts. In almost all of the 37 states included here, the 2004 election was the second scheduled general election after the regular decennial redistricting. Georgia, Maine, Montana, and North Carolina constitute the exceptions.
According to their state constitutions, Maine and Montana redistrict in years ending in “3.” Georgia adopted a new map for the 2002 elections, but it was successfully challenged in federal court in 2004.\textsuperscript{21} The court imposed its own new map before the 2004 elections after the state legislature failed to meet the court’s deadline to enact a new, more acceptable plan.\textsuperscript{22} North Carolina drew a new map for 2004 after its highest court ruled that the plan used for the 2002 elections violated the state constitution through its use of multimember districts and its unnecessary division of counties.\textsuperscript{23}

\textsuperscript{17} Candidates for Arkansas State House Districts 12, 13, and 14 were combined into a single district. While all State House candidates ran at-large in the district, candidates had to declare in which of the three districts they sought election, in a manner parallel to the numbered-post system used in Washington State. William Lilley III, Laurence J. DeFranco, and Mark F. Bernstein, \textit{The Almanac of State Legislatures}, Second Edition (Washington: Congressional Quarterly, 1998): 21–6. \textsuperscript{18} \textit{Stephenson v. Bartlett}, Supreme Court of North Carolina, No. 94PA02-2 (16 July 2003). \textsuperscript{19} Gary C. Jacobson, \textit{The Politics of Congressional Elections}, Fifth Edition (New York: Addison Wesley Longman, 2001): 110–21; Scott W. Desposato and John R. Petrocik, “The Variable Incumbency Advantage: New Voters, Redistricting, and the Personal Vote,” \textit{American Journal of Political Science} 47 (January 2003): 18–32; Petrocik and Desposato \textit{supra} note 16. \textsuperscript{20} Marc J. Hetherington, Bruce A. Larson, and Suzanne Globetti, “The Redistricting Cycle and Strategic Candidate Decisions in U.S. House Races,” \textit{Journal of Politics} 65 (2003): 1221–35. \textsuperscript{21} \textit{Larios v. Cox}, 300 F. Supp. 2d 1320 (N.D. Ga. Feb. 10, 2004). \textsuperscript{22} \textit{Larios v. Cox}, 306 F. Supp. 2d 1212 (Mar. 1, 2004); \textit{Larios v. Cox}, 306 F. Supp. 2d 1214 (Mar. 2, 2004); \textit{Larios v. Cox}, No. 1:03-CV-693-CAP (N.D. Ga. Mar. 15, 2004); \textit{Larios v. Cox}, 300 F. Supp. 2d 1320, 2004 WL 867768 (N.D. Ga. Apr. 15, 2004).

**THE LEVEL OF COMPETITION IN STATE HOUSE ELECTIONS**

We measure electoral competitiveness in two ways: (1) “contestedness,” or the share of seats with both a Democratic and a Republican candidate, and (2) “competitiveness,” or the share of seats in which the winner received less than 60 percent of the major-party vote.\textsuperscript{24} The presence of more than one major-party candidate is crucial to competition and the idea of democratic choice. Democrats and Republicans dominate state legislative elections. In the states examined here, independents won three, or 0.08 percent, of races held in 2000 and two, or 0.05 percent, of races held in 2004. Moreover, voters are often familiar with the general philosophical differences between the two major parties that have dominated American politics since the Civil War. The presence of a candidate from each major party on the ballot therefore adds greatly to the ability of a voter to express a meaningful choice. Candidates are also more likely to lose when they have a major-party opponent on the ballot. The percentage of the vote received by the winner in a contested election will be smaller than in an uncontested race, assuming that the loser receives at least one vote.
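For concreteness, both measures can be computed mechanically from district-level returns. The following is a minimal sketch of such a computation, under our own assumptions: the per-district vote pairs are hypothetical toy values, not the authors’ code or data.

```python
# Sketch: "contestedness" and "competitiveness" for one state's House elections.
# Each district is a (dem_votes, rep_votes) pair; the data below are toy values.
districts = [(5200, 4800), (7000, 0), (3100, 6900), (4950, 5050)]

# Contested: both major parties fielded a candidate.
contested = [d for d in districts if d[0] > 0 and d[1] > 0]
pct_contested = 100 * len(contested) / len(districts)

# Marginal: the winner received less than 60 percent of the major-party vote
# (an uncontested seat is won with 100 percent, hence never marginal).
marginal = [d for d in contested if max(d) / sum(d) < 0.60]
pct_marginal = 100 * len(marginal) / len(districts)

print(f"contested: {pct_contested:.0f}%, marginal: {pct_marginal:.0f}%")  # 75%, 50%
```

Applied to the full set of returns for each state, computations of this kind yield the measures reported in Table 1.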
Our measure of competitiveness captures the proportion of seats that are closely contested, as reflected in the vote percentage of the winning candidate. Elections in which the victor wins by a relatively small amount are more competitive than elections won by a large amount. Following congressional elections scholar Gary Jacobson’s past practice, we use 60 percent as a threshold for determining which seats are marginal in State House elections.\textsuperscript{25} We recognize that the choice of any particular cutoff point between “competitive” and “uncompetitive” seats is somewhat arbitrary, as competition is a continuum. An election in which the winner receives 59 percent of the vote is only two percentage points more competitive than an election in which the winner receives 61 percent of the vote. Moreover, a decline in the winner’s vote share from 63 to 61 percent is comparable to a decline from 61 to 59 percent, but only in the latter instance would a district move from a classification of “uncompetitive” to “competitive.” Table 1 presents the percentage of State House seats with two major-party candidates and the percentage of seats where the winner received less than 60 percent of the major-party vote in each state in 2000 and 2004. The states are ordered from highest to lowest share of marginal seats, as measured by the percentage of seats in which the winner received less than 60 percent of the major-party vote in 2004. The table also notes whether the state was a covered jurisdiction under Section 5 of the Voting Rights Act (VRA) in 2000. Non-covered jurisdictions may have to engage in racial redistricting to comply with Section 2 of the VRA. However, the federal supervision provided by Section 5 of the VRA historically made it easier to compel jurisdictions to draw new majority-minority districts or to protect existing ones.\textsuperscript{26} Table 1 also notes whether the State House map used in 2004 was a Democratic (D) or Republican (R) partisan plan.\textsuperscript{27}

\textsuperscript{23} \textit{Stephenson v. Bartlett} \textit{supra} note 18. \textsuperscript{24} For the few seats won by independents, the vote share of the winner is calculated as a percentage of the total vote. \textsuperscript{25} Jacobson \textit{supra} note 19 at 27–8. The share of districts won by less than 60 percent of the major-party vote in 2004 is highly correlated with the share of districts won by less than 55 percent ($r = 0.90$) or 52 percent ($r = 0.80$). \textsuperscript{26} The Supreme Court has gradually limited the ability of the Department of Justice (DOJ) to use its Section 5 powers to force jurisdictions to draw additional majority-minority districts. In \textit{Beer v. United States}, 425 U.S. 130 (1976), the Supreme Court declared that only retrogression, rather than a failure to create possible new majority-minority districts, constituted an abridgement of minority voting rights within the meaning of Section 5. In \textit{Reno v. Bossier Parish}, 520 U.S. 471 (1997) and 528 U.S. 320 (2000), the Court said that DOJ could not use its Section 5 power to object to a redistricting plan in a covered jurisdiction to enforce Section 2. In \textit{Georgia v. Ashcroft}, 539 U.S. 461 (2003), the Court ruled that non-majority-minority districts might be sufficient under certain circumstances to meet a jurisdiction’s burden to prevent retrogression under Section 5, even if the percentage of minorities declined within individual districts.
Table 1. Competitiveness of State House Elections in 37 States

| State | Two major-party candidates, 2000 (%) | Two major-party candidates, 2004 (%) | Winner under 60%, 2000 (%) | Winner under 60%, 2004 (%) | VRA covered (all or part) | Plan type, 2004 |
|----------------|----|----|----|----|------|---|
| Maine | 69 | 96 | 38 | 59 | | I |
| Minnesota | 89 | 99 | 34 | 49 | | O |
| Washington | 80 | 82 | 32 | 43 | | I |
| Montana | 75 | 75 | 40 | 42 | | D |
| Nevada | 69 | 81 | 38 | 40 | | D |
| Oregon | 77 | 77 | 42 | 40 | | D |
| Iowa | 64 | 62 | 37 | 39 | | O |
| Hawaii | 73 | 94 | 41 | 37 | | I |
| Colorado | 68 | 75 | 38 | 37 | | O |
| Alaska | 58 | 73 | 15 | 35 | All | O |
| Kentucky | 32 | 52 | 17 | 35 | | D |
| Michigan | 98 | 98 | 19 | 35 | Part | R |
| Wisconsin | 57 | — | 33 | — | | O |
| Idaho | 50 | 53 | 19 | 30 | | I |
| Oklahoma | 55 | 63 | 24 | 28 | | R |
| Rhode Island | 33 | 63 | 11 | 25 | | D |
| Missouri | 52 | 59 | 20 | 25 | | R |
| Ohio | 85 | 75 | 34 | 24 | | R |
| Utah | 81 | 60 | 36 | 23 | | R |
| Indiana | 58 | 57 | 23 | 22 | | D |
| Delaware | 54 | 51 | 17 | 22 | | R |
| Wyoming | 48 | 42 | 28 | 22 | | R |
| Tennessee | 36 | 49 | 14 | 21 | | I |
| Connecticut | 60 | 59 | 19 | 21 | | I |
| Kansas | 54 | 45 | 17 | 21 | | R |
| New Mexico | 59 | 39 | 29 | 20 | | D |
| Texas | 28 | 40 | 12 | 19 | All | R |
| North Carolina | 41 | — | 18 | — | Part | D |
| Arkansas | 28 | 27 | 17 | 18 | | D |
| Georgia | 32 | 39 | 14 | 17 | All | R |
| California | 98 | 93 | 18 | 16 | Part | D |
| Florida | 58 | 30 | 33 | 15 | Part | R |
| Pennsylvania | 53 | 50 | 14 | 14 | | R |
| New York | 72 | 65 | 11 | 13 | Part | D |
| Illinois | 48 | 48 | 13 | 11 | | D |
| South Carolina | 35 | 23 | 15 | 8 | All | R |
| Massachusetts | 29 | 51 | 7 | 8 | | D |
| Average: all states | 59 | 61 | 24 | 27 | | |
| Average: all states but GA, ME, and MT | 59 | 60 | 23 | 25 | | |

Notes: “D” or “R” denote maps drawn by Democrats or Republicans where they controlled the process, were in a cross-chamber log-roll in a divided legislature, or a court adopted their preferred map. “I” refers to an incumbent protection map. “O” refers to a process that did not result in an overtly political map.

Most seats are not very competitive. In the average state in either 2000 or 2004, around one-quarter of State House elections were won with less than 60 percent of the vote. Approximately 40 percent of seats were won without major-party opposition in both years. The level of competition in State House elections varied dramatically across states in both 2000 and 2004. Only 28 percent of seats in Arkansas and Texas in 2000 had two major-party candidates, as did only 23 percent of South Carolina seats in 2004. In contrast, 98 percent of Michigan and California districts had two major-party candidates in 2000, as did 99 percent of Minnesota districts in 2004. In the average state, roughly 40 percent of districts lacked candidates from both major parties in 2000 and 2004.

--- \textsuperscript{27} Determining if a map is a partisan plan can be difficult. We relied primarily on McDonald \textit{supra} note 14 at 371–95, and information posted at www.fairvote.org, to determine if the State House plan utilized in 2004 was a partisan plan. The authors would appreciate additional information if a reader believes that a state has been misclassified.
The variation in the share of marginal seats, defined as the winner receiving less than 60 percent of the major-party vote, was also considerable. Massachusetts saw the fewest marginal contests in both 2000 and 2004, with only 7 and 8 percent, respectively, having marginal status. South Carolina had a similarly low share of seats where the winner received less than 60 percent in 2004. Around one-quarter of seats were marginal in the average state in either 2000 or 2004.\(^{28}\) Several states experienced great changes in competitiveness between 2000 and 2004. The share of competitive districts fell precipitously in Florida, where the percentage of seats with two major-party candidates tumbled by 28 percentage points and the percentage of marginal seats dropped 18 percentage points. On the other hand, the share of seats with two major-party candidates rose most strongly in Rhode Island, which experienced an impressive 30-point gain in the share of seats where both Democrats and Republicans fielded candidates.\(^{29}\) Maine had the greatest increase in the share of marginal seats, with the percentage of representatives winning by less than 20 points rising by 21 percentage points between 2000 and 2004. We might expect a rise in the number of competitive elections in Maine because the state had a new map in 2004. Among those states without a new map, Michigan had the greatest increase in the share of marginal seats, rising 16 percentage points from 19 to 35 percent. While the share of competitive seats changed greatly in many states, the average level of competition was little altered between 2000 and 2004. The statistics at the bottom of Table 1 reveal that the average share of seats with two major-party candidates crept up by two points across all of the states for which data were available for both years. The average percentage of marginal seats rose by three points. Excluding the three states (Georgia, Maine, and Montana) that adopted new redistricting plans between 2002 and 2004 reduces these small changes even further.

**EXPLAINING VARIATION IN ELECTORAL COMPETITION**

We investigate factors related to the presence of two major-party candidates and the competitiveness of the elections through a multivariate regression analysis. The scope of our analysis extends to the 37 states that conducted elections for the lower state legislative chamber in single-member districts in 2004. The unit of our analysis is the state, not the individual district, as we are interested in factors that affect statewide rates of competition, such as the presence of a politically motivated redistricting, the effects of the Voting Rights Act, and the average population size of the districts. Our analysis is constrained by the small number of observations, and we have made some compromises with regard to the variables included in our regressions so that we might increase the degrees of freedom in our analysis. The two dependent variables in our analysis are the percentage of 2004 state legislative elections with two major-party candidates and the percentage of elections won by less than twenty percentage points, as presented in Table 1. We tried alternative competitiveness measures, such as a ten-point and a four-point spread between the top two candidates. These alternative models demonstrated the same patterns we describe here, albeit with slightly weaker statistical significance.

--- \(^{28}\) Excluding the states that redistricted between the 2002 and 2004 elections does not greatly alter the mean number of either marginal seats or seats with two major-party candidates.
This finding is expected, as there is less variation among states using these narrower ranges of competitiveness, and thus less to explain from a statistical standpoint. We are primarily interested in the effect of redistricting on state legislative competition. There are two important constraints related to redistricting that may affect the levels of competition within a state: the political motivations of the map and the drawing of special majority-minority districts to satisfy the Voting Rights Act. We argue that there are two important types of gerrymanders that may reduce electoral competition: partisan and incumbent protection gerrymanders. These state legislative maps may be contrasted with those that are specifically drawn by courts or are remanded by a court back to a redistricting authority to fix specific state constitutional deficiencies, such as those in Alaska, Colorado, Georgia, Minnesota, and Wisconsin. Plans in these categories are theoretically neutral and benefit neither political parties nor incumbents. However, we should be careful in categorizing all court-approved plans as neutral.\textsuperscript{30} Courts may adopt partisan or incumbent protection maps offered through the regular political process, as happened in Missouri, North Carolina (in 2002), New Hampshire, New Mexico, and South Carolina (in 2002). State governments in North Carolina and South Carolina replaced court-ordered incumbent protection maps with partisan maps in 2004. Redistricting institutions in Arizona and Iowa produced relatively neutral maps without overt political benefits for the parties or incumbents.\textsuperscript{31} In operationalizing our measure, we combine partisan and incumbent protection gerrymanders into one category, \textit{Political Map}.\textsuperscript{32} Another important factor affecting competition in state legislative elections is the drawing of non-competitive minority districts to satisfy Section 5 of the \textit{Voting Rights Act}. Nine states are fully covered, though of these only Alaska, Georgia, South Carolina, and Texas are within our data set. To these states we add two partially covered states, Florida and North Carolina, which have sizable minority populations within the states’ covered jurisdictions. We do not categorize other partially covered states, namely California, Michigan, and New York, because only small, non-populous areas are covered or the populous areas that are covered are located in uncompetitive areas, such as the three covered boroughs of New York City.\textsuperscript{33} We include two control variables in the analysis: the \textit{Average District Size} of a state legislative district and a dummy variable indicating whether a state used a \textit{New Map} in 2004. We expect relatively populous districts to be associated with a larger supply of candidates, and thus with a higher percentage of contested races. Populous districts may, however, retard competitive elections, as challengers must raise a larger amount of money to contact more constituents.

--- \(^{29}\) Heightened competition might be explained by the shrinking of the Rhode Island legislature, which made seats scarcer and placed some incumbents in the same district. However, the 2004 elections were the second set of elections held for the smaller legislature.
We construct \textit{Average District Size} by dividing the 2004 voting-eligible population of the state (in units of thousands of people) by the number of state legislative districts.\textsuperscript{34} From our previous discussion, we expect a newly redistricted map for 2004 in four states—Georgia, North Carolina, Maine, and Montana—to be related to more competitive elections than in other states that did not redistrict after the 2002 election.\textsuperscript{35}

\textsuperscript{30} In our regression analysis, an indicator variable identifying court-ordered plans was not close to statistical significance. \textsuperscript{31} McDonald \textit{supra} note 14 at 371–95. \textsuperscript{32} An unreported analysis identifying partisan and incumbent protection maps separately indicated no statistical difference between partisan and incumbency protection gerrymanders, consistent with Owen and Grofman’s (1988) theoretical assertion that both types of gerrymander produce non-competitive elections. \textsuperscript{33} However, for the sake of accuracy, these states are listed as partly covered in Table 1. \textsuperscript{34} For a description of state-level voting-eligible population, see Michael P. McDonald, “The Turnout Rate Among Eligible Voters for U.S. States, 1980–2000,” \textit{State Politics and Policy Quarterly} 2: 2 (2002): 199–212. The authors constructed the 2004 voting-eligible numbers. \textsuperscript{35} An alternative method of controlling for states with a new redistricting map in 2004 is to drop them from the analysis. We tried this model specification and found substantially the same results as reported in Table 2, albeit with slightly fewer statistically significant coefficients. Part of the small decrease in observed statistical significance is related to dropping four observations from an already small sample of thirty-seven. We decided to gain three degrees of freedom in the model by including the indicator variable identifying states with a new state legislative map in 2004.

The regression analysis results are presented in Table 2. Results are presented for two models: the percentage of 2004 state legislative races with two major-party candidates and the percentage of state legislative elections won by less than twenty percentage points. We present the coefficients, indicate whether they are statistically significant at the $p < 0.10$ or $p < 0.05$ level, and report the associated standard errors (SE). States with a political map are predicted to have approximately 13 percent fewer races with two major-party candidates than other states. This result can only be regarded as suggestive rather than conclusive because the coefficient on \textit{Political Map} achieved statistical significance only at the $p < 0.10$ level. However, the coefficients for the variables controlling for states mostly covered by the \textit{Voting Rights Act} and for \textit{Average District Size} are both statistically significant at the $p < 0.05$ level, so we can be somewhat more certain about those conclusions. The population of a district has a relatively small impact on the percentage of seats with two major-party candidates: an increase of 10,000 in the population of a district raises the predicted share of seats with two major-party candidates by 1.3 percent.
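To make the specification concrete, the two models summarized in Table 2 could be estimated along the following lines. This is only an illustrative sketch under our own assumptions: the file `states.csv` and the column names are hypothetical placeholders, not the authors’ actual dataset.

```python
# Sketch of the two OLS models described in the text (hypothetical data file).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("states.csv")  # one row per state; 37 observations

# Average District Size: 2004 voting-eligible population (in thousands of
# people) divided by the number of State House districts.
df["avg_district_size"] = (df["vep_2004"] / 1000.0) / df["n_districts"]

X = sm.add_constant(df[["political_map", "vra_covered",
                        "avg_district_size", "new_map"]])

# Model 1: percentage of races with two major-party candidates.
m1 = sm.OLS(df["pct_two_candidates"], X).fit()
# Model 2: percentage of races won by less than twenty points.
m2 = sm.OLS(df["pct_marginal"], X).fit()

print(m1.summary())
print(m2.summary())
```

With only 37 observations, as the text notes, every degree of freedom matters, which is why the set of indicator variables is kept to a minimum.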
In contrast, states mostly covered by the Voting Rights Act are expected to have slightly more than 31 percent fewer races with two major-party candidates than other states. The \textit{New Map} coefficient is just outside the $p < 0.10$ level and in the expected direction, so the relationship estimated here is suggestive that a new map indeed induces challengers to emerge, relative to other states that did not redistrict after 2002. Both key variables, the political map indicator and coverage by the Voting Rights Act, have a strong impact on the share of State House seats won by less than twenty points. As the model of marginal seats indicates, political maps reduce the share of seats that are marginal by over 9 percent. That corresponds to a nine-seat reduction in the number of seats won by under 60 percent of the major-party vote in a 100-seat legislative body. States mostly covered by the Voting Rights Act have almost 14 percent fewer marginal seats than other states—the equivalent of 14 fewer marginal seats in the same 100-seat chamber. Both findings are statistically significant at the $p < 0.05$ level. A new redistricting map for 2004 is also associated with more competitive races, though the result falls just outside the $p < 0.05$ significance level. Unlike in the two-candidate model, the average population size of a district is neither substantively nor statistically significant, though its coefficient is in the predicted direction, opposite that of the two-candidate model. One should perhaps exhibit caution in attributing the decline in competition to the Voting Rights Act. The South has historically had lower levels of general election competition than other states.\textsuperscript{36} All of the states coded as covered by the Voting Rights Act are located in the South. Except for Arkansas and Tennessee, all of the southern states included in the data set are categorized as covered by the Voting Rights Act. Of course, the low levels of southern competition may be attributed to the historic overwhelming dominance of the Democrats and the great weakness of the Republicans, conditions which certainly no longer describe southern politics. On the other hand, racial redistricting spurred by the Voting Rights Act systematically created safe Democratic districts and removed Democratic voters from surrounding districts.\textsuperscript{37} Models not presented here tested various measures of the distribution of voters within a state. If Democrats and Republicans are highly segregated within a state, most districts may be safe for one party without any particularly strong effort to manipulate the lines due to partisanship or racial redistricting. Conversely, if Democrats and Republicans are evenly dispersed throughout a state, many districts may be competitive. Even if a party wishes to gerrymander the state, it may find its task more difficult if members of the two parties are sufficiently intermixed. Two separate measures were used to test the impact of voter distribution. One crude measure was simply the margin of victory for the winning 2000 presidential candidate within a state. Large margins of victory may indicate that politics is heavily dominated by one party with only weak competition.

--- \textsuperscript{36} The small number of observations raises the possibility that the results are confounded by outlier observations. We ran 1,000 bootstrap simulations and found substantially similar results to those presented in Table 2.
Moreover, candidates who win by a healthy margin statewide often easily carry a disproportionate number of single-member districts. This measure ranged from 0.01 percent (Bush’s narrow victory in Florida) to 40.49 percent (Bush’s easy win in Utah). A second, more sophisticated measure was based on the population-weighted average of the 2000 margin of victory within a state’s counties. If each county were evenly divided between Bush and Gore, then the variable would take a value of zero. However, the measure takes larger values in states where Bush or Gore won many counties, especially populous ones, by sizeable margins. Unlike the first, cruder measure, the county breakdown gives a sense of the distribution of voters within a state—not just the overall level of support for the winning party. This measure ranged from 5.69 percent in Iowa to 18.58 percent in Rhode Island, with the average state taking a value of 10.97 percent (standard deviation: 3.72). Perhaps surprisingly, neither measure of the distribution of voters came close to achieving statistical significance in any model tested. This result may reflect that neither measure captures the distribution of voters very well. Voters may also be sufficiently unevenly distributed that creative mapmakers, especially with the aid of sophisticated computer mapping programs, do not find gerrymandering for parties or incumbents too difficult even in states where voters are comparatively evenly distributed. Alternatively, in less competitive states, state parties may be sufficiently able to distance themselves from the national party platform and offer policies and candidates that appeal to voters within the state, consistent with findings by Erikson, Wright, and McIver.\textsuperscript{38} We further tested models that investigated the relationship between both court-drawn plans and the presence of term limits and our two measures of competitiveness. We again found no statistical relationship and, due to the small number of states to observe, we chose not to include these statistically insignificant variables in the model we present. We were somewhat surprised that term limits were not statistically related to competition, though we believe that further investigation into the percentage of open seats—a variable we were unable to construct with our data—may uncover such a relationship.

**CONCLUSION**

This article provides a first step toward showing that both partisan redistricting plans and racial redistricting can reduce the overall share of competitive State House seats. The findings are based on the results from the 2004 State House elections for the 37 states included in the study. The impact of any individual redistricting plan can vary substantially from the overall tendency of either partisan plans or racial redistricting to undercut competition. Nevertheless, the multivariate analysis indicates that partisan plans reduce the proportion of marginal seats, defined as seats won by less than 20 points, in State Houses.

--- \textsuperscript{37} Lublin \textit{supra} note 16 at 75–81, 99–115; Charles S. Bullock, III, “The GOP Comes of Age in the South,” \textit{Election Law Journal} 4: 3 (2005): 207–10. \textsuperscript{38} Gerald C. Wright, Robert S. Erikson, and John P. McIver, “Popular Control of Public Policy in the American States,” \textit{American Journal of Political Science} 31: 4 (1987): 980–1001.
Additionally, it suggests that states with covered jurisdictions for purposes of Section 5 of the Voting Rights Act see fewer marginal districts and fewer districts with two major-party candidates. We also see more contested elections as the average district population increases. New redistricting maps are also related to higher levels of competition. The potential impact of racial redistricting may decline in the future due to the Supreme Court’s 2003 decision in *Georgia v. Ashcroft*. That decision indicated that states may reduce the share of minorities in various districts as long as overall minority influence and opportunity are enhanced. Even the dissenters from the majority opinion agreed that the percentage of minorities required in any district should be determined by the share of minorities needed to elect a minority-preferred candidate, rather than arbitrarily set at fifty percent or higher.\(^{39}\) Reducing the share of minorities in districts designed to allow minorities to elect candidates of choice may also decrease the number of packed Democratic districts and increase competition. However, the widespread use of partisan redistricting plans for State Houses seems likely to have a negative influence on competition over the long term. In *Davis v. Bandemer*,\(^{40}\) the Supreme Court ruled in 1986 that partisan gerrymandering is a justiciable issue under the Equal Protection Clause.\(^{41}\) However, federal courts have yet to overturn a redistricting plan on partisan grounds. Recently, in *Vieth v. Jubelirer*,\(^{42}\) the Supreme Court ruled that no satisfactory legal standard or scholarly measure has emerged to permit a court to determine if it should overturn a plan. Indeed, Justice Scalia wrote for a plurality that it is impossible to come up with such a satisfactory standard; consequently, he believes that partisan gerrymandering should not be justiciable and the Court should overrule *Bandemer*.\(^{43}\) At least for now, partisan redistricting plans seem likely to continue to flourish in the United States. Is this really a problem for electoral competition? Gerrymandering has its origins in the early days of the Republic. Law Professor Daniel Lowenstein agrees with Justice O’Connor that partisan gerrymandering is a “self-limiting enterprise.”\(^{44}\) More specifically, it may be difficult for political minorities to construct redistricting plans that protect their legislative majority over the long term without risking the weakening of their safe seats. The failure of the Georgia Democrats, who had won legislative majorities with a minority of votes in several elections during the 1990s, to hold on to their majority in 2002 despite aggressive efforts to protect it through redistricting seemingly confirms this assertion. Popular majorities can additionally protect their interests by electing governors. In states with the initiative process, majorities can even take control of redistricting away from legislative majorities, as they have done in Arizona. In a nutshell, Lowenstein and O’Connor both doubt that gerrymandering will ever cause the United States to arrive at the point where the government has essentially dissolved the people and elected another, to paraphrase Bertolt Brecht.\textsuperscript{45}

--- \(^{39}\) See Bernard Grofman, Lisa Handley and David Lublin, “Drawing Effective Minority Districts: A Conceptual Framework and Some Empirical Evidence,” *North Carolina Law Review* 79: 5 (June 2001) for a discussion of why selected minority-preferred candidates may be able to win election from some districts where they do not constitute a majority.
Nathaniel Persily further argues that partisan gerrymandering has not limited electoral competition because, even if the share of districts won by close margins declines, legislatures may still be closely divided, with competition for control remaining quite fierce.\textsuperscript{46} Grofman and Jacobson point out that the size of U.S. House majorities has been quite small by historical standards in recent years.\textsuperscript{47} Indeed, the tight nature of the 2000 and 2004 presidential elections, along with heightened turnout, further suggests that national politics remains quite competitive. Of course, the closeness of political competition in the United States makes the ability to manipulate political boundaries all the more important. Turnout statistics further indicate that voters are more likely to vote when they are participating in a close contest. Turnout in 2004 moved upward primarily in the battleground states and states with other tightly contested high-profile races.\textsuperscript{48} Voters may unsurprisingly feel left out of an election that turns on close contests in only a few seats—a familiar complaint against the Electoral College by residents of safe states.\textsuperscript{49} Moreover, many state legislatures are far from closely divided. Like Lowenstein, Richard Pildes is uncomfortable with the Supreme Court’s reliance on the Equal Protection Clause in the partisan gerrymandering cases and in other redistricting and election law cases more broadly.\textsuperscript{50} However, Pildes does not believe that the judiciary should revert to regarding partisan gerrymandering as a political question. Along with his coauthor Samuel Issacharoff, Pildes believes that the Court should instead ground its review of legislative districting, and its rulings involving election law more broadly, in the goal of protecting the democratic process and marketplace of ideas against political elites who wish to entrench themselves through the manipulation of district lines and other electoral ground rules.\textsuperscript{51} At the core of the Issacharoff and Pildes argument is a deep concern over the manipulation of political institutions, such as electoral district boundaries, by current officials in order to entrench themselves in power, and a belief that the judiciary should serve as a check on these anti-democratic efforts. This paper focuses only on the interval from 2000 to 2004, so it can only hint at long-term trends in state legislative electoral competition. The results nevertheless suggest that minimizing political considerations during redistricting can result in greater electoral competition.

--- \(^{40}\) 478 U.S. 109 (1986). \(^{41}\) Bernard Grofman, “Toward a Coherent Theory of Gerrymandering: *Bandemer* and *Thornburg*” in Bernard Grofman, ed., *Political Gerrymandering and the Courts* (New York: Agathon Press, 1990): 29–63; Daniel Hays Lowenstein, “Bandemer’s Gap: Gerrymandering and Equal Protection” in Bernard Grofman, ed., *Political Gerrymandering and the Courts* (New York: Agathon Press, 1990): 64–116. \(^{42}\) *Vieth v. Jubelirer*, 541 U.S. 267 (2004). \(^{43}\) Justice Scalia’s opinion was joined by Chief Justice Rehnquist and Justices O’Connor and Thomas. \(^{44}\) Lowenstein *supra* note 41 at 88–9; *Davis v. Bandemer*, 478 U.S. 109, 152 (1986). O’Connor cites the work of political scientist Bruce Cain to support her claim; see Cain *supra* note 4 at 151–9.
In Minnesota and Wisconsin, courts drew maps when the regular redistricting process failed. These maps were largely regarded as fair by leaders of both political parties and were not seen as protecting incumbents, as evidenced by the resulting electoral competition.\textsuperscript{52} In Alaska, Colorado, and Idaho, state supreme courts remanded redistricting back to the states’ redistricting commissions to fix legal violations. Court involvement in these situations produced maps with higher levels of electoral competition, even in the largely uncompetitive state of Idaho. Furthermore, the courts are not the only institutional pathway to minimizing political influence. In Iowa, the Legislative Service Bureau, the legislature’s nonpartisan support staff, drew districts that maintained relatively high levels of electoral competition. Arizona—a state not analyzed here due to its two-member districts—adopted a commission system in 2000 by initiative.

--- \textsuperscript{45} Bertolt Brecht’s poem, “The Solution,” was a critique of the East German government’s repression of the 1953 uprising against it. It reads: “After the uprising of the 17th June / The Secretary of the Writers Union / Had leaflets distributed in the Stalinallee / Stating that the people / Had forfeited the confidence of the government / And could win it back only / By redoubled efforts. / Would it not be easier / In that case for the government / To dissolve the people / And elect another?” See John Willett and Ralph Manheim, eds., *Poems by Bertolt Brecht* (Methuen, 1976). \textsuperscript{46} Nathaniel Persily, “In Defense of Foxes Guarding Henhouses: The Case for Judicial Acquiescence to Incumbent Protecting Gerrymandering,” *Harvard Law Review* 116 (2002): 649, 656. \textsuperscript{47} Grofman and Jacobson *supra* note 2 at 4–5. \textsuperscript{48} Gary W. Cox and Michael C. Munger, “Closeness, Expenditures, and Turnout in the 1982 U.S. House Elections,” *American Political Science Review* 83: 1 (1989): 217–31; Michael P. McDonald, “Up, Up, and Away! Voter Participation in the 2004 Presidential Election,” *The Forum* 2: 4 (December 2004), available at www.bepress.com/forum/vol2/iss4/art4/. \textsuperscript{49} Lawrence Longley and Neal R. Peirce, *The Electoral College Primer 2000* (Yale University Press, 1999). \textsuperscript{50} Richard H. Pildes, “The Supreme Court 2003 Term: Forward: The Constitutionalization of Democratic Politics,” *Harvard Law Review* 118: 1 (November 2004): 28–154. \textsuperscript{51} *Id.* at 54–5; Samuel Issacharoff and Richard H. Pildes, “Politics As Markets: Partisan Lockups of the Democratic Process,” *Stanford Law Review* 50: 3 (February 1998): 643–717; Samuel Issacharoff, “Private Parties with Public Purposes: Political Parties, Associational Freedoms, and Partisan Competition,” *Columbia Law Review* 101: 2 (March 2001): 274–313; Samuel Issacharoff, “Gerrymandering and Political Cartels,” *Harvard Law Review* 116: 2 (December 2002): 601–48. \textsuperscript{52} Dane Smith, “A State Rejiggered: New Maps for Congress, Legislature May Change Political Fortunes,” *Minneapolis Star Tribune* (March 20, 2002): 1A; JR Ross, “Federal Court Redraws Wisconsin Legislative Districts,” *The Associated Press State and Local Wire* (May 23, 2002).
However, analyses indicate that the plan adopted by the Arizona Independent Redistricting Commission actually reduced the number of competitive districts, due to the greater priority given to factors other than promoting competition in drafting the new plan.\textsuperscript{53} Voters in California and Ohio rejected commission proposals in 2005. Florida may have the opportunity to decide the same question in 2006. Voters in many states, particularly those with no initiative process, may find it difficult to bring about the adoption of non-judicial remedies to partisan gerrymandering. Efforts by the current crop of elected officials to entrench themselves in power through redistricting are often little known or understood. The key redistricting decisions often take place outside the public view. New York even exempts redistricting data compiled by its reapportionment task force from the state equivalent of the Freedom of Information Act. Judicial action may be the only remedy to partisan gerrymandering in some states. But the courts are not a panacea for removing political influence from redistricting. In Missouri, only four of six members of a panel of judges adopted a map in what many perceived to be a partisan vote.\textsuperscript{54} In New Mexico, a state court essentially adopted a state House map that had been passed by the Democratic legislature but vetoed by the Republican governor.\textsuperscript{55} The intrusion of politics into these court decisions resulted in maps with lower levels of electoral competition. Redistricting in South Carolina and Georgia further demonstrates the limits of what courts—or anyone—can do to encourage electoral competition in southern Republican states that must draw uncompetitive Democratic majority-minority districts to satisfy the Voting Rights Act. In South Carolina, a court produced new 2002 maps that increased the number of majority-minority districts by four, but also brightened prospects for Republicans in a chamber they already dominated.\textsuperscript{56} Court-drawn plans in Georgia and South Carolina produced little electoral competition; nor did other plans enacted by state legislatures in southern states covered by Section 5 of the Voting Rights Act. The court-drawn plan in Georgia was actually much less competitive than the partisan plan drawn by Democrats that was used in 2002. Georgia’s recent experience serves as a valuable reminder that partisan gerrymanders sometimes fail and that court-drawn plans do not always result in greater competition. However, these are exceptions to an overall pattern indicating that partisan gerrymandering more often has a dampening effect on competition. Judicial action can help alleviate, if not totally solve, the problem of partisan efforts to strangle the democratic process.

Address reprint requests to: David Lublin, Department of Government, School of Public Affairs, American University, 4400 Massachusetts Ave., N.W., Washington, D.C. 20016. E-mail: firstname.lastname@example.org

--- \textsuperscript{53} Michael P. McDonald, “Drawing the Line on Competition,” *PS* 39: 1 (January 2006): 91–94. \textsuperscript{54} Bill Bell, “New House Maps Boost GOP Chances, Boundaries Could Tip Balance of Power in Legislature,” *St. Louis Post-Dispatch* (December 14, 2001): C1. \textsuperscript{55} Editorial, “Judge Redistricts House to Democrats’ Plan,” *Albuquerque Journal* (January 28, 2002): A8. \textsuperscript{56} Warren Wise and Schuyler Kropf, “Judges Unveil New Districts for S.C.,” *The Charleston Post and Courier* (March 21, 2002): 1A.
A Calculus for Orchestration of Web Services

Rosario Pugliese\textsuperscript{a}, Francesco Tiezzi\textsuperscript{b,*}

\textsuperscript{a}Università degli Studi di Firenze, Viale Morgagni, 65 - 50134 Firenze, Italy
\textsuperscript{b}IMT Institute for Advanced Studies Lucca, Piazza S. Ponziano, 6 - 55100 Lucca, Italy

Abstract

Service-oriented computing, an emerging paradigm for distributed computing based on the use of services, is calling for the development of tools and techniques to build safe and trustworthy systems, and to analyse their behaviour. Therefore, many researchers have proposed to use process calculi, a cornerstone of current foundational research on specification and analysis of concurrent, reactive, and distributed systems. In this paper, we follow this approach and introduce COWS, a process calculus expressly designed for specifying and combining service-oriented applications, while modelling their dynamic behaviour. We show that COWS can model all the phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, orchestration, deployment, reconfiguration and execution. We illustrate the specification style that COWS supports by means of a large case study from the automotive domain and a number of more specific examples drawn from it.

Keywords: Service-oriented computing, Formal methods, Process calculi

1. Introduction

Recently, the increasing success of e-business, e-learning, e-government, and other similar emerging models has led the World Wide Web, initially thought of as a system for human use, to evolve towards an architecture for \textit{Service-Oriented Computing} (SOC) supporting automated use. This emerging paradigm finds its origin in object-oriented and component-based software development, and aims at enabling developers to build networks of interoperable and collaborative applications, regardless of the platform where the applications run and of the programming language used to develop them, through the use of independent computational units, called \textit{services}. Services are loosely coupled reusable components that are built with little or no knowledge about clients and other services involved in their operating environment. SOC systems thus deliver application functionalities as services to either end-user applications or other services. There are by now some successful and well-developed instantiations of the general SOC paradigm, such as Web Services and Grid Computing, that exploit the pervasiveness of the Internet and related standards. However, current software engineering technologies for SOC remain at the descriptive level and lack rigorous formal foundations. In the design of SOC systems we are still experiencing a gap between practice (programming) and theory (formal methods and analysis techniques).

\textsuperscript{*}This work has been partially sponsored by the EU project ASCENS (257414) and by MIUR (PRIN 2009 DISCO).
\textsuperscript{*}Corresponding author
\textit{Email addresses}: firstname.lastname@example.org (Rosario Pugliese), email@example.com (Francesco Tiezzi)
\textit{URL}: http://www.dsi.unifi.it/~pugliese/ (Rosario Pugliese), http://www.imtlucca.it/francesco.tiezzi (Francesco Tiezzi)

Preprint submitted to Journal of Applied Logic, October 11, 2011
The challenges come from the necessity of dealing at once with such issues as asynchronous interactions, concurrent activities, workflow coordination, business transactions, failures, resource usage, and security, in a setting where demands and guarantees can be very different for the many different components. Many researchers have hence put forward the idea of using process calculi, a cornerstone of current foundational research on the specification and analysis of concurrent, reactive and distributed systems through mathematical — mainly algebraic and logical — tools. Due to their algebraic nature, process calculi provide intuitive and concise notations, and convey in a distilled form the compositional programming style of SOC. Services are built in a compositional way by using the operators provided by the calculus and are syntactically finite, even when the corresponding semantic model is not. Process calculi enjoy a rich repertoire of elegant meta-theories, proof techniques and analytical tools. SOC could benefit from this large body of knowledge and from the experience gained in the specification and analysis of concurrent, reactive and distributed systems during the last few decades. In fact, it has already been argued that type systems, modal and temporal logics, and observational equivalences provide adequate tools to address topics relevant to SOC (see e.g. [1, 2]). This ‘proof technology’ can eventually pave the way for the development of automatic property validation tools. Therefore, process calculi might play a central role in laying rigorous methodological foundations for the specification and validation of SOC applications. Many process calculi for SOC have hence been proposed, either by enriching well-established process calculi with specific constructs (e.g. the variants of $\pi$-calculus with transactions [3, 4, 5] and of CSP with compensation [6]) or by designing completely new formalisms (e.g. [7, 8, 9, 10, 11, 12, 13, 14]). The work presented in this paper falls within the above line of research, since it introduces a process calculus, called COWS (Calculus for Orchestration of Web Services), that aims at capturing the basic aspects of SOC systems and supporting their analysis. In designing COWS, the main principles underlying WS-BPEL [15], the OASIS standard for orchestration of web services, have been considered as first-class aspects. This permits a direct representation of the mechanisms underlying the SOC paradigm and is thus an important step towards their investigation and comprehension. In fact, COWS supports service instances with shared states, allows a process to play more than one partner role, permits programming stateful sessions by correlating different service interactions, and enables management of long-running transactions. However, COWS is intended to be a foundational model not specifically tied to current web services technology. Thus, some WS-BPEL constructs, such as flow graphs and fault and compensation handlers, do not have a precise counterpart in COWS; rather, they are expressed in terms of more primitive operators. Of course, COWS has also taken advantage of previous work on process calculi. Indeed, it combines in an original way constructs and features borrowed from well-known process calculi, e.g.
non-binding input activities, asynchronous communication, polyadic synchronization, pattern matching, protection, delimited receiving and killing activities, while nevertheless differing from all of them. We illustrate the syntax, operational semantics and pragmatics of COWS by means of a large case study from the automotive domain and a number of more specific examples drawn from it. We also present a dialect of COWS that smoothly incorporates constraints and operations on them, thus making it possible to model Quality of Service requirement specifications and Service Level Agreement achievements. This dialect is obtained by specialising a few syntactic objects (e.g., the set of expressions that can occur within terms of the calculus) and semantic mechanisms of the definition of COWS. By means of our case study, we show that the formalism thus obtained can model all the phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, orchestration, deployment, reconfiguration and execution. This, on the one hand, provides evidence of the quality of the design of COWS and, on the other hand, may enable the application of a wide range of techniques for the analysis of services (see, e.g., [9, 16, 17, 18, 19, 20, 21]).

**Summary of the rest of the paper.** In Section 2, we provide an overview of SOC and an informal presentation of the case study that will be used throughout the paper for illustration purposes. In Section 3, to gradually introduce COWS's technicalities and distinctive features, we present its syntax and operational semantics in four steps; for each of the four calculi we show many simple clarifying examples. In Section 4, we present the formal specification of the case study, informally described in Section 2, in the calculus corresponding to the untimed fragment of COWS and provide a glimpse of the properties that can be verified over this specification. Then, in Section 5, we introduce the COWS dialect that permits modelling dynamic service publication, discovery and negotiation; we further elaborate the case study to illustrate both these additional aspects and those related to time. In Section 6, we review some strictly related work. Finally, in Section 7, we conclude with some final remarks and touch upon directions for future work. This work is an extended and revisited version of our former developments introduced in [8, 22, 23]. The novel contribution is a comprehensive, uniform, more detailed and neater presentation of the process calculus COWS and of how it can be effectively used to model the basic aspects of SOC systems. More specifically, Sections 3.1, 3.2 and 3.3 are a revised version of [8], although here we adopt a more detailed step-by-step presentation in order to gradually introduce the features of COWS and discuss, for each of them, the underlying motivations. Moreover, the newer version uses many notations, conventions, definitions and examples that make the presentation of the operational semantics of the calculus simpler and clearer (in the preliminary version, e.g., the definitions of the predicates for checking the presence of receive conflicts and enabled kill activities resorted to the notion of ‘active context’).
Section 3.4 is drawn from [22], while the dialect of COWS presented in Section 5.1 comes from [23]; they have been properly integrated into this uniform presentation. All of COWS's features are illustrated by means of a large case study from the automotive domain and a number of more specific examples drawn from it. To sum up, this paper aims at providing the interested reader with a novel presentation of the calculus, where both design motivations and technical details about primitives and mechanisms are taken into account. From a more general perspective, the paper illustrates how SOC systems can be modelled by using an approach based on process calculi.

**2. Background notions**

In this introductory section, we set the scene of the whole paper by providing the background notions from Service-Oriented Computing that we aim at modelling and by informally presenting a case study used throughout the paper for describing how such notions are rendered in COWS.

**2.1. Service-Oriented Computing**

Service-Oriented Computing (SOC) is emerging as an evolutionary paradigm for distributed and e-business computing that finds its origin in object-oriented and component-based software development. Early examples of technologies that are at least partly service-oriented are CORBA, DCOM, J2EE and .NET. A more recent successful instantiation of the SOC paradigm is *web services*. These are sets of operations (i.e. functionalities) that can be published, located and invoked through the Web via XML messages complying with given standard formats. To support the web service approach, several new languages and technologies have been designed, and many international companies, like IBM, Microsoft and Oracle, have invested considerable effort. There is a common way to view the web service architecture. It focuses on three major roles:

- **Service provider**: The software entity that implements a service specification and makes it available on the Internet. Providers publish machine-readable service descriptions on registries to enable automated discovery and invocation.

- **Service requestor** (or **client**): The software entity that invokes a service provider. A service requestor can be an end-user application or another service.

- **Service broker**: A specific kind of service provider that allows automated publication and discovery of services by relying on a registry.

Figure 1 shows the three service roles and how they interact with each other. This architecture, and the context of service use, imposes a series of constraints. Here are some key characteristics for effective use of services (see, e.g., [24]):

- **Coarse-grain**: Operations on services are frequently implemented to encompass more functionalities and operate on larger data sets, compared to those of fine-grained components as well as object-oriented interfaces.

- **Interface-based design**: Services implement separately defined interfaces. The set of interfaces implemented by a service is called its *service description*. In addition to the functions that the service performs, service descriptions should also include non-functional properties (e.g. response time, availability, reliability, security, performance) that jointly represent the *quality of the service* (QoS). In this case, they are also called *service contracts*.

- **Discoverability**: Services need to be found at both design time and run time by service requestors.
Moreover, since services are often developed and run by different organizations, a key issue of the discovery process is to define a flexible *negotiation* mechanism that allows two or more parties to reach a joint agreement about cost and quality of a service, prior to service execution. The outcome of the negotiation phase is a *Service Level Agreement* (SLA), i.e. a contract among the involved parties that sets out both the type of the service to be provided and bounds on various of its performance metrics.

- **Loose coupling**: Services are connected to other services and clients using standard, dependency-reducing, decoupled message-based methods, such as XML document exchanges.

- **Asynchrony**: In general, services use an asynchronous message passing approach, but this is not necessarily required.

Some of these criteria, such as interface-based design and discoverability, are also used in component-based development; however, it is the sum of these attributes that differentiates a service-based application from a component-based one. It is beneficial, for example, to make web services asynchronous to reduce the time a requestor spends waiting for responses. In fact, by making a service call asynchronous, with a separate return message, the requestor is able to continue execution while the provider has a chance to respond. This is not to say that synchronous service behaviour is wrong, just that experience has demonstrated that asynchronous service behaviour is desirable, especially where communication costs are high or network latency is unpredictable, and provides the developer with a simpler scalability model [24].

To support the web service approach, many new languages, most of which are based on XML, have been designed. The technologies that form the foundations of web services are SOAP, WSDL, and UDDI. Simple Object Access Protocol (SOAP, [25]) is responsible for encoding messages in a common XML format so that they can be understood at either end by all communicating services. Currently, SOAP is the principal XML-based standard for exchanging information between applications within a distributed environment. Web Service Description Language (WSDL, [26]) is responsible for describing the public interface of a specific web service. Through a WSDL description, which is an XML document, a client application can determine the location of the remote web service, the functions it implements, as well as how to access and use each function. After parsing a WSDL description, a client application can appropriately format a SOAP request and dispatch it to the location of the web service. In this setting, Universal Description, Discovery, and Integration (UDDI, [27]) is responsible for centralizing services into a common registry and providing easy *publish* and *find* functionalities. The relationships between SOAP, WSDL, and UDDI are depicted in Figure 1.
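To fix intuitions about the client side of this stack, the following Python sketch shows what sending a SOAP request over HTTP might look like in practice. It is purely illustrative: the endpoint URL, XML namespace and the 'charge' operation (echoing the bank service used as a running example later in the paper) are invented for the example.

\begin{verbatim}
import urllib.request

# A minimal SOAP 1.1 request envelope for a hypothetical 'charge'
# operation; namespace and element names are illustrative only.
ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <charge xmlns="http://bank.example.org/ws">
      <customer>pc</customer>
      <card>1234</card>
      <amount>100</amount>
    </charge>
  </soap:Body>
</soap:Envelope>"""

# The request is an ordinary HTTP POST; the service location and the
# SOAPAction header would normally be read off the WSDL description.
request = urllib.request.Request(
    "http://bank.example.org/ws",
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://bank.example.org/ws/charge"})
# reply = urllib.request.urlopen(request)   # returns the SOAP response
\end{verbatim}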
To move beyond the basic *describe-publish-interact* framework and to better appreciate the real value of web services, mechanisms for service composition are required. Several specifications have been proposed in this area, among which we would like to mention the composition language Web Services Business Process Execution Language (WS-BPEL, [15]), the OASIS standard for orchestration of web services.

In the web services literature [28], the term *orchestration* is used to indicate composition of web services and, in particular, it describes how a collection of web services can interact with each other at the message level, including the business logic and the execution order of the interactions. These interactions may span applications and/or organizations, and result in a long-lived, transactional, multi-step process model. A service orchestration combines services following a certain composition pattern to achieve a business goal or, in general, to provide new service functions. For example, handling a purchase order is the summation of processes that calculate the final price for the order, select a shipper, and schedule the production and shipment for the order.

It is worth emphasizing that service orchestrations may themselves become services, making composition a recursive operation. In the example above, handling a purchase order may become a service that is instantiated to serve each received purchase order separately from other similar requests. This is necessary because a client might be carrying on many simultaneous purchase order interactions with the same service. Service descriptions are thus used as templates for creating service instances that deliver application functionality to either end-user applications or other instances.

The technology supporting tightly coupled communication frameworks typically establishes an active connection between interacting entities that persists for the duration of a given business activity (or even longer). Because the connection remains active, context is inherently present, and correlation between individual transmissions of data is intrinsically managed by the technology protocol itself. Instead, the loosely coupled nature of SOC implies that the same service should be identifiable by means of different logic names and that the connection between communicating instances cannot be assumed to persist for the duration of a whole business activity. Therefore, there is no intrinsic mechanism for associating messages exchanged under a common context or as part of a common activity. Even the execution of a simple request-response message exchange pattern provides no built-in means of automatically associating the response message with the original request. It is up to each single message to provide a form of context, thus enabling services to associate the message with others. This is achieved by embedding values in the message which, once located, can be used to correlate the message with others logically forming the same stateful interaction 'session' (also called 'conversation'). A key observation is that *message correlation* is an essential part of messaging within SOC, as it enables the persistence of activities' context and state across multiple message exchanges while preserving service statelessness and autonomy, and the loosely coupled nature of service-oriented systems.

A further key feature of languages for service composition is the recovery mechanism for long-running business transactions. In SOC environments, the ordinary assumptions about primitive operations in traditional databases (Atomicity, Consistency, Isolation and Durability, ACID) are not applicable in general, because local locks and isolation cannot be maintained for long periods (see [15], Section 12.3).
Therefore, many languages for service composition rely on the concept of *compensation*, i.e., activities that attempt to reverse the effects of a previous activity that was carried out as part of a larger unit of work that is being abandoned.

All aspects of SOC we have just described are at the basis of $\mathcal{COWS}$'s design. This is because we believe that having them as first-class aspects permits a more direct representation and a deeper comprehension of the mechanisms underlying the SOC paradigm. This is witnessed by the several examples described in the paper.

**2.2. An automotive case study**

We introduce here a significant case study [29] in the area of automotive systems, defined within the EU project SENSORIA [30]. We consider a scenario where vehicles are equipped with a multitude of sensors and actuators that provide the driver with services that assist in conducting the vehicle more safely. Driver assistance systems become automatically operative when the vehicle context renders it necessary. Due to the advances in mobile technology, automotive software installed in the vehicles can contact relevant specific services to deal with the driver's necessities.

Specifically, let us consider the case in which, while a driver is on the road with her/his car, the vehicle's *sensors monitor* reports a severe failure, which results in the car being no longer driveable. The car's *discovery* system then identifies garages, car rentals and towing truck services in the car's vicinity. At this point, the car's *reasoner* system chooses a set of adequate services taking into account personalised policies and preferences of the driver, e.g., balancing cost and delay, and tries to order them. To be authorised to order services, the car's system has to deposit on behalf of the car owner a security payment, which will be given back if ordering the services fails. Other components of the in-vehicle service platform involved in this assistance activity are a *GPS* system, providing the car's current location, and an *orchestrator*, coordinating all the described services.

A UML-like activity diagram of the orchestration of services using UML4SOA, a UML Profile for service-oriented systems [31], is shown in Figure 2. The orchestrator is triggered by a signal from the sensors monitor (concerning, e.g., an engine failure) and consequently contacts the other components to locate and compose the various services to reach its goal. The process starts with a request from the orchestrator to the *bank* to charge the car owner's credit card with the security deposit payment. This is modelled by the UML action *CardCharge* for charging the credit card, whose number is provided as an output parameter of the action call. In parallel to the interaction with the bank, the orchestrator requests the current location of the car from the car's internal GPS system. The current location is modelled as an input to the *RequestLocation* action and subsequently used by the *FindServices* interaction, which retrieves a list of services. If no service can be found, an action to compensate the credit card charge will be launched. For the selection of services, the orchestrator synchronises with the reasoner service to obtain the most appropriate services. Service ordering is modelled by the UML actions *OrderGarage*, *OrderTowTruck* and *RentCar*.
When the orchestrator makes an appointment with the garage, the diagnostic data are automatically transferred to the garage, which could then be able, e.g., to identify the spare parts needed to perform the repair. Then, the orchestrator makes an appointment with the towing service, providing the GPS data of the stranded vehicle and of the garage, to tow the vehicle to the garage. Concurrently, the orchestrator makes an appointment with the rental service, by indicating the location (i.e. the GPS coordinates either of the stranded vehicle or of the garage) where the car will be handed over to the driver.

The workflow described in Figure 2 models the overall behaviour of the system. Besides interactions among services, it also includes activities using concepts developed for long-running business transactions (see, e.g., [32, 15]). These activities entail fault and compensation handling, i.e. specific activities attempting to reverse the effects of previously completed activities, which are an important aspect of SOC applications. According to the UML4SOA Profile, the installation of a compensation handler is modelled by an edge stereotyped <<compensationEdge>>, and its activation by an activity stereotyped <<compensate>>. Since each compensation handler is associated with a single UML activity, we omit drawing the enclosing 'scope' construct. Moreover, we use dashed boxes to represent compensation handlers. Specifically, in the considered scenario:

- the security deposit payment charged to the car owner's credit card must be revoked if either the discovery phase does not succeed or ordering the services fails, i.e. both garage/tow truck and car rental services reject the requests;

- if ordering a tow truck fails, the garage appointment has to be cancelled;

- if ordering a garage fails or a garage order cancellation is requested, the rental car delivery has to be redirected to the stranded car's actual location;

- instead, if ordering the car rental fails, it should not affect the tow truck and garage orders.

These requirements motivate the fact that ordering garage/tow truck and renting a car are modelled as activities running in parallel.

**3. The language $\mathcal{COWS}$**

To gradually introduce the technicalities and distinctive features of $\mathcal{COWS}$, we present its syntax and operational semantics in four steps. More specifically, in Section 3.1 we consider $\mu$COWS$^m$ ($\mu$COWS minus priority), the fragment of $\mathcal{COWS}$ without priority, without primitives dealing with termination, and without timed activities. It retains all the other features of $\mathcal{COWS}$, such as global scope of variables and pattern matching. In Section 3.2 we move on to $\mu$COWS (micro COWS), the calculus obtained by enriching $\mu$COWS$^m$ with priority. In Section 3.3 we consider COWS, which extends $\mu$COWS with primitives dealing with termination. Finally, in Section 3.4 we study the full calculus, $\mathcal{COWS}$, which incorporates timed orchestration constructs, thus making it possible to express, e.g., choices among alternative activities constrained by expiration times. For each of the four calculi we show several clarifying examples.

**3.1. $\mu$COWS$^m$: the priority-, protection-, kill- and time-free fragment of $\mathcal{COWS}$**

The fragment of $\mathcal{COWS}$ introduced in this section, namely $\mu$COWS$^m$, dispenses with priority, primitives dealing with termination, and timed activities.

**3.1.1. Syntax**

The syntax of $\mu$COWS$^m$ is presented in Table 1.
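In summary, $\mu$COWS$^m$ terms can be generated by a grammar of the following shape, a sketch consistent with the description given below (where $s$ ranges over services and $g$ over receive-guarded choice; the exact layout of Table 1 may differ):

\[
\begin{array}{rcl}
s & ::= & u \cdot u'\,!\,\bar{e} \;\;\big|\;\; g \;\;\big|\;\; s \mid s \;\;\big|\;\; [u]\,s \;\;\big|\;\; *\,s \\[0.5ex]
g & ::= & \mathbf{0} \;\;\big|\;\; p \cdot o\,?\,\bar{w}.\,s \;\;\big|\;\; g + g
\end{array}
\]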
We use two countable disjoint sets: the set of values (ranged over by $v$, $v'$, ...) and the set of 'write once' variables (ranged over by $x$, $y$, ...). The set of values is left unspecified; however, we assume that it includes the set of names (ranged over by $n$, $m$, $p$, $o$, ...) mainly used to represent partners and operations. We also use a set of expressions (ranged over by $e$), whose exact syntax is deliberately omitted; we just assume that expressions contain, at least, values and variables.

Services are structured activities built from basic activities, i.e. the empty activity $\mathbf{0}$, the invoke activity $\_\cdot\_!\_$ and the receive activity $\_\cdot\_?\_$, by means of prefixing $\_.\,\_$, choice $\_+\_$, parallel composition $\_\,|\,\_$, delimitation $[\_]\,\_$ and replication $*\,\_$. The empty activity does nothing. Invoke and receive are the communication activities, which permit invoking an operation offered by a service and waiting for an invocation to arrive, respectively. Prefixing permits starting the execution of some service activities after the execution of a given basic activity is concluded. Choice permits selecting one of two alternative activities for execution, while parallel composition permits interleaving executions and enables communication between parallel services. Delimitation is used, according to its first argument, for two different purposes: to regulate the range of application of substitutions and to generate fresh names. Finally, replication permits implementing recursive behaviours and persistent services.

We adopt the following conventions about the operators' precedence: monadic operators bind more tightly than parallel composition, and prefixing more tightly than choice. In the sequel, $w$ ranges over values and variables and $u$ ranges over names and variables. The bar notation stands for tuples, e.g. $\bar{x}$ means $\langle x_1, \ldots, x_n \rangle$ (with $n \geq 0$), where variables in the same tuple are pairwise distinct. We write $a, \bar{b}$ to denote the tuple obtained by concatenating the element $a$ to the tuple $\bar{b}$. All notations extend to tuples component-wise. $n$ also ranges over communication endpoints that do not contain variables (e.g. $p \cdot o$), while $u$ also ranges over communication endpoints that may contain variables (e.g. $u \cdot u'$). Sometimes, we will use the notations $n$ and $u$ for the tuples $\langle p, o \rangle$ and $\langle u, u' \rangle$, respectively, and rely on the context to resolve any ambiguity. When convenient, we shall regard a tuple (hence, also an endpoint) simply as a set, writing e.g. $x \in \bar{y}$ to mean that $x$ is an element of $\bar{y}$. We will omit trailing occurrences of $\mathbf{0}$, writing e.g. $p \cdot o?\bar{w}$ instead of $p \cdot o?\bar{w}.\mathbf{0}$, and write $[u_1, \ldots, u_n]\,s$ in place of $[u_1] \ldots [u_n]\,s$. We will write $I \triangleq s$ to assign a name $I$ to the term $s$.

The only binding construct is delimitation: $[u]\,s$ binds $u$ in the scope $s$. In fact, to enable concurrent threads within each service instance to share (part of) the state, receive activities in $\mu$COWS$^m$ bind neither names nor variables. This is different from most process calculi and somewhat similar to the update [33] and fusion [34] calculi. In $\mu$COWS$^m$, however, inter-service communication gives rise to substitutions of variables with values (as in [33]), rather than to fusions of names (as in [34]).
The range of application of the substitutions generated by a communication is regulated by the delimitation operator, which additionally permits generating fresh names (as the restriction operator of the $\pi$-calculus). Thus, the occurrence of a name/variable is free if it is not under the scope of a delimitation for it. Bound and free names are also called private and public names, respectively. We denote by $fu(t)$ the set of names/variables that occur free in $t$. Two terms are $\alpha$-equivalent if one can be obtained from the other by consistently renaming bound names/variables. As usual, we identify terms up to $\alpha$-equivalence.

Partner names and operation names can be combined to designate endpoints, written $p \cdot o$. In fact, like channels in [35], an endpoint is not atomic but results from the composition of a partner name $p$ and of an operation name $o$, which can also be interpreted as a specific implementation of $o$ provided by $p$. This results in a very flexible naming mechanism that allows a service to be identified by means of different logic names (i.e. to play more than one partner role, as in WS-BPEL). For example, the following service

$$p_{slow} \cdot o?\bar{w}.\,s_{slow} \;+\; p_{fast} \cdot o?\bar{w}.\,s_{fast}$$

accepts requests for the same operation $o$ through different partners with distinct access modalities: process $s_{slow}$ implements a slower service provided when the request is processed through the partner $p_{slow}$, while $s_{fast}$ implements a faster service provided when the request arrives through $p_{fast}$. Additionally, the names composing an endpoint can be dealt with separately, as in a request-response interaction, where usually the service provider knows the name of the response operation, but not the partner name of the service it has to reply to. For example, the ping service $p \cdot o_{req}?\langle x\rangle.\,x \cdot o_{res}!\langle\text{``I live''}\rangle$ will know at run-time the partner name for the reply activity. This mechanism is also sufficiently expressive to support the implementation of explicit locations: a located service can be represented by using the same partner for all its receiving endpoints.

Partner and operation names can be exchanged in communication, thus enabling many different interaction patterns among service instances. However, dynamically received names can only be used for service invocation (as in the localised $\pi$-calculus [36]). Indeed, endpoints of receive activities are identified statically, because their syntax only allows using names and not variables.

**Remark 3.1 (Localised receive activities).** As in the localised $\pi$-calculus and differently from the standard $\pi$-calculus, $\mathcal{COWS}$ disallows passing of 'input capability', i.e. the ability of services to receive a name and subsequently accept inputs along an endpoint containing such a name. This choice is motivated, on the one hand, by the fact that the design of $\mathcal{COWS}$ has been influenced by the current (web) service technologies, where endpoints of receive activities are statically determined\(^1\) (recall that service endpoints are not $\pi$-calculus channels) and, on the other hand, by the will to support an easier implementation of the calculus. However, the former is the major motivation. In fact, implementation problems due to input capability could be solved by relying on the theory of linear forwarders [37] as in PiDuce [38].

---

\(^1\)Indeed, if a WS-BPEL process receives an operation name, it cannot make this operation available to other services and then receive messages through it. In fact, this would require the process to be able to modify its WSDL interface at runtime to add the definition of the new operation, but WS-BPEL provides no construct allowing this dynamic change.
To model asynchronous communication, invoke activities cannot be used as prefixes and choice can only be guarded by receive activities (as in the asynchronous $\pi$-calculus [39]). Indeed, in service-oriented systems, communication paradigms are usually asynchronous (as we pointed out in Section 2.1), in the sense that there may be an arbitrary delay between the sending and the receiving of a message, the ordering in which messages are received may differ from that in which they were sent, and a sender cannot determine if and when a sent message will be received.

**3.1.2. Operational semantics**

The operational semantics of $\mu$COWS$^m$ is defined only for closed services, i.e. services without free variables. Following an approach commonly used for process calculi, the semantics is formally given in terms of a structural congruence and of a labelled transition relation.

The structural congruence, written $\equiv$, identifies syntactically different services that intuitively represent the same service. It is defined as the least congruence relation induced by the equational laws shown in Table 2. All the laws are straightforward. In particular, commutativity of consecutive delimitations implies that the order among the $u_i$ in $[u_1] \ldots [u_n]\,s$ is irrelevant, thus in the sequel we may use the simpler notation $[u_1, \ldots, u_n]\,s$. The last law permits extending the scope of names (as in the $\pi$-calculus) and variables, thus enabling possible communication (see the examples "Communication" and "Communication of private names" in Section 3.1.3).

The definition of the labelled transition relation is parameterized by two auxiliary functions; we present here their basic definitions and show in Section 5.1 how they can be specialised to obtain a dialect of the language. Firstly, we use the function $\llbracket \_ \rrbracket$ for evaluating closed expressions (i.e. expressions without variables): it takes a closed expression and returns a value. It is not explicitly defined, since the exact syntax of expressions is deliberately not specified. Secondly, we use the partial function $\mathcal{M}(\_\,,\_)$ for performing pattern-matching on semi-structured data and, thus, determining if a receive and an invoke over the same endpoint can synchronise. The rules defining $\mathcal{M}(\_\,,\_)$ are shown in Table 3:

\[
\begin{array}{c}
\mathcal{M}(x, v) = \{x \mapsto v\}
\qquad
\mathcal{M}(v, v) = \emptyset
\qquad
\mathcal{M}(\langle\rangle, \langle\rangle) = \emptyset
\\[1.5ex]
\dfrac{\mathcal{M}(w_1, v_1) = \sigma_1 \quad \mathcal{M}(\bar{w}_2, \bar{v}_2) = \sigma_2}
      {\mathcal{M}((w_1, \bar{w}_2), (v_1, \bar{v}_2)) = \sigma_1 \uplus \sigma_2}
\end{array}
\]

Table 3: Matching rules

They state that two tuples match if they have the same number of fields and corresponding fields have matching values/variables. Variables match any value, and two values match only if they are identical. When tuples $\bar{w}$ and $\bar{v}$ do match, $\mathcal{M}(\bar{w}, \bar{v})$ returns a substitution for the variables in $\bar{w}$; otherwise, it is undefined. Substitutions (ranged over by $\sigma$) are functions mapping variables to values and are written as collections of pairs of the form $x \mapsto v$. The application of a substitution $\sigma$ to $s$, written $s \cdot \sigma$, has the effect of replacing every free occurrence of $x$ in $s$ with $v$, for each $x \mapsto v \in \sigma$, possibly using $\alpha$-conversion to avoid $v$ being captured by name delimitations within $s$. We use $\emptyset$ to denote the empty substitution, $|\sigma|$ to denote the number of pairs in $\sigma$, and $\sigma_1 \uplus \sigma_2$ to denote the union of $\sigma_1$ and $\sigma_2$ when they have disjoint domains.

The labelled transition relation $\xrightarrow{\alpha}$ is the least relation over services induced by the rules in Table 4:

\[
\begin{array}{c}
\dfrac{\llbracket \bar{e} \rrbracket = \bar{v}}{n!\bar{e} \xrightarrow{\,n \triangleleft \bar{v}\,} \mathbf{0}}\;(\textit{inv})
\qquad
\dfrac{}{n?\bar{w}.s \xrightarrow{\,n \triangleright \bar{w}\,} s}\;(\textit{rec})
\qquad
\dfrac{g \xrightarrow{\,\alpha\,} s}{g + g' \xrightarrow{\,\alpha\,} s}\;(\textit{choice})
\\[2.5ex]
\dfrac{s_1 \xrightarrow{\,n \triangleright \bar{w}\,} s'_1 \quad s_2 \xrightarrow{\,n \triangleleft \bar{v}\,} s'_2 \quad \mathcal{M}(\bar{w}, \bar{v}) = \sigma}{s_1 \mid s_2 \xrightarrow{\,\sigma\,} s'_1 \mid s'_2}\;(\textit{com})
\qquad
\dfrac{s_1 \xrightarrow{\,\alpha\,} s'_1}{s_1 \mid s_2 \xrightarrow{\,\alpha\,} s'_1 \mid s_2}\;(\textit{par})
\\[2.5ex]
\dfrac{s \xrightarrow{\,\sigma \uplus \{x \mapsto v\}\,} s'}{[x]\,s \xrightarrow{\,\sigma\,} s' \cdot \{x \mapsto v\}}\;(\textit{del}_{com})
\qquad
\dfrac{s \xrightarrow{\,\alpha\,} s' \quad u \notin u(\alpha)}{[u]\,s \xrightarrow{\,\alpha\,} [u]\,s'}\;(\textit{del})
\qquad
\dfrac{s \equiv s_1 \xrightarrow{\,\alpha\,} s_2 \equiv s'}{s \xrightarrow{\,\alpha\,} s'}\;(\textit{str})
\end{array}
\]

Table 4: $\mu$COWS$^m$ operational semantics

where the label $\alpha$ is generated by the following grammar:

\[
\alpha ::= n \triangleleft \bar{v} \;\mid\; n \triangleright \bar{w} \;\mid\; \sigma
\]

The meaning of labels is as follows: $n \triangleleft \bar{v}$ and $n \triangleright \bar{w}$ denote execution of invoke and receive activities over the endpoint $n$ with arguments $\bar{v}$ and $\bar{w}$, respectively; $\sigma$ denotes execution of a communication with generated substitution $\sigma$ still to be applied. The empty substitution $\emptyset$ denotes a computational step corresponding to a communication taking place with no pending substitutions. In the sequel, we will use $u(\alpha)$ to denote the set of names and variables occurring in $\alpha$, where $u(x \mapsto v) = \{x\} \cup fu(v)$ and $u(\sigma_1 \uplus \sigma_2) = u(\sigma_1) \cup u(\sigma_2)$.

Let us now comment on the operational rules. A service invocation can proceed only if the expressions in the argument can be evaluated (rule (*inv*)). This means, for example, that if it contains a variable $x$ (in its endpoint or argument), it remains stuck until $x$ is replaced by a value as the result of the execution of a receive assigning a value to $x$. A receive activity offers an invocable operation along a given partner name (rule (*rec*)), and execution of a receive permits taking a decision between alternative behaviours (rule (*choice*)). Communication can take place when two parallel services perform matching receive and invoke activities (rule (*com*)). Communication generates a substitution that is recorded in the transition label (for subsequent application), rather than a silent transition as in most process calculi. Execution of parallel services is interleaved (rule (*par*)). When the delimitation of a variable $x$ argument of a receive involved in a communication is encountered, i.e. the whole scope of the variable is determined, the delimitation is removed and the substitution for $x$ is applied to the term (rule (*del$_{com}$*)). Variable $x$ thus disappears from the term and cannot be reassigned a value (for this reason we say that $\mu$COWS$^m$'s variables are 'write once'). Notably, since in closed services all variables are delimited, the taking place of a communication within such services always corresponds to a computational step and leads to services that are closed too. $[u]\,s$ behaves like $s$ (rule (*del*)), except when the transition label $\alpha$ contains $u$. Rule (*str*) is standard and states that structurally congruent services have the same transitions.
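To make the matching mechanism concrete, the following Python sketch implements the rules of Table 3. It is purely an illustration; the representation choices (variables as strings prefixed by '?', tuples as Python tuples) are ours, not part of the calculus definition:

\begin{verbatim}
def is_var(w):
    # Representation choice: variables are strings starting with '?'
    return isinstance(w, str) and w.startswith("?")

def match(w, v):
    """M(w, v): return a substitution as a dict, or None if undefined."""
    if is_var(w):                     # M(x, v) = {x -> v}
        return {w: v}
    if isinstance(w, tuple) and isinstance(v, tuple):
        if len(w) != len(v):          # tuples must have the same length
            return None
        sigma = {}
        for wi, vi in zip(w, v):      # component-wise matching ...
            si = match(wi, vi)
            if si is None:
                return None
            sigma.update(si)          # ... joined by disjoint union
            # (variables in a pattern are pairwise distinct, so
            #  update() really is a disjoint union here)
        return sigma
    return {} if w == v else None     # M(v, v) = empty substitution
\end{verbatim}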
**3.1.3. Examples**

We report here a few examples aimed at clarifying the peculiarities of $\mu$COWS$^m$. For the sake of presentation, the examples focus on a part of the automotive case study described in Section 2.2 that involves the interactions with a service of the car owner's bank. This service allows its clients to charge a credit card for a specified amount by sending charge requests via the endpoint $p_{bank} \cdot o_{charge}$. A client, besides his credit card number, the amount to be charged and the timestamp (i.e. date and time) of the transaction, is required to provide the partner name that he will use to receive a response.

**Communication.** Communication can exploit scope extension (last law of Table 2) to allow receive and invoke activities to interact. In fact, they can synchronise only if both are in the scope of the delimitations that bind the variables argument of the receive. Thus, we must possibly extend the scopes of some variables, as in the following example, where a client with partner name $p_c$ invokes the bank service for charging his credit card 1234 with 100 euros at time $t$:

\[
\begin{array}{l}
p_{bank} \cdot o_{charge}!\langle p_c, 1234, 100, t\rangle \;\mid\; [x_{cust}, x_{cc}, x_{amount}, x_{ts}]\,(p_{bank} \cdot o_{charge}?\langle x_{cust}, x_{cc}, x_{amount}, x_{ts}\rangle.\,s \mid s') \\[1ex]
\equiv\;\; [x_{cust}, x_{cc}, x_{amount}, x_{ts}]\,(p_{bank} \cdot o_{charge}!\langle p_c, 1234, 100, t\rangle \mid p_{bank} \cdot o_{charge}?\langle x_{cust}, x_{cc}, x_{amount}, x_{ts}\rangle.\,s \mid s') \\[1ex]
\xrightarrow{\,\emptyset\,}\;\; (s \mid s') \cdot \{x_{cust} \mapsto p_c,\, x_{cc} \mapsto 1234,\, x_{amount} \mapsto 100,\, x_{ts} \mapsto t\}
\end{array}
\]

Notice that, as shown by the inference of the above transition reported in Table 5, the substitution $\{x_{cust} \mapsto p_c, x_{cc} \mapsto 1234, x_{amount} \mapsto 100, x_{ts} \mapsto t\}$ is applied to all terms delimited by $[x_{cust}, x_{cc}, x_{amount}, x_{ts}]$, not only to the continuation $s$ of the service performing the receive. This is different from most process calculi and accounts for the global scope of variables. This very feature permits, e.g., easily modelling the *delayed input* of the fusion calculus [34], which is instead difficult to express in the $\pi$-calculus.

In Table 5, we abbreviate $x_{cust}, x_{cc}, x_{amount}, x_{ts}$ as $\bar{x}$, the tuple $\langle p_c, 1234, 100, t\rangle$ as $\bar{v}$, the generated substitution as $\sigma$, and the parallel composition $p_{bank} \cdot o_{charge}!\bar{v} \mid p_{bank} \cdot o_{charge}?\bar{x}.\,s \mid s'$ as $(\ldots)$:

\[
\begin{array}{ll}
p_{bank} \cdot o_{charge}!\bar{v} \xrightarrow{\,p_{bank} \cdot o_{charge} \triangleleft \bar{v}\,} \mathbf{0} & (\textit{inv}) \\[1ex]
p_{bank} \cdot o_{charge}?\bar{x}.\,s \xrightarrow{\,p_{bank} \cdot o_{charge} \triangleright \bar{x}\,} s & (\textit{rec}) \\[1ex]
\mathcal{M}(\bar{x}, \bar{v}) = \sigma = \{x_{cust} \mapsto p_c,\, x_{cc} \mapsto 1234,\, x_{amount} \mapsto 100,\, x_{ts} \mapsto t\} & \\[1ex]
p_{bank} \cdot o_{charge}!\bar{v} \mid p_{bank} \cdot o_{charge}?\bar{x}.\,s \xrightarrow{\,\sigma\,} s & (\textit{com}) \\[1ex]
p_{bank} \cdot o_{charge}!\bar{v} \mid p_{bank} \cdot o_{charge}?\bar{x}.\,s \mid s' \xrightarrow{\,\sigma\,} s \mid s' & (\textit{par}) \\[1ex]
[x_{ts}]\,(\ldots) \xrightarrow{\,\{x_{cust} \mapsto p_c,\, x_{cc} \mapsto 1234,\, x_{amount} \mapsto 100\}\,} (s \mid s') \cdot \{x_{ts} \mapsto t\} & (\textit{del}_{com}) \\[1ex]
[x_{amount}, x_{ts}]\,(\ldots) \xrightarrow{\,\{x_{cust} \mapsto p_c,\, x_{cc} \mapsto 1234\}\,} (s \mid s') \cdot \{x_{amount} \mapsto 100,\, x_{ts} \mapsto t\} & (\textit{del}_{com}) \\[1ex]
[x_{cc}, x_{amount}, x_{ts}]\,(\ldots) \xrightarrow{\,\{x_{cust} \mapsto p_c\}\,} (s \mid s') \cdot \{x_{cc} \mapsto 1234,\, x_{amount} \mapsto 100,\, x_{ts} \mapsto t\} & (\textit{del}_{com}) \\[1ex]
[x_{cust}, x_{cc}, x_{amount}, x_{ts}]\,(\ldots) \xrightarrow{\,\emptyset\,} (s \mid s') \cdot \sigma & (\textit{del}_{com})
\end{array}
\]

Table 5: Inference of a computational step

**Communication of private names.** Communication of private names is standard and exploits scope extension as in the $\pi$-calculus. To enable communication of private names, besides their scopes, we must possibly extend the scopes of some variables. Consider a modification of the previous example where the scope of the partner name $p_c$ is restricted to the invoke activity, with $p_c$ fresh in $s$ and $s'$. Now, the communication can take place as follows:

\[
\begin{array}{l}
[p_c]\, p_{bank} \cdot o_{charge}!\langle p_c, 1234, 100, t\rangle \;\mid\; [x_{cust}, x_{cc}, x_{amount}, x_{ts}]\,(p_{bank} \cdot o_{charge}?\langle x_{cust}, x_{cc}, x_{amount}, x_{ts}\rangle.\,s \mid s') \\[1ex]
\equiv\;\; [p_c, x_{cust}, x_{cc}, x_{amount}, x_{ts}]\,(p_{bank} \cdot o_{charge}!\langle p_c, 1234, 100, t\rangle \mid p_{bank} \cdot o_{charge}?\langle x_{cust}, x_{cc}, x_{amount}, x_{ts}\rangle.\,s \mid s') \\[1ex]
\xrightarrow{\,\emptyset\,}\;\; [p_c]\,((s \mid s') \cdot \{x_{cust} \mapsto p_c,\, x_{cc} \mapsto 1234,\, x_{amount} \mapsto 100,\, x_{ts} \mapsto t\})
\end{array}
\]
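Referring back to the matching sketch given after Table 4, the substitution generated in the bank example can be computed as follows (illustrative only, reusing the match function defined there):

\begin{verbatim}
pattern = ("?x_cust", "?x_cc", "?x_amount", "?x_ts")
message = ("p_c", 1234, 100, "t")
print(match(pattern, message))
# -> {'?x_cust': 'p_c', '?x_cc': 1234, '?x_amount': 100, '?x_ts': 't'}
\end{verbatim}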
**Persistent services.** The replication operator, which spawns in parallel as many copies of its argument term as necessary (law $*\,s \equiv s \mid *\,s$ of Table 2), permits specifying *persistent* services, i.e. services capable of creating multiple instances to serve several requests simultaneously\(^2\). Thus, the bank service previously introduced can be made persistent by simply applying the replication operator to the $\mu$COWS$^m$ term, as shown in the following example, where the (persistent) service definition runs in parallel with two clients:

\[
\begin{array}{l}
(p_{bank} \cdot o_{charge}!\langle p_{cA}, 1234, 100, t_A\rangle \;\mid\; [x]\, p_{cA} \cdot o_{resp}?\langle x, t_A\rangle.\,s_A) \\
\mid\; (p_{bank} \cdot o_{charge}!\langle p_{cB}, 5678, 200, t_B\rangle \;\mid\; [y]\, p_{cB} \cdot o_{resp}?\langle y, t_B\rangle.\,s_B) \\
\mid\; *\,[x_{cust}, x_{cc}, x_{amount}, x_{ts}]\; p_{bank} \cdot o_{charge}?\langle x_{cust}, x_{cc}, x_{amount}, x_{ts}\rangle.\;x_{cust} \cdot o_{resp}!\langle check(x_{cc}, x_{amount}), x_{ts}\rangle
\end{array}
\]

For each client request, the bank service creates an instance that replies to the corresponding client with a message, containing the result of the transaction and the timestamp, along either the endpoint $p_{cA} \cdot o_{resp}$ or $p_{cB} \cdot o_{resp}$. Here, for the sake of simplicity, the acceptance or rejection of a charge request is the result of the evaluation of a function $check(\_,\_)$, which is left unspecified, that takes as arguments a credit card number and an amount. Symmetrically, client $A$ (resp. $B$) invokes the bank service and, once a response along $p_{cA} \cdot o_{resp}$ (resp. $p_{cB} \cdot o_{resp}$) is received, proceeds as $s_A$ (resp. $s_B$). After a computational step, due to the interaction between the service definition and client $A$, a new instance (the last line below) runs in parallel with the other terms:

\[
\begin{array}{l}
[x]\, p_{cA} \cdot o_{resp}?\langle x, t_A\rangle.\,s_A \\
\mid\; (p_{bank} \cdot o_{charge}!\langle p_{cB}, 5678, 200, t_B\rangle \;\mid\; [y]\, p_{cB} \cdot o_{resp}?\langle y, t_B\rangle.\,s_B) \\
\mid\; *\,[x_{cust}, x_{cc}, x_{amount}, x_{ts}]\; p_{bank} \cdot o_{charge}?\langle x_{cust}, x_{cc}, x_{amount}, x_{ts}\rangle.\;x_{cust} \cdot o_{resp}!\langle check(x_{cc}, x_{amount}), x_{ts}\rangle \\
\mid\; p_{cA} \cdot o_{resp}!\langle check(1234, 100), t_A\rangle
\end{array}
\]

If, similarly, client $B$ invokes the service, a second instance (the last line below) is created:

\[
\begin{array}{l}
[x]\, p_{cA} \cdot o_{resp}?\langle x, t_A\rangle.\,s_A \;\mid\; [y]\, p_{cB} \cdot o_{resp}?\langle y, t_B\rangle.\,s_B \\
\mid\; *\,[x_{cust}, x_{cc}, x_{amount}, x_{ts}]\; p_{bank} \cdot o_{charge}?\langle x_{cust}, x_{cc}, x_{amount}, x_{ts}\rangle.\;x_{cust} \cdot o_{resp}!\langle check(x_{cc}, x_{amount}), x_{ts}\rangle \\
\mid\; p_{cA} \cdot o_{resp}!\langle check(1234, 100), t_A\rangle \\
\mid\; p_{cB} \cdot o_{resp}!\langle check(5678, 200), t_B\rangle
\end{array}
\]

Now, the two instances can reply to the corresponding clients by invoking the operation $o_{resp}$ through the two different client partner names $p_{cA}$ and $p_{cB}$. Thus, assuming that the $check$ function returns $ok$ for $A$'s request and $fail$ for $B$'s one, after two computational steps the system becomes

\[
s_A \cdot \{x \mapsto ok\} \;\mid\; s_B \cdot \{y \mapsto fail\} \;\mid\; *\,[x_{cust}, x_{cc}, x_{amount}, x_{ts}]\; p_{bank} \cdot o_{charge}?\langle x_{cust}, x_{cc}, x_{amount}, x_{ts}\rangle.\;x_{cust} \cdot o_{resp}!\langle check(x_{cc}, x_{amount}), x_{ts}\rangle
\]

**Services' execution modalities.** In $\mu$COWS$^m$, a service can be modelled by a term of the form $*\,[\bar u]\,s$, where the tuple $\bar u$ contains all the free variables of $s$.

---

\(^2\)It is worth noticing that this is the standard behaviour of web services and, in particular, this is always the case for services resulting from WS-BPEL orchestrations [15, Section 5.5].
The use of replication enables providing as many concurrent instances as needed, while that of delimitation permits modelling the state (by restricting the scope of variables). This means that the previous term corresponds to a service whose instances do not share any state. For instance, consider the following service definition:

\[
*\,[x_1, \ldots, x_n]\; p \cdot o?\langle x_1\rangle.\,s
\]

If we put it in parallel with the invocation $p \cdot o!\langle v_1\rangle$, the resulting system can evolve as follows:

\[
*\,[x_1, \ldots, x_n]\; p \cdot o?\langle x_1\rangle.\,s \;\mid\; p \cdot o!\langle v_1\rangle
\;\xrightarrow{\,\emptyset\,}\;
*\,[x_1, \ldots, x_n]\; p \cdot o?\langle x_1\rangle.\,s \;\mid\; [x_2, \ldots, x_n]\; s \cdot \{x_1 \mapsto v_1\}
\]

Each time an invocation is processed, a new service instance with private variables $x_2, \ldots, x_n$ is activated. For example, if we have two concurrent invocations, we get

\[
*\,[x_1, \ldots, x_n]\; p \cdot o?\langle x_1\rangle.\,s \;\mid\; p \cdot o!\langle v_1\rangle \;\mid\; p \cdot o!\langle v_2\rangle
\;\xrightarrow{\,\emptyset\,}\;\xrightarrow{\,\emptyset\,}\;
*\,[x_1, \ldots, x_n]\; p \cdot o?\langle x_1\rangle.\,s \;\mid\; [x_2, \ldots, x_n]\; s \cdot \{x_1 \mapsto v_1\} \;\mid\; [x_2, \ldots, x_n]\; s \cdot \{x_1 \mapsto v_2\}
\]

The resulting system is composed of the service definition and of two different instances, each with its own state. To allow instances of the same service to share (part of) the state, we move the delimitations of the variables to be shared outside the scope of the replication. Thus, if $x_1, \ldots, x_k$ are shared and $x_{k+1}, \ldots, x_n$ are not, the previous example can be modified as follows:

\[
[x_1, \ldots, x_k]\; *\,[x_{k+1}, \ldots, x_n]\; p \cdot o?\langle x_1\rangle.\,s
\]

After a parallel request $p \cdot o!\langle v_1\rangle$ has been processed, we have:

\[
[x_2, \ldots, x_k]\,(\,*\,[x_{k+1}, \ldots, x_n]\; p \cdot o?\langle v_1\rangle.\,(s \cdot \{x_1 \mapsto v_1\}) \;\mid\; [x_{k+1}, \ldots, x_n]\; s \cdot \{x_1 \mapsto v_1\}\,)
\]

In this case, since $x_1$ is shared both by the service definition and by its instances, new instances can be created only if the service definition receives requests along $p \cdot o$ carrying the same value (i.e. $v_1$) as the first invocation. In general, however, instantiation variables, such as $x_1$, are not shared, in order to allow service invocations with different arguments to trigger instance creation. To model this behaviour, we can simply leave instantiation variables within the scope of replication. Consider for example the term:

\[
[x_2]\; *\,[x_1, x_3]\; p \cdot o?\langle x_1\rangle.\,s
\]

If the requests $p \cdot o!\langle v_1\rangle$ and $p \cdot o!\langle v_2\rangle$ are put in parallel, the resulting system can evolve as follows:

\[
[x_2]\; *\,[x_1, x_3]\; p \cdot o?\langle x_1\rangle.\,s \;\mid\; p \cdot o!\langle v_1\rangle \;\mid\; p \cdot o!\langle v_2\rangle
\;\xrightarrow{\,\emptyset\,}\;\xrightarrow{\,\emptyset\,}\;
[x_2]\,(\,*\,[x_1, x_3]\; p \cdot o?\langle x_1\rangle.\,s \;\mid\; [x_3]\; s \cdot \{x_1 \mapsto v_1\} \;\mid\; [x_3]\; s \cdot \{x_1 \mapsto v_2\}\,)
\]

After two computational steps, two instances are activated, each with a local state (i.e. the variable $x_3$) and sharing the variable $x_2$.
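The interplay between private and shared variables just described can be mimicked operationally by a small Python sketch. This is an illustration under our own representation choices, not part of the calculus:

\begin{verbatim}
class ReplicatedService:
    """*[x1,...]s with some variables delimited outside the replication."""
    def __init__(self, shared_vars):
        self.shared = {x: None for x in shared_vars}  # 'write once' bindings
        self.instances = []          # one private environment per instance

    def deliver(self, var, value):
        """Process an invocation p.o!<value> against the pattern <var>."""
        if var in self.shared:
            if self.shared[var] is not None and self.shared[var] != value:
                return None          # shared binding already fixed: no match
            self.shared[var] = value # the first request fixes the shared value
            env = {}
        else:
            env = {var: value}       # instantiation variable stays local
        self.instances.append(env)
        return env

svc = ReplicatedService(shared_vars=["x2"])
svc.deliver("x1", "v1")   # first instance, local binding x1 -> v1
svc.deliver("x1", "v2")   # second instance with a different argument
\end{verbatim}

Had "x1" been listed among the shared variables instead, the second call would return None, mirroring the fact that only requests carrying the first value can trigger new instances.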
**Message correlation.** The loosely coupled nature of SOC implies that the connection between communicating instances cannot be assumed to persist for the duration of a whole business activity. Therefore, it is up to each single message to provide a form of context that enables services to associate the message with others. This is achieved by embedding values, called *correlation data*, in the content of the message itself. Pattern-matching is the mechanism used to locate such data, which identify the service instance a message has to be delivered to.

To explain how message correlation is realized in $\mu$COWS$^m$, let us consider a variant of the bank service composed of two persistent subservices: *BankInterface*, which is publicly invocable by customers, and *CreditRating*, which instead is an 'internal' service that can only interact with BankInterface (indeed, all the operations used by CreditRating, i.e. $o_{check}$, $o_{checkOK}$ and $o_{checkFail}$, are restricted, and this prevents them from being invoked from the outside). Specifically, Bank is the $\mu$COWS$^m$ term

\[
[o_{check}, o_{checkOK}, o_{checkFail}]\,(\,*\,\textit{BankInterface} \;\mid\; *\,\textit{CreditRating}\,)
\]

where BankInterface and CreditRating are defined as follows:

\[
\begin{array}{rcl}
\textit{BankInterface} & \triangleq & [x_{cust}, x_{cc}, x_{amount}, x_{ts}]\; p_{bank} \cdot o_{charge}?\langle x_{cust}, x_{cc}, x_{amount}, x_{ts}\rangle. \\
& & \quad (\, p_{bank} \cdot o_{check}!\langle x_{ts}, x_{cc}, x_{amount}\rangle \\
& & \quad \mid\; [x_{info}]\,(\, p_{bank} \cdot o_{checkFail}?\langle x_{ts}, x_{cc}, x_{info}\rangle.\; x_{cust} \cdot o_{resp}!\langle fail, x_{ts}, x_{info}\rangle \\
& & \qquad\;\; +\; p_{bank} \cdot o_{checkOK}?\langle x_{ts}, x_{cc}, x_{info}\rangle.\; x_{cust} \cdot o_{resp}!\langle ok, x_{ts}, x_{info}\rangle\,)\,) \\[1.5ex]
\textit{CreditRating} & \triangleq & [x_{ts}, x_{cc}, x_{a}]\; p_{bank} \cdot o_{check}?\langle x_{ts}, x_{cc}, x_{a}\rangle. \\
& & \quad [p, o]\,(\, p \cdot o!\langle\rangle \\
& & \quad \mid\; (\, p \cdot o?\langle\rangle.\; p_{bank} \cdot o_{checkOK}!\langle x_{ts}, x_{cc}, ratingInfo(x_{cc}, x_{a})\rangle \\
& & \qquad +\; p \cdot o?\langle\rangle.\; p_{bank} \cdot o_{checkFail}!\langle x_{ts}, x_{cc}, ratingInfo(x_{cc}, x_{a})\rangle\,)\,)
\end{array}
\]

Whenever prompted by a client request, BankInterface creates an instance to serve that specific request and is immediately ready to concurrently serve other requests. Each instance forwards the request to CreditRating, by invoking the internal operation $o_{check}$ through the invoke activity $p_{bank} \cdot o_{check}!\langle x_{ts}, x_{cc}, x_{amount}\rangle$, then waits for a reply on one of the other two internal operations, $o_{checkFail}$ and $o_{checkOK}$, by exploiting the receive-guarded choice operator, and finally sends the reply back to the client by means of a final invoke activity using the partner name of the client stored in the variable $x_{cust}$. Service CreditRating takes care of checking clients' requests and decides if they can be authorised or not. For the sake of simplicity, the choice between approving a request or not is left here completely non-deterministic, and rating information is calculated by an (unspecified) function $ratingInfo(\_,\_)$.
Consider now the above 'compound' bank service running in parallel with two clients:

\[
\begin{array}{l}
(p_{bank} \cdot o_{charge}!\langle p_{cA}, 1234, 100, t_A\rangle \;\mid\; [x, x_i]\; p_{cA} \cdot o_{resp}?\langle x, t_A, x_i\rangle.\,s_A) \\
\mid\; (p_{bank} \cdot o_{charge}!\langle p_{cB}, 5678, 200, t_B\rangle \;\mid\; [y, y_i]\; p_{cB} \cdot o_{resp}?\langle y, t_B, y_i\rangle.\,s_B) \\
\mid\; [o_{check}, o_{checkOK}, o_{checkFail}]\,(\,*\,\textit{BankInterface} \;\mid\; *\,\textit{CreditRating}\,)
\end{array}
\]

After a certain number of computational steps have taken place, two instances of BankInterface and two of CreditRating would have been created, and the system would be:

\[
\begin{array}{l}
[x, x_i]\; p_{cA} \cdot o_{resp}?\langle x, t_A, x_i\rangle.\,s_A \;\mid\; [y, y_i]\; p_{cB} \cdot o_{resp}?\langle y, t_B, y_i\rangle.\,s_B \\
\mid\; [o_{check}, o_{checkOK}, o_{checkFail}]\, \\
\quad (\,*\,\textit{BankInterface} \;\mid\; *\,\textit{CreditRating} \\
\quad \mid\; [x_{info}]\,(\, p_{bank} \cdot o_{checkFail}?\langle t_A, 1234, x_{info}\rangle.\; p_{cA} \cdot o_{resp}!\langle fail, t_A, x_{info}\rangle \\
\qquad +\; p_{bank} \cdot o_{checkOK}?\langle t_A, 1234, x_{info}\rangle.\; p_{cA} \cdot o_{resp}!\langle ok, t_A, x_{info}\rangle\,) \\
\quad \mid\; p_{bank} \cdot o_{checkOK}!\langle t_A, 1234, ratingInfo(1234, 100)\rangle \\
\quad \mid\; [x_{info}]\,(\, p_{bank} \cdot o_{checkFail}?\langle t_B, 5678, x_{info}\rangle.\; p_{cB} \cdot o_{resp}!\langle fail, t_B, x_{info}\rangle \\
\qquad +\; p_{bank} \cdot o_{checkOK}?\langle t_B, 5678, x_{info}\rangle.\; p_{cB} \cdot o_{resp}!\langle ok, t_B, x_{info}\rangle\,) \\
\quad \mid\; p_{bank} \cdot o_{checkFail}!\langle t_B, 5678, ratingInfo(5678, 200)\rangle\,)
\end{array}
\]

Notably, the BankInterface instance created to serve client $A$ (resp. $B$) is identified by the client data $t_A$ and 1234 (resp. $t_B$ and 5678), which are exploited as correlation values. In fact, we assume that, from the point of view of the bank service, each client request is uniquely identified by the timestamp of the transaction and the client's credit card. From the point of view of the client, instead, supposing that he has only one credit card and has sent several charge requests for it, the timestamp alone would be enough to correlate a bank service response with a client instance. Recall that it is the responsibility of the service programmer to identify the proper correlation data in a given conversation.

Now, if the invocation along the endpoint $p_{bank} \cdot o_{checkOK}$ is performed (we assume $ratingInfo(1234, 100) = info$), since the sent message contains the correlation data $t_A$ and 1234, the interaction takes place with the instance created to serve client $A$ (indeed, $\mathcal{M}(\langle t_B, 5678, x_{info}\rangle, \langle t_A, 1234, info\rangle)$ is undefined):

\[
\begin{array}{l}
[x, x_i]\; p_{cA} \cdot o_{resp}?\langle x, t_A, x_i\rangle.\,s_A \;\mid\; [y, y_i]\; p_{cB} \cdot o_{resp}?\langle y, t_B, y_i\rangle.\,s_B \\
\mid\; [o_{check}, o_{checkOK}, o_{checkFail}]\, \\
\quad (\,*\,\textit{BankInterface} \;\mid\; *\,\textit{CreditRating} \\
\quad \mid\; p_{cA} \cdot o_{resp}!\langle ok, t_A, info\rangle \\
\quad \mid\; [x_{info}]\,(\, p_{bank} \cdot o_{checkFail}?\langle t_B, 5678, x_{info}\rangle.\; p_{cB} \cdot o_{resp}!\langle fail, t_B, x_{info}\rangle \\
\qquad +\; p_{bank} \cdot o_{checkOK}?\langle t_B, 5678, x_{info}\rangle.\; p_{cB} \cdot o_{resp}!\langle ok, t_B, x_{info}\rangle\,) \\
\quad \mid\; p_{bank} \cdot o_{checkFail}!\langle t_B, 5678, ratingInfo(5678, 200)\rangle\,)
\end{array}
\]

Therefore, although two BankInterface instances waiting for a message along the endpoint $p_{bank} \cdot o_{checkOK}$ were available when the invocation was performed, the message sent by the CreditRating instance has been delivered to the correct one. It is worth noticing that, as witnessed by the above example, this correlation mechanism is flexible enough to allow a single message to participate in multiparty conversations (indeed, the above conversation involves one provider service and two clients).
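The routing performed above can be mimicked in a few lines of Python (an illustration only, reusing the match function sketched earlier; endpoint and instance names are ours):

\begin{verbatim}
# Receive patterns of the two BankInterface instances waiting on the
# (restricted) endpoint for o_checkOK; the correlation values are
# embedded in the patterns, the third field is a variable.
pending = [(("tA", 1234, "?x_info"), "instance for client A"),
           (("tB", 5678, "?x_info"), "instance for client B")]

def route(message):
    """Deliver a message to the instance whose pattern matches it."""
    for pattern, instance in pending:
        sigma = match(pattern, message)
        if sigma is not None:
            return instance, sigma
    return None

print(route(("tA", 1234, "info")))
# -> ('instance for client A', {'?x_info': 'info'})
\end{verbatim}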
Notice also that, differently from other correlation-based formal languages for SOC, such as ws-calculus [7], SOCK [12] and Blite [40], correlation variables in COWS are not syntactically distinguished from other data variables. In fact, correlation variables can be recognized by their use (as the variables $x_{ts}$ and $x_{cc}$ in the example above). This is due to the fact that COWS is intended to be a foundational formalism, with a small number of simple primitives.

**3.2. $\mu$COWS: the protection-, kill- and time-free fragment of $\mathcal{COWS}$**

The fragment of $\mathcal{COWS}$ presented in this section, namely $\mu$COWS, extends $\mu$COWS$^m$ with priority among concurrent activities.

**3.2.1. Syntax and operational semantics**

The syntax of $\mu$COWS and the set of laws defining the structural congruence coincide with those of $\mu$COWS$^m$, shown in Tables 1 and 2, respectively. Instead, the labelled transition relation $\xrightarrow{\alpha}$ is the least relation over $\mu$COWS services induced by the rules in Tables 4 and 6, where rules (*com$_2$*), (*par$_2$*) and (*del$_{com2}$*) replace (*com*), (*par*) and (*del$_{com}$*), respectively:

\[
\begin{array}{c}
\dfrac{s_1 \xrightarrow{\,n \triangleright \bar{w}\,} s'_1 \quad s_2 \xrightarrow{\,n \triangleleft \bar{v}\,} s'_2 \quad \mathcal{M}(\bar{w}, \bar{v}) = \sigma \quad \text{noConf}(s_1 \mid s_2, n, \bar{v}, |\sigma|)}{s_1 \mid s_2 \xrightarrow{\,n\,\sigma\,|\sigma|\,\bar{v}\,} s'_1 \mid s'_2}\;(\textit{com}_2)
\\[2.5ex]
\dfrac{s_1 \xrightarrow{\,\alpha\,} s'_1 \quad \alpha \neq n\,\sigma\,\ell\,\bar{v}}{s_1 \mid s_2 \xrightarrow{\,\alpha\,} s'_1 \mid s_2}\;(\textit{par}_2)
\qquad
\dfrac{s_1 \xrightarrow{\,n\,\sigma\,\ell\,\bar{v}\,} s'_1 \quad \text{noConf}(s_2, n, \bar{v}, \ell)}{s_1 \mid s_2 \xrightarrow{\,n\,\sigma\,\ell\,\bar{v}\,} s'_1 \mid s_2}\;(\textit{par}_{com})
\\[2.5ex]
\dfrac{s \xrightarrow{\,n\,(\sigma \uplus \{x \mapsto v\})\,\ell\,\bar{v}\,} s'}{[x]\,s \xrightarrow{\,n\,\sigma\,\ell\,\bar{v}\,} s' \cdot \{x \mapsto v\}}\;(\textit{del}_{com2})
\end{array}
\]

Table 6: $\mu$COWS operational semantics (additional rules)

Labels are now generated by the following grammar:

\[
\alpha ::= n \triangleleft \bar{v} \;\mid\; n \triangleright \bar{w} \;\mid\; n\,\sigma\,\ell\,\bar{v}
\]

The new label $n\,\sigma\,\ell\,\bar{v}$ enriches the previous communication label $\sigma$ with information about the communication that has taken place, i.e. the endpoint, the transmitted values, and the length of the originally generated substitution. This information is carried along during the inference of a computational step to enforce priority-based execution in the presence of conflicting receives. Specifically, $n\,\sigma\,\ell\,\bar{v}$ (with $\ell$ a natural number) denotes execution of a communication over $n$ with matching values $\bar{v}$, an originally generated substitution having $\ell$ pairs, and a substitution $\sigma$ still to be applied. Now, computational steps are denoted by labels of the form $n\,\emptyset\,\ell\,\bar{v}$. The notation $u(\alpha)$, indicating the set of names and variables occurring in $\alpha$, is extended by letting $u(n\,\sigma\,\ell\,\bar{v}) = u(\sigma)$.

The definition of the labelled transition relation exploits an auxiliary *no conflict* predicate $\text{noConf}(s, n, \bar{v}, \ell)$. The predicate, defined inductively by the clauses in Table 7, holds true if $s$ cannot immediately perform a receive over the endpoint $n$ matching $\bar{v}$ and generating a substitution $\sigma$ with $|\sigma| < \ell$:

\[
\begin{array}{rcl}
\text{noConf}(u!\bar{e}, n, \bar{v}, \ell) \;=\; \text{noConf}(\mathbf{0}, n, \bar{v}, \ell) & = & \text{true} \\[1ex]
\text{noConf}(n'?\bar{w}.s, n, \bar{v}, \ell) & = & \begin{cases} \text{false} & \text{if } n' = n \,\wedge\, |\mathcal{M}(\bar{w}, \bar{v})| < \ell \\ \text{true} & \text{otherwise} \end{cases} \\[2.5ex]
\text{noConf}(g + g', n, \bar{v}, \ell) & = & \text{noConf}(g, n, \bar{v}, \ell) \wedge \text{noConf}(g', n, \bar{v}, \ell) \\[1ex]
\text{noConf}(s \mid s', n, \bar{v}, \ell) & = & \text{noConf}(s, n, \bar{v}, \ell) \wedge \text{noConf}(s', n, \bar{v}, \ell) \\[1ex]
\text{noConf}([u]\,s, n, \bar{v}, \ell) & = & \begin{cases} \text{noConf}(s, n, \bar{v}, \ell) & \text{if } u \notin n \\ \text{true} & \text{otherwise} \end{cases} \\[2.5ex]
\text{noConf}(*\,s, n, \bar{v}, \ell) & = & \text{noConf}(s, n, \bar{v}, \ell)
\end{array}
\]

Table 7: There are no conflicting receives along $n$ matching $\bar{v}$

Notably, in the clauses for the choice and parallel operators, the predicate holds true if and only if none of the operator's arguments contains conflicting receives.

We now comment on the new rules. In $\mu$COWS, as mentioned above, the communication label $n\,\sigma\,\ell\,\bar{v}$, produced by rule (*com$_2$*), carries information used to check for the presence of conflicting receives in parallel components. Indeed, if more than one matching is possible, the receive that needs fewer substitutions is selected to progress (rules (*com$_2$*) and (*par$_{com}$*)). This mechanism makes it possible to correlate different service communications, thus implicitly creating interaction sessions, and can be exploited to model the precedence of a service instance over the corresponding service specification when both can process the same request (see Section 3.2.2 for some examples). Rule (*del$_{com2}$*) is similar to (*del$_{com}$*) (shown in Table 4) but deals with labels generated by communications subject to priority. Notably, during the inference of a transition labelled by $n\,\sigma\,\ell\,\bar{v}$, the length of the substitution still to be applied decreases, while the length $\ell$ of the initial substitution never changes, which makes it possible to check, at any moment, for the existence of a better matching, i.e. of parallel receives with greater priority. Execution of parallel services is interleaved (rule (*par$_2$*)), except when a communication is performed: in that case, the progress of the receive activity with greater priority must be ensured (rule (*par$_{com}$*)).
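The clauses of Table 7 translate directly into executable form; the following Python sketch (illustrative only, over our own tagged-tuple representation of terms and reusing the match function from the sketch in Section 3.1.2) may help to see the predicate at work:

\begin{verbatim}
def no_conf(s, n, values, ell):
    """Clauses of Table 7; terms are tuples tagged by their operator."""
    tag = s[0]
    if tag in ("inv", "nil"):                # invoke and empty activity
        return True
    if tag == "rec":                         # n'?w.s
        _, n1, pattern, _cont = s
        sigma = match(pattern, values)       # None when M is undefined
        return not (n1 == n and sigma is not None and len(sigma) < ell)
    if tag in ("choice", "par"):             # conjunction of the arguments
        return (no_conf(s[1], n, values, ell) and
                no_conf(s[2], n, values, ell))
    if tag == "delim":                       # [u]s: true if u occurs in n
        _, u, body = s
        return True if u in n else no_conf(body, n, values, ell)
    if tag == "repl":                        # *s
        return no_conf(s[1], n, values, ell)
    raise ValueError("unknown term: %r" % (tag,))
\end{verbatim}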
**3.2.2. Examples**

We present now some examples and observations that point out the peculiarities of $\mu$COWS.

**Multiple start activities.** Services could be able to receive multiple messages in a statically unpredictable order, in such a way that the first incoming message triggers the creation of a service instance to which subsequent messages are routed. This would require all those receive activities that can be immediately executed (according to [15], Section 16.3, these are *multiple start activities*) to share a non-empty set of variables (the so-called *correlation set*). Consider, for example, a variant of the bank service that deals with joint accounts. Now, to charge a credit card associated to a joint account, the service requires each co-holder of the account to send a charge request, thus making sure that the transaction is authorized by all co-holders. An excerpt of such a service running in parallel with two co-holder clients, willing to charge their card 1234 with 100 euros, is as follows:

\[
\begin{array}{l}
(p_{bank} \cdot o_{charge1}!\langle p_{cA}, 1234, 100, t_A\rangle \;\mid\; s_A) \\
\mid\; (p_{bank} \cdot o_{charge2}!\langle p_{cB}, 1234, 100, t_B\rangle \;\mid\; s_B) \\
\mid\; *\,[x_{cust1}, x_{cust2}, x_{cc}, x_{amount}, x_{ts1}, x_{ts2}]\,(\, p_{bank} \cdot o_{charge1}?\langle x_{cust1}, x_{cc}, x_{amount}, x_{ts1}\rangle.\,s_1 \\
\qquad \mid\; p_{bank} \cdot o_{charge2}?\langle x_{cust2}, x_{cc}, x_{amount}, x_{ts2}\rangle.\,s_2\,)
\end{array}
\]

After an interaction with client $B$, an instance running in parallel with the service definition is created:

\[
\begin{array}{l}
(p_{bank} \cdot o_{charge1}!\langle p_{cA}, 1234, 100, t_A\rangle \;\mid\; s_A) \;\mid\; s_B \\
\mid\; *\,[x_{cust1}, x_{cust2}, x_{cc}, x_{amount}, x_{ts1}, x_{ts2}]\,(\, p_{bank} \cdot o_{charge1}?\langle x_{cust1}, x_{cc}, x_{amount}, x_{ts1}\rangle.\,s_1 \\
\qquad \mid\; p_{bank} \cdot o_{charge2}?\langle x_{cust2}, x_{cc}, x_{amount}, x_{ts2}\rangle.\,s_2\,) \\
\mid\; [x_{cust1}, x_{ts1}]\,(\, p_{bank} \cdot o_{charge1}?\langle x_{cust1}, 1234, 100, x_{ts1}\rangle.\,s_1 \mid s_2\,) \cdot \sigma
\end{array}
\]

where $\sigma$ is $\{x_{cust2} \mapsto p_{cB},\, x_{cc} \mapsto 1234,\, x_{amount} \mapsto 100,\, x_{ts2} \mapsto t_B\}$. Now, the service definition and the created instance, being both able to receive the tuple $\langle p_{cA}, 1234, 100, t_A\rangle$ along the endpoint $p_{bank} \cdot o_{charge1}$, compete for the request $p_{bank} \cdot o_{charge1}!\langle p_{cA}, 1234, 100, t_A\rangle$; i.e., in WS-BPEL jargon, two *conflicting* receive activities are enabled. However, $\mu$COWS's (prioritized) semantics, in particular rule (*com$_2$*) in combination with rule (*par$_{com}$*), allows only the existing instance to evolve. Indeed, suppose we try to infer the transition corresponding to the interaction between client $A$ and the service definition. The generated substitution would have length 4 and hence, letting $s_{inst}$ be the term representing the created instance, the premise $\text{noConf}(s_{inst}, p_{bank} \cdot o_{charge1}, \langle p_{cA}, 1234, 100, t_A\rangle, 4)$ would not hold. In fact, the instance can perform a receive matching the same message and producing a substitution with fewer pairs (it has length 2). This way, the creation of a new instance is prevented and the only feasible computation leads to the following term:

\[
\begin{array}{l}
s_A \;\mid\; s_B \\
\mid\; *\,[x_{cust1}, x_{cust2}, x_{cc}, x_{amount}, x_{ts1}, x_{ts2}]\,(\, p_{bank} \cdot o_{charge1}?\langle x_{cust1}, x_{cc}, x_{amount}, x_{ts1}\rangle.\,s_1 \\
\qquad \mid\; p_{bank} \cdot o_{charge2}?\langle x_{cust2}, x_{cc}, x_{amount}, x_{ts2}\rangle.\,s_2\,) \\
\mid\; (s_1 \mid s_2) \cdot \sigma \cdot \sigma'
\end{array}
\]

where $\sigma'$ is $\{x_{cust1} \mapsto p_{cA},\, x_{ts1} \mapsto t_A\}$. It is worth noticing that the above considerations still hold if we use choice rather than parallel composition to compose the start activities of the bank service, as shown below:

\[
\begin{array}{l}
*\,[x_{cust1}, x_{cust2}, x_{cc}, x_{amount}, x_{ts1}, x_{ts2}]\, \\
\quad (\, p_{bank} \cdot o_{charge1}?\langle x_{cust1}, x_{cc}, x_{amount}, x_{ts1}\rangle.\; p_{bank} \cdot o_{charge2}?\langle x_{cust2}, x_{cc}, x_{amount}, x_{ts2}\rangle.\,\ldots \\
\quad +\; p_{bank} \cdot o_{charge2}?\langle x_{cust2}, x_{cc}, x_{amount}, x_{ts2}\rangle.\; p_{bank} \cdot o_{charge1}?\langle x_{cust1}, x_{cc}, x_{amount}, x_{ts1}\rangle.\,\ldots\,)
\end{array}
\]

**noConf predicate.** Rules (*com$_2$*) and (*par$_{com}$*) use the predicate $\text{noConf}(\_, n, \bar{v}, \ell)$ for checking the presence of concurrent conflicting receives. When these rules must be used to infer a transition, a preventive $\alpha$-conversion may be necessary.
Indeed, the condition $\text{noConf}(n?\bar{w}.s, n, \bar{v}, \ell)$ might single out patterns that could not really match the transmitted values. These 'false alarms' would block the inference (but allow us to stay on the 'safe' side). For instance, consider the following term:
$$
n!\langle m \rangle \mid [x]\, n?\langle x \rangle \mid [m]\, n?\langle m \rangle \tag{1}
$$
Apparently, both receive activities match the invoke activity, but only $n?\langle x \rangle$ can synchronise with $n!\langle m \rangle$, because the argument of $n?\langle m \rangle$ is a restricted name, thus it is certainly different from the name transmitted by the invoke. However, if we try to naively infer the transition corresponding to the synchronisation between $n!\langle m \rangle$ and $n?\langle x \rangle$, we fail due to rules $(com_2)$ or $(par_{com})$. In fact, $\text{noConf}([m]\, n?\langle m \rangle, n, \langle m \rangle, 1)$ does not hold, because $\mathcal{M}(m, m)$ produces the substitution $\emptyset$, which is smaller than the substitution $\{x \mapsto m\}$ produced by $\mathcal{M}(x, m)$. However, the wanted transition can be inferred by first applying $\alpha$-conversion. In fact, (1) can be re-written as follows:
$$
n!\langle m \rangle \mid [x]\, n?\langle x \rangle \mid [m']\, n?\langle m' \rangle
$$
Now, it is clear that $n?\langle m' \rangle$ is not a conflicting receive, because $\mathcal{M}(m', m)$ is undefined. The same observations hold for the term:
$$
[m]\, (n!\langle m \rangle \mid [x]\, n?\langle x \rangle) \mid n?\langle m \rangle
$$
Again, $\alpha$-conversion is necessary for inferring the correct transitions. Instead, if in (1) we replace the delimitation of $m$ with that of $n$, the correct transition can be directly inferred, because $\text{noConf}([n]\, n?\langle m \rangle, n, \langle m \rangle, 1)$ holds true.

**Default behaviour.** The previous examples show that $\mu$COWS's priority mechanism can be used for orchestration purposes, i.e. to properly coordinate interactions among services. However, this priority mechanism can also be exploited to coordinate activities (i.e. to manage their interdependencies) within the same service. For example, in the variant of the service $CreditRating$ reported below
\[
[x_{ts}, x_{cc}, x_{a}] \, (\, p_{bank} \cdot o_{check}? \langle x_{ts}, 4321, x_{a} \rangle . \, p_{bank} \cdot o_{checkFail}! \langle x_{ts}, 4321, ratingInfo(4321, x_{a}) \rangle
\]
\[
+ \; p_{bank} \cdot o_{check}? \langle x_{ts}, 5432, x_{a} \rangle . \, p_{bank} \cdot o_{checkFail}! \langle x_{ts}, 5432, ratingInfo(5432, x_{a}) \rangle
\]
\[
+ \; p_{bank} \cdot o_{check}? \langle x_{ts}, 6543, x_{a} \rangle . \, p_{bank} \cdot o_{checkFail}! \langle x_{ts}, 6543, ratingInfo(6543, x_{a}) \rangle
\]
\[
+ \; p_{bank} \cdot o_{check}? \langle x_{ts}, x_{cc}, x_{a} \rangle .
\]
\[
\quad [p, o] \, (\, p \cdot o! \langle \rangle \mid p \cdot o? \langle \rangle . \, p_{bank} \cdot o_{checkOK}! \langle x_{ts}, x_{cc}, ratingInfo(x_{cc}, x_{a}) \rangle
\]
\[
\quad + \; p \cdot o? \langle \rangle . \, p_{bank} \cdot o_{checkFail}! \langle x_{ts}, x_{cc}, ratingInfo(x_{cc}, x_{a}) \rangle \,) \,)
\]
the priority mechanism enables implementing a sort of 'default' behaviour. Indeed, when the service is invoked along the endpoint $p_{bank} \cdot o_{check}$ with a black-listed credit card number (e.g. the numbers 4321, 5432, 6543), a negative response is returned; instead, if the credit card number is not in the black list, the service by default behaves in a non-deterministic way. For example, if $CreditRating$ is invoked by $p_{bank} \cdot o_{check}! \langle t, 4321, 100 \rangle$, although the invocation and the receive $p_{bank} \cdot o_{check}? \langle x_{ts}, x_{cc}, x_{a} \rangle$ do match, the priority mechanism ensures that the service replies with $p_{bank} \cdot o_{checkFail}! \langle t, 4321, ratingInfo(4321, 100) \rangle$.

3.3. COWS: the time-free fragment of C$\odot$WS

COWS, which is basically the untimed fragment of C$\odot$WS, is obtained by enriching $\mu$COWS with two primitives for expressing transactional behaviours of services and scenarios with fault and compensation handling.

3.3.1. Syntax

The syntax of COWS is given in Table 8. Besides the sets of values and variables, we also use a countable set of (killer) labels (ranged over by $k, k', \ldots$). The syntax of services is extended with the kill activity $\text{kill}(\cdot)$ and the protection operator $\{\cdot\}$, while the delimitation $[\cdot]$ now also accepts killer labels as first argument (the new constructs are highlighted in Table 8 by a gray background). The kill activity forces the immediate termination of concurrent activities that are not enclosed within the protection operator. The delimitation of a killer label is then used to confine the killing effect. Notably, expressions do not include killer labels, which are hence non-communicable values. This way, the scope of killer labels cannot be dynamically extended, and the activities whose termination would be forced by execution of a kill can be statically determined. We still use $w$ to range over values and variables and $u$ to range over names and variables, while we use $e$ to range over elements, namely killer labels, names and variables. Delimitation is now a binder also for killer labels. $fe(t)$ denotes the set of free elements in $t$, and $fk(t)$ denotes the set of free killer labels in $t$. A closed service is a COWS term without free variables and killer labels.

3.3.2. Operational semantics

The structural congruence $\equiv$ for COWS, besides the laws in Table 2, additionally includes the laws in Table 9. Notably, the last law of Table 9 prevents extending the scope of a killer label $k$ when it is free in $s_1$ or $s_2$ (this avoids involving $s_1$ in the effect of a kill activity inside $s_2$ and is essential to statically determine which activities can be terminated by a kill). Thus, this law can be used to garbage-collect killer labels, e.g. $[k]\, n!\bar{\epsilon} \equiv [k]\,(n!\bar{\epsilon} \mid 0) \equiv n!\bar{\epsilon} \mid [k]\, 0 \equiv n!\bar{\epsilon} \mid 0 \equiv n!\bar{\epsilon}$.
| **Killer labels:** | $k, k', \ldots$ |
| **Expressions:** | $\epsilon, \epsilon', \ldots$ |
| **Variables:** | $x, y, \ldots$ |
| **Values:** | $v, v', \ldots$ |
| **Names:** | $n, m, \ldots$ |
| **Partners:** | $p, p', \ldots$ |
| **Operations:** | $o, o', \ldots$ |
| **Elements (killer labels/variables/names):** | $e, e', \ldots$ |
| **Variables/Names:** | $u, u', \ldots$ |
| **Variables/Values:** | $w, w', \ldots$ |
| **Endpoints:** | without variables: $p \cdot o, n, \ldots$ |
| | may contain variables: $u \cdot u', u, \ldots$ |
| **Services:** | $s ::= \text{kill}(k) \;\mid\; u \cdot u'!\bar{\epsilon} \;\mid\; g \;\mid\; s \mid s \;\mid\; \{s\} \;\mid\; [e]\, s \;\mid\; *\, s$ |
| **Receive-guarded choice:** | $g ::= 0 \;\mid\; p \cdot o?\bar{w}.s \;\mid\; g + g$ |

Table 8: COWS syntax

| $\{0\} \equiv 0$ | $[k]\,0 \equiv 0$ |
| $\{\{s\}\} \equiv \{s\}$ | $[e_1][e_2]\,s \equiv [e_2][e_1]\,s$ |
| $\{[e]\,s\} \equiv [e]\,\{s\}$ | $s_1 \mid [k]\,s_2 \equiv [k]\,(s_1 \mid s_2)$ if $k \notin fk(s_1) \cup fk(s_2)$ |

Table 9: COWS structural congruence (additional laws)

To define the labelled transition relation, we need two new auxiliary functions. The function $halt(\cdot)$ takes a service $s$ as an argument and returns the service obtained by only retaining the protected activities inside $s$. $halt(\cdot)$ is defined inductively on the syntax of services. The most significant case is $halt(\{s\}) = \{s\}$. In the other cases, $halt(\cdot)$ returns $0$, except for the parallel composition, delimitation and replication operators, for which it acts as an homomorphism:
$$halt(\text{kill}(k)) = halt(u \cdot u'!\bar{\epsilon}) = halt(g) = 0 \qquad halt(\{s\}) = \{s\}$$
$$halt(s_1 \mid s_2) = halt(s_1) \mid halt(s_2) \qquad halt([e]\,s) = [e]\, halt(s) \qquad halt(*\,s) = *\, halt(s)$$
Then, in Table 10, we inductively define the predicate $\text{noKill}(s, e)$, which holds true if either $e$ is not a killer label, or $e = k$ and $s$ cannot immediately perform a free kill activity $\text{kill}(k)$. Moreover, the predicate $\text{noConf}(s, n, \bar{v}, \ell)$, defined for $\mu$COWS by the rules in Table 7, is extended to COWS by adding the following rules:
$$\text{noConf}(\text{kill}(k), n, \bar{v}, \ell) = \text{true}$$
$$\text{noConf}([e]\,s, n, \bar{v}, \ell) = \begin{cases} \text{noConf}(s, n, \bar{v}, \ell) & \text{if } e \notin n \\ \text{true} & \text{otherwise} \end{cases}$$
The labelled transition relation $\xrightarrow{\alpha}$ is the least relation over services induced by the rules in Tables 4, 6 and 11, where $(com_2)$, $(del_2)$ and $(del_{com_2})$ replace $(com)$, $(del)$ and $(del_{com})$, respectively, and $(par_3)$ replaces rules $(par)$ and $(par_2)$.

\begin{table}[h]
\centering
\begin{tabular}{ll}
$\text{noKill}(s, e) = \textbf{true}$ \quad if $fk(e) = \emptyset$ & $\text{noKill}(\text{kill}(k'), k) = \textbf{true}$ \quad if $k \neq k'$ \\
$\text{noKill}(\text{kill}(k), k) = \textbf{false}$ & $\text{noKill}(u \cdot u'!\bar{\epsilon}, k) = \text{noKill}(g, k) = \textbf{true}$ \\
$\text{noKill}(s \mid s', k) = \text{noKill}(s, k) \land \text{noKill}(s', k)$ & $\text{noKill}(\{s\}, k) = \text{noKill}(*\,s, k) = \text{noKill}(s, k)$ \\
$\text{noKill}([e]\,s, k) = \text{noKill}(s, k)$ \quad if $e \neq k$ & $\text{noKill}([k]\,s, k) = \textbf{true}$ \\
\end{tabular}
\caption{There are no active $\text{kill}(k)$}
\end{table}

\begin{table}[h]
\centering
$\text{kill}(k) \xrightarrow{k} 0 \;\;(kill)$ \qquad together with rules $(prot)$, $(par_3)$, $(par_{kill})$, $(del_{kill_1})$, $(del_{kill_2})$, $(del_{kill_3})$ and $(del_2)$, commented on below
\caption{COWS operational semantics (additional rules)}
\end{table}
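For illustration, here is a small worked derivation of ours (not part of the original presentation), with $s'$ a generic unprotected service such that $halt(s') = 0$, showing how $halt(\cdot)$ retains only the protected activities of a term:
$$
halt\big([k]\,(\{s\} \mid n!\bar{\epsilon} \mid s')\big) \;=\; [k]\,\big(halt(\{s\}) \mid halt(n!\bar{\epsilon}) \mid halt(s')\big) \;=\; [k]\,(\{s\} \mid 0 \mid 0) \;\equiv\; [k]\,\{s\}
$$
The delimitation and the parallel structure are preserved, the invoke and the unprotected $s'$ are discarded, and the protected $\{s\}$ survives untouched.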
Labels are now generated by the following grammar:
\[
\alpha ::= n \triangleleft \bar{v} \;\mid\; n \triangleright \bar{w} \;\mid\; n\,\sigma\,\ell\,\bar{v} \;\mid\; k \;\mid\; \dagger
\]
The meaning of the new labels is as follows: $k$ denotes execution of a request for terminating a term from within the delimitation $[k]$, and $\dagger$ denotes a computational step corresponding to the taking place of a forced termination. In the sequel, we use $e(\alpha)$ to denote the set of elements occurring in $\alpha$ (it is defined similarly to $u(\alpha)$, Section 3.1.2, page 12, and Section 3.2.1, page 19).

Let us now comment on the added rules. Activity $\text{kill}(k)$ forces termination of all unprotected parallel activities (rules $(kill)$ and $(par_{kill})$) inside an enclosing $[k]$, which stops the killing effect by turning the transition label $k$ into $\dagger$ (rule $(del_{kill_1})$). Such a delimitation, whose existence is ensured by the assumption that the semantics is only defined for closed services, prevents a single service from being able to stop all the other parallel services, which would be unreasonable in a service-oriented setting (as services are loosely coupled and organized in different administrative domains). Critical activities can be protected from killing by putting them into a protection $\{s\}$; this way, $\{s\}$ behaves like $s$ (rule $(prot)$). Similarly, $[e]\,s$ behaves like $s$ (rule $(del_2)$), except when the transition label $\alpha$ contains $e$, in which case $\alpha$ must correspond either to a communication assigning a value to $e$ (rule $(del_{com_2})$) or to a kill activity for $e$ (rule $(del_{kill_1})$), or when a free kill activity for $e$ is active in $s$, in which case only actions corresponding to kill activities can be executed (rules $(del_{kill_2})$ and $(del_{kill_3})$). This means that kill activities are executed *eagerly* with respect to the activities enclosed within the delimitation of the corresponding killer label. Execution of parallel services is interleaved (rule $(par_3)$), except when a kill activity or a communication is performed. Indeed, the former must trigger termination of all parallel services (according to rule $(par_{kill})$), while the latter must ensure that the receive activity with greater priority progresses (rules $(com_2)$ and $(par_{com})$).

### 3.3.3. Examples

We present here some examples aimed at clarifying the peculiar features of COWS. We will show in Section 4 how the COWS activities dealing with termination, i.e. kill and protection, can be used for implementing fault and compensation handling.

**Protected kill activity.** The following simple example illustrates the effect of executing a kill activity in the presence of a protection block:
\[
[k]\,(s_1 \mid \{s_2\} \mid \text{kill}(k) \mid s_3) \mid s_4 \;\xrightarrow{\dagger}\; [k]\,\{s_2\} \mid s_4
\]
where, for simplicity, we assume that $halt(s_1) = halt(s_3) = 0$. In essence, $\text{kill}(k)$ terminates all parallel services inside the delimitation $[k]$ (i.e. $s_1$ and $s_3$), except those that are protected at the same nesting level as the kill activity (i.e. $s_2$).
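The transition above is inferred, roughly, as follows (a sketch of ours that elides side conditions): rule $(kill)$ derives $\text{kill}(k) \xrightarrow{k} 0$; rule $(par_{kill})$ then halts the sibling services,
$$
s_1 \mid \{s_2\} \mid \text{kill}(k) \mid s_3 \;\xrightarrow{k}\; halt(s_1) \mid \{s_2\} \mid 0 \mid halt(s_3) \;\equiv\; \{s_2\}
$$
since $halt(s_1) = halt(s_3) = 0$; finally, rule $(del_{kill_1})$ turns the label $k$ into $\dagger$ at the enclosing delimitation $[k]$, and rule $(par_3)$ lifts the resulting $\dagger$-labelled transition through the parallel composition with $s_4$.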
**Interplay between communication and kill activity.** Kill activities can break communication, as the following example shows:
\[
n!\langle v\rangle \mid [k]\,([x]\, n?\langle x\rangle.s \mid \text{kill}(k)) \;\xrightarrow{\dagger}\; n!\langle v\rangle \mid [k]\,[x]\,0
\]
In fact, due to the priority of the kill activity over communication, this is the only feasible computational step of the above term. Communication can however be guaranteed by protecting the receive activity, as follows:
\[
n!\langle v\rangle \mid [k]\,([x]\,\{n?\langle x\rangle.s\} \mid \text{kill}(k)) \;\xrightarrow{\dagger}\; n!\langle v\rangle \mid [k]\,[x]\,\{n?\langle x\rangle.s\} \;\equiv\; [x]\,(n!\langle v\rangle \mid [k]\,\{n?\langle x\rangle.s\}) \;\xrightarrow{n\,\emptyset\,1\,\langle v\rangle}\; [k]\,\{s \cdot \{x \mapsto v\}\}
\]
Notably, the priority of kill activities over communication acts only with respect to the activities enclosed within the delimitation of the corresponding killer labels (i.e. priority is *local* to killer label scopes). For instance, if we re-write the above example as follows:
\[
[y]\, n?\langle y\rangle.s' \mid n!\langle v\rangle \mid [k]\,([x]\, n?\langle x\rangle.s \mid \text{kill}(k))
\]
communication between $n!\langle v\rangle$ and $n?\langle x\rangle$ is still preempted by $\text{kill}(k)$, while communication with $n?\langle y\rangle$ can take place and lead to
\[
s' \cdot \{y \mapsto v\} \mid [k]\,([x]\, n?\langle x\rangle.s \mid \text{kill}(k))
\]

**Non-communicability of killer labels.** We require killer labels not to be communicable in order to avoid a service being able to indiscriminately stop the execution of other services' activities. However, when desired, this behaviour can be modelled in COWS. Consider, for example, the following term, where two parallel services share the private name $stop$:
\[
[stop]\,(s_1 \mid s_2) \mid s_3
\]
where $s_1 \triangleq [k]\,(n?\langle stop\rangle.\,\text{kill}(k) \mid s'_1)$ and $s_2 \triangleq n!\langle stop\rangle \mid s'_2$. In $s_1$, the activity $\text{kill}(k)$ is prefixed by the receive $n?\langle stop\rangle$, which does not allow forced termination to take place until the 'termination signal' $stop$ is received. In fact, if a communication between $s_1$ and $s_2$ takes place along the endpoint $n$, the term evolves to
$$[stop]\,(\,[k]\,(\text{kill}(k) \mid s'_1) \mid s'_2\,) \mid s_3$$
Now, due to the priority of the kill activity over communication, the term $[k]\,(\text{kill}(k) \mid s'_1)$ can only perform a kill activity and evolve, e.g., to $[k]\, halt(s'_1)$.

### 3.4. C$\odot$WS

The full calculus, C$\odot$WS, is obtained by enriching COWS with an analogue of WS-BPEL's *wait* activity [15, Section 10.7], which causes the execution of the invoking service to be suspended until the time interval specified as an argument has elapsed\(^3\). The extension of COWS with specific activities dealing with time is motivated by the fact that it is still unknown to what extent timed computation can be reduced to untimed forms of computation [41].

#### 3.4.1. Syntax

We assume that the set of values now includes a set of positive numbers (ranged over by $\delta$, $\delta'$, \ldots), used to represent *time intervals*. The syntax of COWS is extended as follows (the new construct is highlighted by a gray background):
$$g ::= 0 \;\mid\; p \cdot o?\bar{w}.s \;\mid\; g + g \;\mid\; \odot_{\epsilon}\,.\,s$$
Basically, guards are extended with the *wait activity* $\odot_{\epsilon}$, which specifies the time interval, whose value is given by the evaluation of $\epsilon$, that the executing service has to wait for. Consequently, the choice construct can now be guarded both by message reception and by timeout expiration, like the WS-BPEL *pick* activity [15, Section 11.5].
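For instance, with the extended grammar a guard can now combine a receive and a timeout (a schematic example of ours; $\delta$ is a positive number):
$$
[x]\,\big(\, p \cdot o?\langle x \rangle . s_1 \;+\; \odot_{\delta}\,.\,s_2 \,\big)
$$
Such a service executes $s_1$ if a message arrives on $p \cdot o$ within $\delta$ time units, and falls back to $s_2$ once the timeout expires; Section 3.4.3 develops this pattern into a full WS-BPEL-style pick activity.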
We assume that evaluation of expressions and execution of basic activities, except for $\odot_{\epsilon}$, are instantaneous (i.e. they do not consume time units) and that time elapses between them.

#### 3.4.2. Operational semantics

The operational semantics of C$\odot$WS is defined in terms of the labelled transition relation $\xrightarrow{\hat{\alpha}}$, where $\hat{\alpha}$ stands for $\alpha$ or $\delta$ (which models time elapsing), obtained by adding the rules shown in Table 12 to those defining the semantics of COWS (see Section 3.3.2 and Tables 4, 6 and 11).

Let us briefly comment on the new rules. Time can elapse while waiting on receive/invoke activities, rules $(rec_{elaps})$ and $(inv_{elaps})$. When time elapses but the timeout has not yet expired, the argument of wait activities is updated (rule $(wait_{elaps})$). Time elapsing cannot make a choice within a choice activity (rule $(choice_2)$), while the occurrence of a timeout can. Indeed, this is signalled by label $\dagger$, thus it is a computational step, generated by rule $(wait_{now})$ and used by rule $(choice)$ (in Table 4) to discard the alternative branches. Time elapses synchronously for all services running in parallel: this is modelled by rule $(par_{sync})$ and by the remaining rules for the empty activity (rule $(nil_{elaps})$), replication (rule $(rep_{elaps})$), wait activity (rule $(wait_{err})$), protection (rule $(prot_{elaps})$) and delimitation (rule $(scope_{elaps})$). In particular, rule $(wait_{err})$ enables time passing for the wait activity also when the expression $\epsilon$ used as an argument does not return a positive number; in this case the argument of the wait is left unchanged. Note that, in agreement with its eager semantics, the kill activity does not allow time to pass.

In C$\odot$WS, computational steps also include transitions labelled by $\delta$, corresponding to time elapsing. Since time elapses synchronously for all services in parallel, we can think of all services as running on the same service engine and sharing the same clock. By further extending the language syntax, as shown in [21], we can make explicit the notions of service engine and of deployment of services on engines. This way, we can model time so that it progresses synchronously for services located within the same engine and asynchronously among services deployed onto different engines.

\(^3\)For the sake of simplicity, we do not consider here the 'until' variant of the wait activity, which causes suspension of the invoking service until the *absolute* time reaches the value specified as an argument, and refer the interested reader to [21] for an account of this variant.

### 3.4.3. Examples

We end this section with some examples of application of the timed constructs provided by the full language C$\odot$WS. Such constructs are also exploited in Section 5.2 to model a variant of the automotive case study presented in Section 2.2.

**WS-BPEL pick activity.** Consider again the bank service scenario used in previous examples in Sections 3.1.3 and 3.2.2, where now clients, after having sent requests for charging their credit cards, wait for a response for a given amount of time.
By using the wait activity and the choice operator, we can define in C$\odot$WS a client service implementing a pick activity *à la* WS-BPEL as follows:
$$p_{bank} \cdot o_{charge}! \langle p_{cA}, 1234, 100, t_A \rangle \mid [x, x_i]\,(\, p_{cA} \cdot o_{resp}? \langle x, t_A, x_i \rangle . s_A \;+\; \odot_{15}\,.\,s_{chargeTimeoutExpired}\,)$$
If the Bank service does not reply within the given amount of time units (e.g. 15 minutes), the client service will discard the activities $p_{cA} \cdot o_{resp}? \langle x, t_A, x_i \rangle . s_A$ (hence, the activity $s_A$ dealing with the Bank response will never be carried out) and execute the activity $s_{chargeTimeoutExpired}$ handling the non-response event. This latter activity can, e.g., ask the driver to provide the data of another credit card, or simply show an error message inviting the driver to contact the assistance services by herself/himself; in any case, $s_{chargeTimeoutExpired}$ may or may not contact the Bank service to inform it that the response to the sent request is no longer awaited. Of course, if a response from the Bank is received before the timeout expiration, the timeout is disabled and $s_A$ is executed.

**Time-bound search.** Consider a registry service storing information about on road services and providing searching functionalities to its clients (see, e.g., Section 5.2). A search that continues to query the data stored in the registry until a given timeout expires can be rendered in C$\odot$WS as a term of the following form:
\[
[k] \left( s_{search} \mid \odot_{\delta}\,.\,(\text{kill}(k) \mid \{ s_{searchComplete} \}) \right)
\]
where $s_{search}$ performs the search and $s_{searchComplete}$ sends the search result to the client. After $\delta$ time units, the search is stopped by means of the kill activity and then the result is communicated.

### 4. The automotive case study: specification and analysis in COWS

We present in this section the most relevant parts of the COWS specification of the automotive case study introduced in Section 2.2 (the complete specification is reported in [42]) and provide a brief description of a few properties that it satisfies. Notably, to specify the case study we use COWS rather than C$\odot$WS, because the verification methods and tools currently available only apply to the former language. We further refine the case study and its specification, in order to illustrate an application of the C$\odot$WS constructs for managing time and constraints, later on in Section 5.2.

The COWS term modelling the overall scenario is:
\[
[p_{car}] \left( SensorsMonitor \mid GpsSystem \mid Discovery \mid Reasoner \mid Orchestrator \right)
\mid Bank \mid OnRoadRepairServices
\]
All services of the in-vehicle platform share a private partner name $p_{car}$, which is used for intra-vehicle communication and is passed to external services (e.g. the bank service) for receiving data from them. When an engine failure occurs, a signal (raised by *SensorsMonitor*) triggers the execution of the *Orchestrator* and activates the corresponding 'recovery' service.
*Orchestrator*, the most important component of the in-vehicle platform, is
\[
[x_{carData}, x_{ts}] \left( p_{car} \cdot o_{engineFailure}? \langle x_{ts}, x_{carData} \rangle . s_{engfail} + p_{car} \cdot o_{lowOilFailure}? \langle x_{ts}, x_{carData} \rangle . s_{lowoil} + \ldots \right)
\]
This term uses the choice operator $+$ to pick one of those alternative recovery behaviours whose execution can start immediately. Notice that, while executing a recovery behaviour, *Orchestrator* does not accept other recovery requests. We are also assuming, for the sake of simplicity, that it is reinstalled at the end of the recovery task. The recovery behaviour $s_{engfail}$ executed when an engine failure occurs is
\[
[p_{end}, o_{end}, x_{info}, x_{loc}, x_{list}, o_{undo}]
\left( [k] \left( CardCharge \mid FindServices \right) \mid p_{end} \cdot o_{end}?\langle\rangle . \, p_{end} \cdot o_{end}?\langle\rangle . \, ChooseAndOrder \right)
\]
$p_{end} \cdot o_{end}$ is a scoped endpoint along which successful termination signals (i.e. communications that carry no data) are exchanged to orchestrate the execution of the different components. *CardCharge* corresponds to the homonymous UML action of Figure 2, while *FindServices* corresponds to the sequential composition of the UML actions *RequestLocation* and *FindServices*. The two terms are defined as follows:
\[
CardCharge \triangleq p_{bank} \cdot o_{charge}! \langle p_{car}, ccNum, amount, x_{ts} \rangle \\
\quad \mid \{\, p_{car} \cdot o_{resp}? \langle fail, x_{ts}, x_{info} \rangle . \, \text{kill}(k) \\
\quad\;\; + \; p_{car} \cdot o_{resp}? \langle ok, x_{ts}, x_{info} \rangle . \,(\, p_{end} \cdot o_{end}!\langle\rangle \\
\quad\quad \mid p_{car} \cdot o_{undo}? \langle cc \rangle . \, p_{car} \cdot o_{undo}? \langle cc \rangle . \, p_{bank} \cdot o_{revoke}! \langle x_{ts}, ccNum \rangle \,)\,\}
\]
\[
FindServices \triangleq p_{car} \cdot o_{reqLoc}!\langle\rangle \\
\quad \mid p_{car} \cdot o_{respLoc}? \langle x_{loc} \rangle . \,(\, p_{car} \cdot o_{findServ}! \langle x_{loc}, servicesType \rangle \\
\quad\quad \mid p_{car} \cdot o_{found}? \langle x_{list} \rangle . \, p_{end} \cdot o_{end}!\langle\rangle \\
\quad\quad + \; p_{car} \cdot o_{noFound}?\langle\rangle . \,(\, \{\, p_{car} \cdot o_{undo}! \langle cc \rangle \mid p_{car} \cdot o_{undo}! \langle cc \rangle \,\} \mid \text{kill}(k) \,)\,)
\]
Therefore, the recovery service concurrently contacts the service Bank, to charge the car owner's credit card with a security amount, and the services GpsSystem and Discovery, to get the car's location (stored in $x_{loc}$) and a list of on road services (stored in $x_{list}$). When both activities terminate (the fresh endpoint $p_{end} \cdot o_{end}$ is used to appropriately synchronise their successful terminations), the recovery service forwards the obtained list to the service Reasoner, which will choose the most convenient services (see the definition of ChooseAndOrder). Whenever the search for services fails, FindServices terminates the whole recovery behaviour (by means of the kill activity $\text{kill}(k)$) and sends two signals $cc$ (abbreviation of 'card charge') along the endpoint $p_{car} \cdot o_{undo}$.
Similarly, if charging the credit card fails, then CardCharge terminates the whole recovery behaviour. Otherwise, it installs a compensation handler that takes care of revoking the credit card charge. Activation of this compensation activity requires two signals $cc$ along $p_{car} \cdot o_{undo}$ and, thus, takes place either whenever FindServices fails or, as we will see soon, whenever both the garage and the car rental orders fail.

ChooseAndOrder tries to order the selected services by contacting a car rental and, concurrently, a garage and a tow truck. It is defined as follows:
\[
[x_{gps}]\,(\, p_{car} \cdot o_{choose}! \langle x_{list} \rangle \\
\quad \mid [x_{garage}, x_{towTruck}, x_{rentalCar}]\, p_{car} \cdot o_{chosen}? \langle x_{garage}, x_{towTruck}, x_{rentalCar} \rangle . \\
\quad (\, OrderGarageAndTowTruck \mid RentCar \,)\,)
\]
\[
OrderGarageAndTowTruck \triangleq [x_{garageInfo}] \\
\quad (\, x_{garage} \cdot o_{orderGar}! \langle p_{car}, x_{carData} \rangle \\
\quad \mid p_{car} \cdot o_{garageFail}?\langle\rangle . \\
\quad (\, p_{car} \cdot o_{undo}! \langle cc \rangle \mid [p, o]\,(\, p \cdot o! \langle x_{loc} \rangle \mid p \cdot o? \langle x_{gps} \rangle \,)\,) \\
\quad + \; p_{car} \cdot o_{garageOK}? \langle x_{gps}, x_{garageInfo} \rangle . \\
\quad (\, OrderTowTruck \\
\quad \mid p_{car} \cdot o_{undo}? \langle gar \rangle . \\
\quad (\, x_{garage} \cdot o_{cancel}! \langle p_{car} \rangle \\
\quad \mid p_{car} \cdot o_{undo}! \langle cc \rangle \mid p_{car} \cdot o_{undo}! \langle rc \rangle \,)\,)\,)
\]
\[
OrderTowTruck \triangleq [x_{towInfo}] \\
\quad (\, x_{towTruck} \cdot o_{orderTow}! \langle p_{car}, x_{loc}, x_{gps} \rangle \\
\quad \mid p_{car} \cdot o_{towTruckFail}?\langle\rangle . \, p_{car} \cdot o_{undo}! \langle gar \rangle \\
\quad + \; p_{car} \cdot o_{towTruckOK}? \langle x_{towInfo} \rangle \,)
\]
\[
RentCar \triangleq [x_{rcInfo}] \\
\quad (\, x_{rentalCar} \cdot o_{orderRC}! \langle p_{car}, x_{gps} \rangle \\
\quad \mid p_{car} \cdot o_{rentalCarFail}?\langle\rangle . \, p_{car} \cdot o_{undo}! \langle cc \rangle \\
\quad + \; p_{car} \cdot o_{rentalCarOK}? \langle x_{rcInfo} \rangle . \, p_{car} \cdot o_{undo}? \langle rc \rangle . \, x_{rentalCar} \cdot o_{redirect}! \langle p_{car}, x_{loc} \rangle \,)
\]
If ordering a garage fails, the compensation of the credit card charge is invoked by sending a signal $cc$ along the endpoint $p_{car} \cdot o_{undo}$, and the car's location (stored in $x_{loc}$) is assigned to the variable $x_{gps}$ (whose value will be passed to the rental car service). This assignment is rendered as a communication along the private endpoint $p \cdot o$. Otherwise, the tow truck ordering starts and the garage's location is assigned to the variable $x_{gps}$. Moreover, a compensation handler is installed; it will be activated whenever the tow truck ordering fails and, in that case, attempts to cancel the garage order (by invoking operation $o_{cancel}$) and to compensate the credit card charge and the rental car order (by sending the signals $cc$ and $rc$ along $p_{car} \cdot o_{undo}$).
Renting a car proceeds concurrently and, in case of successful completion, the compensation handler for the redirection of the rented car is installed; otherwise, the compensation of the credit card charge is invoked.

For the sake of presentation, we relegate the specification of the remaining components of the in-vehicle platform, i.e. SensorsMonitor, GpsSystem, Discovery and Reasoner, to [42]. The COWS specification of the service Bank is given by the compound term introduced in Section 3.1.3 (paragraph "Message correlation" at page 16), where the subservice BankInterface is extended with compensation activities (highlighted below by a gray background) for revoking credit card charges:
\[
BankInterface \triangleq \\
[x_{cust}, x_{cc}, x_{amount}, x_{ts}] \\
\quad p_{bank} \cdot o_{charge}? \langle x_{cust}, x_{cc}, x_{amount}, x_{ts} \rangle . \\
\quad (\, p_{bank} \cdot o_{check}! \langle x_{ts}, x_{cc}, x_{amount} \rangle \\
\quad \mid [x_{info}]\,(\, p_{bank} \cdot o_{checkFail}? \langle x_{ts}, x_{cc}, x_{info} \rangle . \, x_{cust} \cdot o_{resp}! \langle fail, x_{ts}, x_{info} \rangle \\
\quad + \; p_{bank} \cdot o_{checkOK}? \langle x_{ts}, x_{cc}, x_{info} \rangle . \\
\quad [k']\,(\, x_{cust} \cdot o_{resp}! \langle ok, x_{ts}, x_{info} \rangle \mid p_{bank} \cdot o_{revoke}? \langle x_{ts}, x_{cc} \rangle . \, \text{kill}(k') \,)\,)\,)
\]
In case of a positive answer, the possibility of revoking the request through invocation of the operation $o_{revoke}$ is enabled (in fact, should the discovery phase or the ordering of the services fail, the customer charge operation should be cancelled in order to implement the wanted transactional behaviour). Revocation causes deletion of the reply to the client, if this has still to be performed. OnRoadRepairServices is actually a composition of various on road services, i.e. it is
\[
Garage_1 \mid Garage_2 \mid TowTruck_1 \mid TowTruck_2 \mid RentalCar_1 \mid RentalCar_2 \mid \ldots
\]
Such concurrent on road services are all modelled in a similar way, e.g.
\[
Garage_i \triangleq * \, [x_{cust}, x_{sensorsData}, o_{checkOK}, o_{checkFail}] \\
\quad p_{garage_i} \cdot o_{orderGar}? \langle x_{cust}, x_{sensorsData} \rangle . \\
\quad (\, p_{garage_i} \cdot o_{checkOK}!\langle\rangle \mid p_{garage_i} \cdot o_{checkFail}!\langle\rangle \\
\quad \mid p_{garage_i} \cdot o_{checkFail}?\langle\rangle . \, x_{cust} \cdot o_{garageFail}!\langle\rangle \\
\quad + \; p_{garage_i} \cdot o_{checkOK}?\langle\rangle . \\
\quad [k]\,(\, x_{cust} \cdot o_{garageOK}! \langle garageGPS_i, garageInfo_i \rangle \\
\quad \mid p_{garage_i} \cdot o_{cancel}? \langle x_{cust} \rangle . \, \text{kill}(k) \,)\,)
\]
For simplicity, success or failure of garage orders is modelled by means of a non-deterministic choice, by exploiting the internal operations $o_{checkOK}$ and $o_{checkFail}$.

To give a flavour of the kinds of analyses COWS specifications can be subject to, we end this section by illustrating some properties of the automotive case study that can be verified by using two of the techniques devised so far. The type system introduced in [17] uses types to express and enforce policies for regulating the exchange of data among services. Over the specification of the automotive scenario, this approach enables the verification of such confidentiality properties as, e.g., "information about the credit card and location of a driver in trouble cannot become available to unauthorized users" and "critical data sent by on-road services to the in-vehicle services, e.g. cost and quality of the service supplied, are not disclosed to competitors".
The logical verification methodology presented in [18] permits describing service properties by means of a branching-time temporal logic, specifically designed to express in a convenient way distinctive aspects of services, and verifying them over COWS specifications by exploiting an on-the-fly model checker. Over the specification of the automotive scenario, this methodology enables the specification and verification of such functional properties as, e.g., "once the service Orchestrator is requested, it always provides at least one response about the status of the garage/tow truck ordering and at least one response about the status of the car renting", "it will never happen that, after the driver's credit card has been charged and some service ordered, the credit card charge is revoked", and "after the garage has been booked, if the tow truck service is not available then the garage is revoked".

5. Service publication, discovery and negotiation with C$\odot$WS

In the previous sections, we showed that C$\odot$WS is particularly suitable for modelling different and typical aspects of SOC. We now present a dialect of C$\odot$WS (Section 5.1) equipped with mechanisms of concurrent constraint programming, which make it possible to model the phases of dynamic service publication, discovery and negotiation. This way, we obtain a linguistic formalism capable of modelling all the phases of the life cycle of SOC applications (as we show in Section 5.2).

5.1. A C$\odot$WS dialect for concurrent constraint programming

We describe here how we can define a dialect of C$\odot$WS that exploits the concurrent constraint programming paradigm to model Service Level Agreement (SLA) achievements. Technically, we take advantage of the fact that C$\odot$WS syntax and operational semantics are parametrically defined with respect to the set of values, the syntax of expressions that operate on values and, therefore, the definition of the pattern-matching function. We follow the approach put forward in cc-pi [43], a language that combines basic features of name-passing calculi with concurrent constraint programming [44]. Specifically, we show that constraints and operations on them can be smoothly incorporated in C$\odot$WS, and propose a disciplined way to model and manipulate multisets of constraints. This way, SLA requirements are expressed as constraints that can be dynamically generated and composed, and that can be used by the involved parties both for service publication and discovery (on the Web), and for the SLA negotiation process. Consistency of the set of constraints resulting from negotiation means that the agreement has been reached.

Intuitively, a constraint is a relation among a specified set of variables which gives some information on the set of possible values that these variables may assume. Such information is usually not complete, as a constraint may be satisfied by several assignments of values to the variables. For example, we can employ constraints such as
\[
cost \geq 350 \qquad
cost = bw \cdot 0.05 \qquad
z = 1 / (1 + |x - y|)
\]
In practice, we do not take a definite stand on which of the many kinds of constraints to use. From time to time, the appropriate kind of constraints to work with should be chosen depending on what one intends to model.
Formally, a constraint $c$ is represented as a function $c : (V \to D) \to \{\text{true}, \text{false}\}$, where $V$ is the set of constraint variables (which, as explained in the sequel, is included in the set of C$\odot$WS names), and $D$ is the domain of interpretation of $V$, i.e. the domain of values that the variables may assume. If we let $\eta : V \to D$ be an assignment of domain elements to variables, then a constraint is a function that, given an assignment $\eta$, returns a truth value indicating whether the constraint is satisfied by $\eta$. For instance, the assignment $\{cost \mapsto 500\}$ satisfies the first constraint above, while $\{cost \mapsto 500, bw \mapsto 8000\}$ does not satisfy the second constraint, which is, instead, satisfied by $\{cost \mapsto 400, bw \mapsto 8000\}$. An assignment that satisfies a constraint is called a solution.

The constraints we have presented are called *crisp* in the literature, because they can only be satisfied or violated. In fact, we can also use more general constraints called *soft constraints* [45]. These constraints, given an assignment for the variables, return an element of an arbitrary constraint semiring ($c$-semiring, [46]), namely a partially ordered set of 'preference' values equipped with two suitable operations for combination ($\times$) and comparison ($+$) of (tuples of) values and constraints. Formally, a $c$-semiring is an algebraic structure $(A, +, \times, \mathbf{0}, \mathbf{1})$ such that: $A$ is a set and $\mathbf{0}, \mathbf{1} \in A$; $+$ is a binary operation on $A$ that is commutative, associative and idempotent, for which $\mathbf{0}$ is the unit element and $\mathbf{1}$ is the absorbing element; $\times$ is a binary operation on $A$ that is commutative, associative and distributes over $+$, for which $\mathbf{1}$ is the unit element and $\mathbf{0}$ is the absorbing element. Operation $+$ induces a partial order $\leq$ on $A$ defined by $a \leq b$ iff $a + b = b$, which means that $a$ is more constrained than $b$. The minimal element is thus $\mathbf{0}$ and the maximal one is $\mathbf{1}$. For example, crisp constraints can be understood as soft constraints on the $c$-semiring $(\{\text{true}, \text{false}\}, \lor, \land, \text{false}, \text{true})$.

The C$\odot$WS dialect we work with in this section specializes expressions to also include *constraints*, ranged over by $c$, and *constraint multisets*, ranged over by $C$, which can be formed by using the following operators.

- *Consistency check*: the predicate $\text{isCons}(C)$ takes a constraint multiset $C$ and holds true if $C$ is consistent. Formally, $\text{isCons}(\{c_1, \ldots, c_n\})$ holds true if there exists an assignment $\eta$ such that $c_1 \eta \land \ldots \land c_n \eta \neq \text{false}$, i.e. if the combination of all constraints has at least one solution\(^4\). The predicate $\text{isCons}(\cdot)$ is defined for crisp constraints. However, we can generalize its definition to soft constraints by requiring that it is satisfied if there exists an assignment $\eta$ such that $c_1 \eta \times \ldots \times c_n \eta \neq \mathbf{0}$.

- *Entailment check*: the predicate $C \vdash c$ takes a constraint multiset $C$ and a constraint $c$ and holds true if $c$ is entailed by $C$.
Formally, $\{c_1, \ldots, c_n\} \vdash c$ holds true if for all assignments $\eta$ it holds that $c_1 \eta \land \ldots \land c_n \eta \leq_B c\, \eta$, where $\leq_B$ is the partial ordering over booleans (i.e. $b_1 \leq_B b_2$ iff $b_1 \lor b_2 = b_2$). Also this predicate can be generalized to soft constraints, by requiring that $\{c_1, \ldots, c_n\} \vdash c$ holds true if for all assignments $\eta$ it holds that $c_1 \eta \times \ldots \times c_n \eta \leq c\, \eta$.

\(^4\)We do not consider here the well-studied problem of solving a constraint system. Among the many techniques exploited to this aim, we mention dynamic programming [47, 48] and branch and bound search [49].

\begin{table}[h]
\centering
\begin{tabular}{cc}
$\mathcal{M}(\langle c, x \rangle, C) = \{x \mapsto C\}$ \quad if $\text{isCons}(C \uplus \{c\})$ &
$\mathcal{M}(\langle c^{\vdash}, x \rangle, C) = \{x \mapsto C\}$ \quad if $C \vdash c$ \\
\end{tabular}
\caption{Pattern-matching function (additional rules)}
\end{table}

- **Retraction**: the operation $C - c$ takes a constraint multiset $C$ and a constraint $c$ and returns the multiset $C \setminus \{c\}$ if $c \in C$; otherwise, it returns $C$.

- **Multiset union**: the binary operator $\uplus$ is the standard union operator between multisets.

Since constraints and constraint multisets are expressions, they need to be evaluated. The (expression) evaluation function $\llbracket \cdot \rrbracket$ acts on constraints and constraint multisets as the identity, except for constraints containing C$\odot$WS variables, for which the function is undefined. Therefore, evaluated constraints and constraint multisets are values that can be communicated by means of synchronization of invoke and receive activities, and can replace variables by means of application of substitutions to terms.

To efficiently implement the primitives of the concurrent constraint programming paradigm, we tailor the rules in Table 3 (Section 3.1), defining the pattern-matching function $\mathcal{M}(\_,\_)$, to deal with constraints and operations on them, by adding the rules in Table 13. We assume here that tuples can be arbitrarily nested. The original matching rules (reported in Table 3) are still valid and state that variables match any value (thus, e.g., $\mathcal{M}(x, C) = \{x \mapsto C\}$), two values match only if they are identical, and two tuples match if they have the same number of fields and the corresponding fields do match. The new rules allow a two-field tuple to match a single value in two specific cases: a tuple $\langle c, x \rangle$ and a multiset of constraints $C$ match if $C \uplus \{c\}$ is consistent, while a tuple $\langle c^{\vdash}, x \rangle$ and a multiset of constraints $C$ match if $c$ is entailed by $C$; in both cases, the substitution $\{x \mapsto C\}$ is returned. Notably, by applying the operator $\vdash$ to a constraint one can require an entailment check instead of a consistency check.

The concurrent constraint computing model is based on a shared store of constraints that provides partial information about the possible values that variables can assume. In C$\odot$WS the store of constraints is represented by the following service:
$$store_C \triangleq [n]\,(\, n!\langle C \rangle \mid *\,[x]\, n?\langle x \rangle . \,(\, p_s \cdot o_{get}!\langle x \rangle \mid [y]\, p_s \cdot o_{set}?\langle y \rangle . \, n!\langle y \rangle \,)\,)$$
where $p_s$ is a distinguished partner and $o_{get}$ and $o_{set}$ are distinguished operations.
Other services can interact with the store service in mutual exclusion, by acquiring the lock (and, at the same time, the stored value) with a receive along $p_s \cdot o_{get}$, and by releasing the lock (providing the new stored value) with an invoke along $p_s \cdot o_{set}$. Notably, local stores of constraints can be simply modelled by restricting the scope of the partner name $p_s$. The store is composed in parallel with the other services, which can act on it by performing operations for adding/removing constraints to/from the store (tell and retract, respectively), and for checking entailment/consistency of a constraint by/with the store (ask and check, respectively). These four operations can be rendered in C$\odot$WS as follows:
$$\llbracket \mathbf{tell}\ c.s \rrbracket = [n]\,(\, n!\langle c \rangle \mid [y]\, n?\langle y \rangle . \, [x]\, p_s \cdot o_{get}?\langle\langle y, x \rangle\rangle . \,(\, \{\, p_s \cdot o_{set}!\langle x \uplus \{y\} \rangle \,\} \mid \llbracket s \rrbracket \,)\,)$$
$$\llbracket \mathbf{ask}\ c.s \rrbracket = [n]\,(\, n!\langle c^{\vdash} \rangle \mid [y]\, n?\langle y \rangle . \, [x]\, p_s \cdot o_{get}?\langle\langle y, x \rangle\rangle . \,(\, \{\, p_s \cdot o_{set}!\langle x \rangle \,\} \mid \llbracket s \rrbracket \,)\,)$$
$$\llbracket \mathbf{check}\ c.s \rrbracket = [n]\,(\, n!\langle c \rangle \mid [y]\, n?\langle y \rangle . \, [x]\, p_s \cdot o_{get}?\langle\langle y, x \rangle\rangle . \,(\, \{\, p_s \cdot o_{set}!\langle x \rangle \,\} \mid \llbracket s \rrbracket \,)\,)$$
$$\llbracket \mathbf{retract}\ c.s \rrbracket = [n]\,(\, n!\langle c \rangle \mid [y]\, n?\langle y \rangle . \, [x]\, p_s \cdot o_{get}?\langle x \rangle . \,(\, \{\, p_s \cdot o_{set}!\langle x - y \rangle \,\} \mid \llbracket s \rrbracket \,)\,)$$
where $n$ is fresh. Essentially, each operation is a term that first takes the store of constraints (thus acquiring the lock, so that other services cannot concurrently interact with the store) and then returns the (possibly) modified store (thus releasing the lock). Since the invoke activities $n!\langle c \rangle$ and $n!\langle c^{\vdash} \rangle$ can be performed only if $\llbracket c \rrbracket$ is defined, i.e. if $c$ does not contain C$\odot$WS variables, the store can only contain evaluated constraints. Availability of the store is guaranteed by the fact that, once the store and the lock have been acquired, the activities reintroducing the store and releasing the lock are protected from the effect of kill activities. This disciplined use of the store makes it possible to preserve its consistency. Notably, the matching rules in Table 13 are essential for faithfully modelling the semantics of the original operations. Also notice that, in the definition of tell, the expression $x \uplus \{y\}$ is well-defined, since the variable $x$ is replaced by a multiset of constraints and $y$ by a single constraint.

While tell and ask are the classical concurrent constraint programming primitives, the operations check and retract are borrowed from [43]. In particular, the operation retract is debatable, since its adoption prevents the store of constraints from being 'monotonically' refined. In fact, in concurrent constraint programming a computation step does not change the value of a variable, but may rule out certain values that were previously possible; therefore, the set of possible values for a variable is contained in the set of possible values at any prior step. This monotonic evolution of the store during computations makes it possible to define the result of a computation as the least upper bound of all the stores occurring along the computation, and provides concurrent constraint languages with a simple denotational semantics in which programs are identified with closure operators on the semi-lattice of constraints [50].
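As a concrete illustration of the interplay between these encodings and the matching rules of Table 13, consider the following small worked instance of ours, which reuses the crisp constraints introduced at the beginning of this subsection. Suppose the store currently holds $C = \{cost \geq 350\}$ and a service executes $\mathbf{tell}\ (cost \leq 1000).s$. After the internal step on $n$ binds $y$ to $cost \leq 1000$, the receive along $p_s \cdot o_{get}$ matches the value $C$ emitted by the store, because
$$
\mathcal{M}(\langle cost \leq 1000,\ x \rangle,\ \{cost \geq 350\}) = \{x \mapsto \{cost \geq 350\}\}
$$
holds: $\text{isCons}(\{cost \geq 350\} \uplus \{cost \leq 1000\})$ is true, witnessed by the solution $\{cost \mapsto 500\}$. The protected invoke along $p_s \cdot o_{set}$ then writes back the store $\{cost \geq 350,\ cost \leq 1000\}$. Had the service executed $\mathbf{tell}\ (cost < 300).s$ instead, the match, and hence the whole operation, would have blocked, since the combined constraints admit no solution.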
Therefore, if one wants to exploit some of the properties of concurrent constraint programming that require monotonicity, one must consider the fragment of C$\odot$WS without retract. On the other hand, in the context of dynamic service discovery and negotiation, the use of the operation retract enables modelling many frequent situations where it is necessary to remove a constraint from the store for, e.g., weakening a request.

To avoid interference between communication and operations on the store, we do not allow constraints in the store to contain variables; thus, they cannot change due to the application of substitutions generated by communication. Indeed, suppose constraints in the store may contain variables and consider the following example:
\[
[x]\,(\, store_{\emptyset} \mid \mathbf{tell}\ (x \leq 5) . \,(\, n!\langle 6 \rangle \mid n?\langle x \rangle \,)\,)
\]
After the action tell has added $x \leq 5$ to the store, the communication along the endpoint $n$ can modify the constraint into $6 \leq 5$. This way, the communication can make the store inconsistent. This means that the write-once variables of C$\odot$WS are not suitable for modelling constraint variables. Therefore, as we stated before, we do not allow constraints in the store to contain variables. Instead, they can use specific names, which we call constraint variables and, for the sake of presentation, write as $\mathsf{x}, \mathsf{y}, \ldots$ (i.e. in the sans serif style). Indeed, names are affected neither by expression evaluation (i.e. $\llbracket \mathsf{x} \rrbracket = \mathsf{x}$) nor by substitution application (i.e. $\mathsf{x} \cdot \sigma = \mathsf{x}$). Moreover, names can be delimited, thus allowing us to model local constraints. Notice, however, that constraints occurring as arguments of operations may contain variables, so that we can specify constraints that will be dynamically determined. E.g., we can write $\mathbf{tell}\ (\mathsf{cost} \geq x_{min\_cost}).s$; since $\llbracket \mathsf{cost} \geq x_{min\_cost} \rrbracket$ is undefined, this operation is blocked until the variable $x_{min\_cost}$ is substituted by a value.

Besides ask, tell, retract and check, inter-service communication can be used to implement many protocols allowing two parties to generate new constraints. For instance, in [43], service synchronization works like two global ask and tell constructs: as a result of the synchronization between the output $\bar{x}\langle y \rangle$ and the input $x(y')$, the new constraint $y = y'$ is added to the store. Therefore, synchronization allows local constraints (i.e. constraints with restricted names) to interact, thus establishing an SLA between the two parties, and (possibly) to become globally available. Differently, C$\odot$WS does not allow communication to directly generate new constraints: e.g., an invoke $p \cdot o!\langle \mathsf{x} \rangle$ and a receive $p \cdot o?\langle \mathsf{y} \rangle$ cannot synchronize, because $\mathcal{M}(\mathsf{y}, \mathsf{x})$ does not hold. Thus, to create constraints of the form $\mathsf{x} = \mathsf{y}$, where each of $\mathsf{x}$ and $\mathsf{y}$ is initially local to only one party, we can use the standard C$\odot$WS communication mechanism together with the operation tell. For example, the following term
\[
store_C \mid p \cdot o!\langle \mathsf{x} \rangle \mid [z]\, p \cdot o?\langle z \rangle . \, \mathbf{tell}\ (z = \mathsf{y}) . s \tag{2}
\]
for $z$ fresh in $s$, adds to the store the constraint $\mathsf{x} = \mathsf{y}$, if it is consistent with $C$.
This protocol is simple and divergence-free, but it may introduce deadlocked states in the terms, because the communication along the endpoint $p \cdot o$ takes place before the consistency check (performed by the operation tell). For other protocols that permit establishing new constraints while overcoming this problem, we refer the interested reader to [23]. Anyway, since the problem mentioned above does not occur in the specification in Section 5.2, in the sequel we implicitly rely on protocol (2).

### 5.2. Automatic discovery and negotiation in the automotive case study

We show here how our framework can be used to integrate publication, discovery and negotiation into the automotive case study presented in Section 2.2 and specified in COWS in Section 4.

Initially, each on road service (e.g. garages, tow trucks, ...) has to publish its service description on a service registry. For example, assume that a garage service description consists of: a string identifying the kind of provided service, the provider's partner name, and a constraint that defines the garage location. By assuming that the registry provides the operation $o_{pub}$ through the partner name $p_{reg}$, a garage service can request the publication of its description as follows:
\[
p_{reg} \cdot o_{pub}! \langle \text{"garage"}, p_{garage}, \mathsf{gps} = (4348.1143N, 1114.7206E) \rangle
\]
where $\mathsf{gps}$ is a constraint variable. The service registry is defined as
\[
[o_{DB}]\,(\, *\,[x_{type}, x_p, x_c]\, p_{reg} \cdot o_{pub}? \langle x_{type}, x_p, x_c \rangle . \, p_{reg} \cdot o_{DB}! \langle x_{type}, x_p, x_c \rangle \mid R^{search} \,)
\]
For each publication request received along the endpoint $p_{reg} \cdot o_{pub}$ from a provider service, the registry service outputs a service description along the private endpoint $p_{reg} \cdot o_{DB}$. The parallel composition of all these outputs represents the database of the registry. The subservice $R^{search}$, serving the searching requests, is defined as
\[
R^{search} \triangleq *\,[x_{type}, x_{client}, x_c, o_{addToList}, o_{askList}]\, p_{reg} \cdot o_{search}? \langle x_{type}, x_{client}, x_c \rangle . \, [p_s]\,(\, store_{\emptyset} \mid \mathbf{tell}\ x_c\,.\,R' \mid List \,)
\]
\[
R' \triangleq [k]\,(\, *\,[x_p, x_{const}]\, p_{reg} \cdot o_{DB}? \langle x_{type}, x_p, x_{const} \rangle . \,(\, \{\, p_{reg} \cdot o_{DB}! \langle x_{type}, x_p, x_{const} \rangle \,\} \mid \mathbf{check}\ x_{const}\,.\,p_{reg} \cdot o_{addToList}! \langle x_p \rangle \,)
\mid \odot_{\delta}\,.\,(\, \text{kill}(k) \mid \{\, [x_{list}]\, p_{reg} \cdot o_{askList}? \langle x_{list} \rangle . \, x_{client} \cdot o_{resp}! \langle x_{list} \rangle \,\} \,)\,)
\]
When a searching request is received along $p_{reg} \cdot o_{search}$, the registry service initializes a new local store (the delimitation $[p_s]$ makes $store_{\emptyset}$ inaccessible from outside the service $R^{search}$) by adding the constraint contained within the query message. Then, it cyclically reads a description (whose first field is the string specified by the client) from the internal database, checks if the provider constraints are consistent with the store and, in case of success, adds the provider's partner name to a list (by exploiting an internal service List, which provides the operations $o_{addToList}$ and $o_{askList}$). After $\delta$ time units from the initialization of the local store, the loop is terminated by executing a kill activity and the current list of providers for the service type $x_{type}$ is sent to the client.
Notably, reading a description from the database here consists of an input along $p_{reg} \cdot o_{DB}$ followed by an output along $p_{reg} \cdot o_{DB}$; this way, we are guaranteed that, after being consumed, the description is correctly added back to the database. It is worth noticing that, for the sake of simplicity, service descriptions are non-deterministically retrieved, thus the same provider can occur in the returned list many times. This behaviour could be avoided by refining the specification, e.g. by tagging each service description with an index (stored in an additional field) that is then exploited to read the descriptions in an ordered way.

After the user's car breaks down and Orchestrator is triggered, the service Discovery of the in-vehicle platform will receive from Orchestrator a request containing the GPS data of the car, which it stores in $x_{loc}$, and a string identifying the kind of the required services (see the specification in Section 4). By exploiting the latter information, it will know that it has to search for a garage, a tow truck and a rental car service. For example, the component taking care of discovering a garage service can be
\[
p_{reg} \cdot o_{search}! \langle \text{"garage"}, p_{car}, dist(x_{loc}, \mathsf{gps}) < 20 \rangle \mid [x_{garageList}]\, p_{car} \cdot o_{resp}? \langle x_{garageList} \rangle
\]
where the constraint $dist(x_{loc}, \mathsf{gps}) < 20$ means that the required garages must be less than 20 km away from the stranded car's actual location.

Once the discovery phase terminates and Reasoner communicates the best garage service to Orchestrator, the latter and the selected garage engage in a negotiation phase in order to sign an SLA. First, Orchestrator invokes the operation $o_{orderGar}$ provided by the selected garage (see the term OrderGarageAndTowTruck in Section 4); then, it starts the negotiation by performing an operation tell that adds Orchestrator's local constraints (i.e. constraints with restricted constraint variables) to the shared global store; finally, it synchronizes with the garage service, by invoking $o_{sync}$, in order to share its local constraints with it.
\[
[\mathsf{cost}, \mathsf{duration}] \\
\mathbf{tell}\ ((\mathsf{cost} < 1500 \land \mathsf{duration} < 48) \lor (\mathsf{cost} < 800 \land \mathsf{duration} \geq 48))\,. \\
(\, x_{garage} \cdot o_{sync}! \langle \mathsf{cost}, \mathsf{duration} \rangle \\
\quad \mid p_{car} \cdot o_{garageOK}? \langle x_{gps}, x_{garageInfo} \rangle . \cdots \; + \; p_{car} \cdot o_{garageFail}?\langle\rangle . \cdots \,)
\]
In our example, the constraints state that for a repair in less than two days (i.e. 48 hours) the driver is willing to spend up to 1500 Euros, otherwise he is ready to spend less than 800 Euros. After the synchronization with Orchestrator, the selected garage service tries to impose its first-rate constraint $c = ((\mathsf{cost'} > 2000 \land 6 < \mathsf{duration'} < 24) \lor (\mathsf{cost'} > 1500 \land \mathsf{duration'} \geq 24))$ and, if it fails to reach an agreement within $\delta'$ time units, weakens the requirements and retries with the constraint $c' = ((\mathsf{cost'} > 1700 \land 6 < \mathsf{duration'} < 24) \lor (\mathsf{cost'} > 1200 \land \mathsf{duration'} \geq 24))$. Both constraints are specifically generated by the garage service for the occurred engine failure, by exploiting the transmitted diagnostic data. After $\delta''$ time units, if also the second attempt fails, it gives up the negotiation. This negotiation task is modelled as follows:
\[
[x_{cost}, x_{duration}, \mathsf{cost'}, \mathsf{duration'}] \\
p_{garage} \cdot o_{sync}? \langle x_{cost}, x_{duration} \rangle . \, \mathbf{tell}\ (x_{cost} = \mathsf{cost'} \land x_{duration} = \mathsf{duration'})\,. \\
(\, \mathbf{tell}\ c\,.\,x_{cust} \cdot o_{garageOK}! \langle garageGPS, garageInfo \rangle \\
\quad + \; \odot_{\delta'}\,.\,(\, \mathbf{tell}\ c'\,.\,x_{cust} \cdot o_{garageOK}! \langle garageGPS, garageInfo \rangle \\
\quad + \; \odot_{\delta''}\,.\,x_{cust} \cdot o_{garageFail}!\langle\rangle \,)\,)
\]
Notably, tell operations cannot be directly used as guards for the choice operator. Thus, a term like $\mathbf{tell}\ c.s + \odot_{\epsilon}.s'$ should be considered as an abbreviation for
\[
[p, q, o]\,(\, \mathbf{check}\ c\,.\,(\, p \cdot o!\langle\rangle \mid q \cdot o?\langle\rangle . \, \mathbf{tell}\ c.s \,) \mid (\, \odot_{\epsilon}.s' + p \cdot o?\langle\rangle . \, q \cdot o!\langle\rangle \,)\,)
\]
Intuitively, if the constraint $c$ is consistent with the store, the timer can be stopped (i.e. the communication along $p \cdot o$ makes a choice and removes the wait activity); afterwards, the constraint can be added to the store, provided that other interactions that took place in the meantime do not lead to inconsistency (which, anyway, is not the case in our scenario). Otherwise, if the timeout expires, the constraint cannot be added to the store.

6. Related work

We have already pointed out, mainly in Section 3.1.1, the main relationships of C$\odot$WS with other process calculi. Summing up, C$\odot$WS borrows, e.g., global scoping and non-binding input from the update calculus [33] and the fusion calculus [34], the distinction between variables and values from value-passing CCS [51], the Applied $\pi$-calculus [52] and the Distributed $\pi$-calculus [53], pattern-matching from KLAIM [54], prioritised activities from variants of CCS with priority [55, 56, 57], and forced termination and protection from StAC [58].

Many works put forward enrichments of some well-known process calculus with constructs inspired by those of WS-BPEL. Most of them deal with issues of web transactions, such as interruptible processes, failure handlers and time. This is, for example, the case of [4, 5, 59, 60], which present timed and untimed extensions of the $\pi$-calculus, called web$\pi$ and web$\pi_{\infty}$, tailored to study a simplified version of the scope construct of WS-BPEL. Other proposals on the formalization of flow compensation are [61, 62], which give a more compact and closer description of the Sagas mechanism [32] for dealing with long running transactions, while some other works [4, 60] have concentrated on modelling web transactions and on studying their properties in programming languages based on the $\pi$-calculus. In contrast, C$\odot$WS aims at dealing at once with many different and typical aspects of SOC, thus modelling an expressive subset of WS-BPEL rather than only focussing on a few specific constructs. The formalism closest to C$\odot$WS is ws-calculus [7], which has been introduced to formalize the semantics of WS-BPEL. C$\odot$WS represents a more foundational formalism than ws-calculus in that it does not rely on explicit notions of location and state, it is more manageable (e.g. it has a simpler operational semantics) and it has at least equal expressive power (as the encoding of ws-calculus in COWS [21, Section 5.1.3] shows). Moreover, C$\odot$WS is equipped with timed constructs, while ws-calculus is not. For modelling time and timeouts, we have again drawn our inspiration from the rich literature on timed process calculi (see, e.g., [63, 64] for a survey). In C$\odot$WS, basic actions are durationless, i.e.
6. Related work

We have already pointed out, mainly in Section 3.1.1, the main relationships of COWS with other process calculi. Summing up, COWS borrows, e.g., global scoping and non-binding input from the update calculus [33] and the fusion calculus [34], the distinction between variables and values from value-passing CCS [51], the Applied π-calculus [52] and the Distributed π-calculus [53], pattern-matching from KLAIM [54], prioritised activities from variants of CCS with priority [55, 56, 57], and forced termination and protection from StAC [58].

Many works put forward enrichments of some well-known process calculus with constructs inspired by those of WS-BPEL. Most of them deal with issues of web transactions, such as interruptible processes, failure handlers and time. This is, for example, the case of [4, 5, 59, 60], which present timed and untimed extensions of the π-calculus, called web$\pi$ and web$\pi_\infty$, tailored to study a simplified version of the scope construct of WS-BPEL. Other proposals on the formalization of flow compensation are [61, 62], which give a more compact and direct description of the Sagas mechanism [32] for dealing with long-running transactions, while some other works [4, 60] have concentrated on modelling web transactions and on studying their properties in programming languages based on the π-calculus. In contrast, COWS aims at dealing at once with many different and typical aspects of SOC, thus modelling an expressive subset of WS-BPEL rather than focussing only on a few specific constructs.

The formalism closest to COWS is ws-calculus [7], which has been introduced to formalize the semantics of WS-BPEL. COWS is a more foundational formalism than ws-calculus in that it does not rely on explicit notions of location and state, it is more manageable (e.g. it has a simpler operational semantics) and it is at least as expressive (as the encoding of ws-calculus in COWS [21, Section 5.1.3] shows). Moreover, COWS is equipped with timed constructs while ws-calculus is not.

For modelling time and timeouts, we have again drawn inspiration from the rich literature on timed process calculi (see, e.g., [63, 64] for a survey). In COWS, basic actions are durationless, i.e. instantaneous, and the passing of time is modelled by using explicit actions, like in TCCS [65]. Moreover, action execution is lazy, i.e. it can be delayed arbitrarily long in favour of the passing of time, like in lTCCS [66].

The correlation mechanism was first exploited in [67], which, however, only considers interaction among different instances of a single business process. Instead, to connect the interaction protocols of clients and of the respective service instances, a large strand of work (among which we mention [68, 69, 70, 14, 71, 72, 73]) relies on the explicit modelling of interaction sessions and their dynamic creation (which exploits the mechanism of private names of the π-calculus). Sessions are not explicitly modelled in COWS: they can be identified by tracing all those exchanged messages that are correlated to each other through their contents (as in [12]). We believe that the mechanism based on correlation sets, which exploits business data and communication protocol headers to correlate different interactions, is more robust and fits the loosely coupled world of Web Services better than that based on explicit session references. It is no accident that WS-BPEL itself uses correlation sets.
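Since the correlation mechanism plays such a central role, a tiny sketch may help to convey the idea. The following Python fragment is only an analogy, not COWS's pattern-matching semantics: messages are dictionaries of business data, a correlation set (here just the hypothetical field car_id) determines which service instance each message belongs to, and no explicit session reference is ever exchanged.

```python
from collections import defaultdict

CORRELATION_SET = ("car_id",)  # the fields whose values identify an instance

def correlation_key(msg: dict) -> tuple:
    return tuple(msg[field] for field in CORRELATION_SET)

# Instance state, keyed purely by correlation values; a fresh key
# implicitly creates a new instance (cf. replication in COWS).
instances = defaultdict(list)

def deliver(msg: dict) -> None:
    # Routing uses message content only: no session identifier is needed.
    instances[correlation_key(msg)].append(msg)

deliver({"car_id": "DX-42", "op": "orderGarage", "gps": (54.4, 18.6)})
deliver({"car_id": "ZK-07", "op": "orderGarage", "gps": (54.5, 18.5)})
deliver({"car_id": "DX-42", "op": "garageOK"})  # correlated to the first instance

print({key: [m["op"] for m in msgs] for key, msgs in instances.items()})
# {('DX-42',): ['orderGarage', 'garageOK'], ('ZK-07',): ['orderGarage']}
```

A session-based approach would instead thread an explicit private channel through both interactions; here the two messages for car DX-42 end up at the same instance merely because they carry the same business data.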
Another body of work has been devoted to studying mechanisms for comparing global descriptions (i.e. choreographies) and local descriptions (i.e. orchestrations) of the same system. Means to check conformance of these different views have been defined in [74, 10] and, by relying on session types, in [75]. COWS, instead, only considers service orchestration and focuses on modelling the dynamic behaviour of services without the limitations possibly introduced by a layer of choreography.

Regarding QoS requirement specifications and SLA achievements, most of the proposals in the literature result from the extension of some well-known process calculus with constructs to describe QoS requirements. This is, for example, the case of cc-pi [43], a calculus that generalises the explicit name 'fusions' of the pi-F calculus [76] to 'named constraints', namely constraints defined on enriched c-semiring structures. cc-pi, like COWS, combines basic features of name-passing calculi with those of concurrent constraint programming, first introduced in [44], and of its soft variant [45]. However, rather than on fusions of names, COWS relies on substitutions of variables with values and can thus also express soft constraints by exploiting the simpler notion of c-semiring. Moreover, COWS permits defining local stores of constraints, while cc-pi processes necessarily share one global store. [77] introduces another formalism for soft concurrent constraint programming, namely nmsccp, which permits the nonmonotonic evolution of the store of constraints. Besides the retract operation also used in COWS, nmsccp provides an update operation, to relax those constraints of the store dealing with certain variables while adding a new constraint, and a nask operation, to test whether a constraint is not entailed by the store. If and how the latter two operations can be rendered in the variant of COWS presented in Section 5.1 is left for future investigation.

A similar approach to SLA negotiation is proposed in [78], although it is based on fuzzy sets instead of constraints and relies on three different languages: one for client requests, one for provider descriptions, and one for contract creation and revocation. SLA compliance has also been the focus of KoS [79] and KAOS [80], two calculi designed for modelling network-aware applications with located services and mobility. In both cases, QoS parameters are associated with connections and nodes of nets, and operations have a QoS value; the operational semantics ensures that systems evolve according to SLAs. All the mentioned proposals aim at specifying and concluding SLAs, while COWS also permits modelling other service-oriented aspects, such as service instances and interactions, fault and compensation handling, and dynamic service publication, discovery and orchestration.

Integrations of the concurrent constraint paradigm with process calculi have also been used to define foundational formalisms for computer music languages. This is the case of the $\pi^+$-calculus [81], an extension of the (polyadic) $\pi$-calculus with agents that can interact with a store of constraints by performing 'tell' and 'ask' actions. Differently from COWS, the store of constraints is not a term of the calculus: indeed, the operational semantics of the $\pi^+$-calculus is defined over configurations consisting of pairs of an agent and a store, and local stores are not supported.

Some other works, differently from COWS, exploit static service discovery mechanisms. For example, [82] introduces an extension of the $\lambda$-calculus with primitive constructs for call-by-contract invocation, for which a completely static approach for regulating secure service composition has been devised. In particular, automatic machinery, based on a type system and a model-checking technique, has been defined to construct a viable plan for the execution of services belonging to a given orchestration. Non-functional aspects are included and enforced by means of a runtime security monitor. In [83], users' requests and compositions of web services are statically modelled via constraints. Finally, the contract calculi of [84] represent a more abstract approach for statically checking compliance between client requirements and service functionalities. A contract defines the possible flows of interactions of a service, but does not take non-functional properties into account and, thus, cannot be used for specifying and negotiating SLAs.

So far, we have discussed the relationship between COWS and other formal languages for specifying SOC applications and their main features. We conclude this section with a discussion of the relationship between COWS and WS-BPEL, namely the SOC technology that more than any other has influenced COWS's design. On the one hand, COWS distills out of WS-BPEL those features that are, in our opinion, absolutely necessary to formally define the basic elements and mechanisms underlying the SOC paradigm.
Indeed, COWS directly borrows from WS-BPEL the notions of partner and operation, the communication primitives (for invoking an operation offered by a service and for waiting for an invocation to arrive), the related mechanism for message correlation, and the timed activity (for delaying execution for some amount of time). COWS also retains WS-BPEL's constructs \textit{flow}, to execute activities in parallel, and \textit{pick}, to execute activities selectively, which correspond to COWS's parallel composition and choice operators, respectively. On the other hand, while the set of WS-BPEL constructs is not intended to be minimal, COWS aims at being a foundational model and, thus, at keeping its semantics rigorous but still manageable and not strongly tied to current web services technology. Therefore, some WS-BPEL constructs do not have a precise counterpart in COWS; rather, they are expressed in terms of more primitive operators. For example, fault and compensation handlers are rendered in COWS, as shown in Section 4, by means of the primitives dealing with termination, i.e. kill, protection and delimitation. Indeed, when a fault occurs during the execution of a given activity, the kill primitive makes it possible to immediately interrupt the currently running activities within the scope of the fault (identified by the delimitation operator), while the protection operator is used to avoid involving any fault/compensation handling behaviour in the forced termination. Similarly, service instantiation is rendered in COWS through the replication operator, while shared state among service instances is rendered through variable delimitation. Finally, as shown in [8], the standard imperative constructs (assignment, while, if-then-else, etc.) can be easily expressed in COWS, as can the remaining features of WS-BPEL, like e.g. the synchronisation dependencies within flow activities.
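To give an operational feel for the interplay of kill, protection and delimitation just described, the following Python sketch mimics it with asyncio; it is only an analogy, not the COWS semantics, and all names (scope, activity, fault-handler) are hypothetical. Task cancellation plays the role of kill, asyncio.shield plays the role of the protection operator, and the enclosing coroutine delimits the kill's scope.

```python
import asyncio

async def activity(name: str, delay: float) -> None:
    # A running activity; cancellation models forced termination.
    try:
        await asyncio.sleep(delay)
        print(f"{name}: completed")
    except asyncio.CancelledError:
        print(f"{name}: forcibly terminated")
        raise

async def protected(aw) -> None:
    # "Protection": shield the wrapped activity from cancellation.
    await asyncio.shield(aw)

async def scope() -> None:
    # "Delimitation": only tasks created here fall within the kill's scope.
    handler = asyncio.create_task(activity("fault-handler", 0.5))
    tasks = [
        asyncio.create_task(activity("activity-0", 2.0)),
        asyncio.create_task(activity("activity-1", 2.0)),
        asyncio.create_task(protected(handler)),
    ]
    await asyncio.sleep(0.1)   # a fault is detected here...
    for t in tasks:            # ..."kill" terminates everything in scope,
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    await handler              # ...but the protected handler still completes

asyncio.run(scope())
```

Running the sketch, the two ordinary activities report forced termination while the shielded fault handler completes, which mirrors how a COWS fault handler survives the termination of its enclosing scope.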
7. Concluding remarks and future work

This paper provides a formal account of the SOC paradigm and related technologies. The introduction of COWS as a formalism specifically devised for modelling service-oriented applications is indeed an important step towards the comprehension of the mechanisms underlying the SOC paradigm. On the one hand, since the design of the calculus has been influenced by the principles underlying WS-BPEL, COWS permits modelling in a natural way different and typical aspects of (web) services technologies, such as multiple start activities, receive conflicts, timed constructs, delivery of correlated messages, and service instances and interactions among them. On the other hand, COWS is a foundational formalism, not specifically tied to current web services technology, and borrows many constructs from well-known process calculi, e.g. the π-calculus, the update calculus, StAC, and Lπ. We have illustrated the syntax, operational semantics and pragmatics of the calculus by means of a large case study from the automotive domain and a number of more specific examples drawn from it. We have also introduced a dialect of the language that turned out to be capable of modelling all the phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, orchestration, deployment, reconfiguration and execution.

As further evidence of the quality of the design of our formalism, since its definition a number of methods and tools have been devised to analyse COWS specifications. We mention here: the stochastic extension and the BPMN-based notation defined in [9, 16] to enable quantitative reasoning on service behaviours; the type system introduced in [17] to check confidentiality properties; the logic and model checker presented in [18], and exploited in [85], to express and check functional properties of services; the bisimulation-based observational semantics defined in [19] to check interchangeability of services and conformance against service specifications; and the symbolic characterisation of the operational semantics of COWS presented in [20] to avoid infinite representations of COWS terms due to the value-passing nature of communication. An overview of most of the tools mentioned above and of the classes of properties that can be analysed by using them can be found in [21]. For the time being, the above analysis tools are applicable to the time-free fragment of COWS. We do not envisage any major issues in tailoring them to the full timed calculus, but leave this extension as future work.

To complete our programme to lay rigorous methodological foundations and provide supporting tools for the specification, validation and development of SOC applications, we plan in the near future to develop a prototype implementation of COWS, possibly enriched with standard linguistic constructs supporting real application development. This would permit assessing COWS's practical usability and help close the gap between theory and practice. The implementation of a language based on a process calculus typically consists of a run-time system (a sort of abstract machine) implemented in a high-level language like Java, and of a compiler that, given a program written in the programming language based on the calculus, produces code targeting this run-time system. In the development of our language, we intend to follow a similar approach. In this regard, the major issue we envisage is the integration of our framework with the current standard technologies supporting web services interaction, such as WSDL and SOAP. In particular, the code generated from COWS services should be able to invoke operations provided by available web services and, in turn, to expose its functionalities as a standard web service. Some implementations of service-oriented calculi that could serve as a guide for our work are the following: JCaSPiS [86], a Java implementation of the calculus CaSPiS [14] based on a generic framework that provides recurrent mechanisms for network applications; BliteC [87], a Java tool that accepts as input a specification written in Blite [40], a formal orchestration language inspired by, but simpler than, WS-BPEL, and returns the corresponding WS-BPEL program together with the associated WSDL and deployment descriptor files; JOLIE [88], an interpreter written in Java for a programming language designed for web service orchestration and based on SOCK [12]; JSCL [89], a coordination middleware for services based on the event notification paradigm of the Signal Calculus [90]; and PiDuce [38], a distributed run-time environment devised for experimenting with web services technologies, which implements a variant of the asynchronous π-calculus extended with native XML values, datatypes and patterns.

**Acknowledgments.** We thank the anonymous reviewers for their useful comments.
We also thank Alessandro Lapadula for his fundamental contribution to the definition of COWS.

References

[1] L. Meredith, S. Bjorg, Contracts and types, Communications of the ACM 46 (2003) 41–47.
[2] F. van Breugel, M. Koshkina, Models and verification of BPEL, Technical Report, Department of Computer Science and Engineering, York University, 2006. Available at http://www.cse.yorku.ca/~franck/research/drafts/tutorial.pdf.
[3] L. Bocchi, C. Laneve, G. Zavattaro, A Calculus for Long-Running Transactions, in: FMOODS, volume 2884 of LNCS, Springer, 2003, pp. 124–138.
[4] C. Laneve, G. Zavattaro, Foundations of Web Transactions, in: FoSSaCS, volume 3441 of LNCS, Springer, 2005, pp. 282–298.
[5] C. Laneve, G. Zavattaro, web-pi at Work, in: TGC, volume 3705 of LNCS, Springer, 2005, pp. 182–194.
[6] M. Butler, C. Hoare, C. Ferreira, A Trace Semantics for Long-Running Transactions, in: 25 Years Communicating Sequential Processes, volume 3525 of LNCS, Springer, 2005, pp. 133–150.
[7] A. Lapadula, R. Pugliese, F. Tiezzi, A WSDL-based type system for WS-BPEL, in: COORDINATION, volume 4038 of LNCS, Springer, 2006, pp. 145–163.
[8] A. Lapadula, R. Pugliese, F. Tiezzi, A Calculus for Orchestration of Web Services, in: ESOP, volume 4421 of LNCS, Springer, 2007, pp. 33–47.
[9] D. Prandi, P. Quaglia, Stochastic COWS, in: ICSOC, volume 4749 of LNCS, Springer, 2007, pp. 245–256.
[10] N. Busi, R. Gorrieri, C. Guidi, R. Lucchi, G. Zavattaro, Choreography and orchestration conformance for system design, in: COORDINATION, volume 4038 of LNCS, Springer, 2006, pp. 63–81.
[11] C. Laneve, L. Padovani, Smooth Orchestrators, in: FoSSaCS, volume 3921 of LNCS, Springer, 2006, pp. 32–46.
[12] C. Guidi, R. Lucchi, R. Gorrieri, N. Busi, G. Zavattaro, SOCK: A Calculus for Service Oriented Computing, in: ICSOC, volume 4294 of LNCS, Springer, 2006, pp. 327–338.
[13] M. Boreale, R. Bruni, L. Caires, R. De Nicola, I. Lanese, M. Loreti, F. Martins, U. Montanari, A. Ravara, D. Sangiorgi, V. Vasconcelos, G. Zavattaro, SCC: a Service Centered Calculus, in: WS-FM, volume 4184 of LNCS, Springer, 2006, pp. 38–57.
[14] M. Boreale, R. Bruni, R. De Nicola, M. Loreti, Sessions and Pipelines for Structured Service Programming, in: FMOODS, volume 5051 of LNCS, Springer, 2008, pp. 19–38.
[15] OASIS WS-BPEL TC, Web Services Business Process Execution Language Version 2.0, Technical Report, OASIS, 2007. Available at http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html.
[16] D. Prandi, P. Quaglia, N. Zannone, Formal analysis of BPMN via a translation into COWS, in: COORDINATION, volume 5052 of LNCS, Springer, 2008, pp. 249–263.
[17] A. Lapadula, R. Pugliese, F. Tiezzi, Regulating data exchange in service oriented applications, in: FSEN, volume 4767 of LNCS, Springer, 2007, pp. 223–239.
[18] A. Fantechi, S. Gnesi, A. Lapadula, F. Mazzanti, R. Pugliese, F. Tiezzi, A Logical Verification Methodology for Service-Oriented Computing, ACM Transactions on Software Engineering and Methodology (2011). To appear.
[19] R. Pugliese, F. Tiezzi, N. Yoshida, On observing dynamic prioritised actions in SOC, in: ICALP, volume 5556 of LNCS, Springer, 2009, pp. 558–570.
[20] R. Pugliese, F. Tiezzi, N. Yoshida, A Symbolic Semantics for a Calculus for Service-Oriented Computing, in: PLACES, volume 241 of ENTCS, Elsevier, 2009, pp. 135–164.
[21] F. Tiezzi, Specification and Analysis of Service-Oriented Applications, PhD Thesis in Computer Science, Dipartimento di Sistemi e Informatica, Università degli Studi di Firenze, 2009.
Available at http://rap.dsi.unifi.it/cows.
[22] A. Lapadula, R. Pugliese, F. Tiezzi, COWS: A timed service-oriented calculus, in: ICTAC, volume 4711 of LNCS, Springer, 2007, pp. 275–290.
[23] A. Lapadula, R. Pugliese, F. Tiezzi, Service discovery and negotiation with COWS, in: WWV, volume 200(3) of ENTCS, Elsevier, 2008, pp. 133–154.
[24] A. Brown, S. Johnston, K. Kelly, Using Service-Oriented Architecture and Component-Based Development to Build Web Service Applications, Technical Report, Rational Software Corporation, 2003.
[25] D. Box, D. Ehnebuske, G. Kakivaya, A. Layman, N. Mendelsohn, H. Nielsen, S. Thatte, D. Winer, Simple Object Access Protocol (SOAP) 1.2, W3C recommendation, 2003. Available at http://www.w3.org/TR/SOAP/.
[26] E. Christensen, F. Curbera, G. Meredith, S. Weerawarana, Web Services Description Language (WSDL) 1.1, Technical Report, W3C, 2001. Available at http://www.w3.org/TR/wsdl/.
[27] UDDI Spec TC, UDDI Specification Technical Committee Draft, Technical Report, OASIS, 2004. Available at http://uddi.org/pubs/uddi_v3.htm/.
[28] C. Peltz, Web Services Orchestration and Choreography, Computer 36 (2003) 46–52.
[29] N. Koch, Automotive Case Study: UML Specification of On Road Assistance Scenario, Technical Report 1, FAST GmbH, 2007. Available at http://rap.dsi.unifi.it/sensoriasite/files/FAST_report_1_2007_ACS_UML.pdf.
[30] SENSORIA, Software engineering for service-oriented overlay computers, 2005-2010. Web site: http://www.sensoria-ist.eu/.
[31] P. Mayer, A. Schroeder, N. Koch, A Model-Driven Approach to Service Orchestration, in: SCC, volume 2, IEEE Computer Society Press, 2008, pp. 533–536.
[32] H. Garcia-Molina, K. Salem, Sagas, in: SIGMOD, ACM Press, 1987, pp. 249–259.
[33] J. Parrow, B. Victor, The update calculus, in: AMAST, volume 1349 of LNCS, Springer, 1997, pp. 409–423.
[34] J. Parrow, B. Victor, The fusion calculus: Expressiveness and symmetry in mobile processes, in: LICS, IEEE Computer Society Press, 1998, pp. 176–185.
[35] M. Carbone, S. Maffeis, On the expressive power of polyadic synchronisation in $\pi$-calculus, Nordic Journal of Computing 10 (2003) 70–98.
[36] M. Merro, D. Sangiorgi, On asynchrony in name-passing calculi, Mathematical Structures in Computer Science 14 (2004) 715–767.
[37] P. Gardner, C. Laneve, L. Wischik, Linear Forwarders, in: CONCUR, volume 2761 of LNCS, Springer, 2003, pp. 408–422.
[38] S. Carpineti, C. Laneve, L. Padovani, PiDuce - a project for experimenting Web services technologies, Science of Computer Programming 74 (2009) 777–811.
[39] R. Amadio, I. Castellani, D. Sangiorgi, On Bisimulations for the Asynchronous pi-Calculus, Theoretical Computer Science 195 (1998) 291–324.
[40] A. Lapadula, R. Pugliese, F. Tiezzi, A formal account of WS-BPEL, in: COORDINATION, volume 5052 of LNCS, Springer, 2008, pp. 199–215.
[41] R. van Glabbeek, On Specifying Timeouts, in: APC, volume 162 of ENTCS, Elsevier, 2006, pp. 173–175.
[42] R. Pugliese, F. Tiezzi, A COWS Specification of an Automotive Case Study, Technical Report, Dipartimento di Sistemi e Informatica, Università degli Studi di Firenze, 2011. Available at http://rap.dsi.unifi.it/cows/automotiveCSinCOWSforJAL.pdf.
[43] M. Buscemi, U. Montanari, CC-PI: A Constraint-Based Language for Specifying Service Level Agreements, in: ESOP, volume 4421 of LNCS, Springer, 2007, pp. 18–32.
[44] V. Saraswat, M. Rinard, Concurrent Constraint Programming, in: POPL, ACM Press, 1990, pp. 232–245.
[45] S. Bistarelli, U. Montanari, F.
Rossi, Soft concurrent constraint programming, ACM Transactions on Computational Logic 7 (2006) 563–589.
[46] S. Bistarelli, U. Montanari, F. Rossi, Semiring-based constraint satisfaction and optimization, Journal of the ACM 44 (1997) 201–236.
[47] U. Montanari, F. Rossi, Constraint Relaxation may be Perfect, Artificial Intelligence 48 (1991) 143–170.
[48] S. Bistarelli, Semirings for Soft Constraint Solving and Programming, LNCS, Springer, 2004.
[49] M. Wirsing, G. Denker, C. Talcott, A. Poggio, L. Briesemeister, A Rewriting Logic Framework for Soft Constraints, in: WRLA, volume 176(4) of ENTCS, Elsevier, 2007, pp. 181–197.
[50] V. A. Saraswat, M. C. Rinard, P. Panangaden, Semantic Foundations of Concurrent Constraint Programming, in: POPL, ACM Press, 1991, pp. 333–352.
[51] R. Milner, Communication and concurrency, Prentice-Hall, 1989.
[52] M. Abadi, C. Fournet, Mobile values, new names, and secure communication, in: POPL, ACM Press, 2001, pp. 104–115.
[53] M. Hennessy, J. Riely, Resource access control in systems of mobile agents, Information and Computation 173 (2002) 82–120.
[54] R. De Nicola, G. Ferrari, R. Pugliese, KLAIM: A Kernel Language for Agents Interaction and Mobility, IEEE Transactions on Software Engineering 24 (1998) 315–330.
[55] R. Cleaveland, G. Lüttgen, V. Natarajan, Priorities in process algebra, in: Handbook of Process Algebra, chapter 12, Elsevier, 2001, pp. 391–424.
[56] J. Camilleri, G. Winskel, CCS with Priority Choice, Information and Computation 116 (1995) 26–37.
[57] I. Phillips, CCS with priority guards, Journal of Logic and Algebraic Programming 75 (2008) 139–165.
[58] M. Butler, C. Ferreira, An Operational Semantics for StAC, a Language for Modelling Long-Running Business Transactions, in: COORDINATION, volume 2949 of LNCS, Springer, 2004, pp. 87–104.
[59] M. Mazzara, I. Lanese, Towards a Unifying Theory for Web Services Composition, in: WS-FM, volume 4184 of LNCS, Springer, 2006, pp. 257–272.
[60] M. Mazzara, R. Lucchi, A pi-calculus based semantics for WS-BPEL, Journal of Logic and Algebraic Programming 70 (2006) 96–118.
[61] R. Bruni, H. Melgratti, U. Montanari, Theoretical foundations for compensations in flow composition languages, in: POPL, ACM Press, 2005, pp. 209–220.
[62] R. Bruni, M. Butler, C. Ferreira, T. Hoare, H. Melgratti, U. Montanari, Comparing two approaches to compensable flow composition, in: CONCUR, volume 3653 of LNCS, Springer, 2005, pp. 383–397.
[63] F. Corradini, D. D'Ortenzio, P. Inverardi, On the Relationships among four Timed Process Algebras, Fundamenta Informaticae.
[64] X. Nicollin, J. Sifakis, An Overview and Synthesis on Timed Process Algebras, in: CAV, volume 575 of LNCS, Springer, 1991, pp. 376–398.
[65] F. Moller, C. Tofts, A Temporal Calculus of Communicating Systems, in: CONCUR, volume 458 of LNCS, Springer, 1990, pp. 401–415.
[66] F. Moller, C. Tofts, Relating Processes With Respect to Speed, in: CONCUR, volume 527 of LNCS, Springer, 1991, pp. 424–438.
[67] M. Viroli, Towards a Formal Foundation to Orchestration Languages, in: WS-FM, volume 105 of ENTCS, Elsevier, 2004, pp. 51–71.
[68] K. Honda, V. T. Vasconcelos, M. Kubo, Language Primitives and Type Discipline for Structured Communication-Based Programming, in: ESOP, volume 1381 of LNCS, Springer, 1998, pp. 122–138.
[69] M. Carbone, K. Honda, N. Yoshida, Structured Communication-Centred Programming for Web Services, in: ESOP, volume 4421 of LNCS, Springer, 2007, pp. 2–17.
[70] I. Lanese, F. Martins, A. Ravara, V.
Vasconcelos, Disciplining Orchestration and Conversation in Service-Oriented Computing, in: SEFM, IEEE Computer Society Press, 2007, pp. 305–314.
[71] L. Caires, H. Vieira, Conversation types, Theoretical Computer Science 411 (2010) 4399–4440.
[72] K. Honda, N. Yoshida, M. Carbone, Multiparty asynchronous session types, in: POPL, ACM Press, 2008, pp. 273–284.
[73] R. Bruni, I. Lanese, H. Melgratti, E. Tuosto, Multiparty Sessions in SOC, in: COORDINATION, volume 5052 of LNCS, Springer, 2008, pp. 67–82.
[74] N. Busi, R. Gorrieri, C. Guidi, R. Lucchi, G. Zavattaro, Choreography and Orchestration: A Synergic Approach for System Design, in: ICSOC, volume 3826 of LNCS, Springer, 2005, pp. 228–240.
[75] M. Carbone, K. Honda, N. Yoshida, A Calculus of Global Interaction based on Session Types, in: DCM, volume 171(3) of ENTCS, Elsevier, 2007, pp. 123–151.
[76] L. Wischik, P. Gardner, Explicit fusions, Theoretical Computer Science 340 (2005) 606–630.
[77] S. Bistarelli, F. Santini, A Nonmonotonic Soft Concurrent Constraint Language for SLA Negotiation, in: VODCA, volume 236 of ENTCS, Elsevier, 2009, pp. 147–162.
[78] D. Bacciu, A. Botta, H. Melgratti, A fuzzy approach for negotiating quality of services, in: TGC, volume 4661 of LNCS, Springer, 2006, pp. 200–217.
[79] R. De Nicola, G. Ferrari, U. Montanari, R. Pugliese, E. Tuosto, A Formal Basis for Reasoning on Programmable QoS, in: Verification: Theory and Practice, volume 2772 of LNCS, Springer, 2003, pp. 436–479.
[80] R. De Nicola, G. Ferrari, U. Montanari, R. Pugliese, E. Tuosto, A Process Calculus for QoS-Aware Applications, in: COORDINATION, volume 3454 of LNCS, Springer, 2005, pp. 33–48.
[81] J. Díaz, C. Rueda, F. Valencia, $\pi^+$-calculus: A Calculus for Concurrent Processes with Constraints, CLEI Electronic Journal 1 (1998).
[82] M. Bartoletti, P. Degano, G. Ferrari, Security Issues in Service Composition, in: FMOODS, volume 4037 of LNCS, Springer, 2006, pp. 1–16.
[83] A. Lazovik, M. Aiello, R. Gennari, Encoding Requests to Web Service Compositions as Constraints, in: CP, volume 3709 of LNCS, Springer, 2005, pp. 782–786.
[84] M. Bravetti, G. Zavattaro, Contract Based Multi-party Service Composition, in: FSEN, volume 4767 of LNCS, Springer, 2007, pp. 207–222.
[85] F. Banti, R. Pugliese, F. Tiezzi, An Accessible Verification Environment for UML Models of Services, Journal of Symbolic Computation 46 (2011) 119–149.
[86] L. Bettini, R. De Nicola, M. Lacoste, M. Loreti, Implementing Session Centered Calculi, in: COORDINATION, volume 5052 of LNCS, Springer, 2008, pp. 17–32.
[87] L. Cesari, R. Pugliese, F. Tiezzi, A tool for rapid development of WS-BPEL applications, SIGAPP Applied Computing Review 11 (2010) 27–40.
[88] F. Montesi, C. Guidi, R. Lucchi, G. Zavattaro, JOLIE: a Java Orchestration Language Interpreter Engine, in: MTCoord, volume 181 of ENTCS, Elsevier, 2007, pp. 19–33.
[89] G. Ferrari, R. Guanciale, D. Strollo, E. Tuosto, Event-Based Service Coordination, in: Concurrency, Graphs and Models, volume 5065 of LNCS, Springer, 2008, pp. 312–329.
[90] G. Ferrari, R. Guanciale, D. Strollo, Event based service coordination over dynamic and heterogeneous networks, in: ICSOC, volume 4294 of LNCS, Springer, 2006, pp. 453–458.
The President's Letter

Hello, I hope everyone had as much fun at the Frozen Fantasy as Jimmy and I did. What a great party. A big "thank you" to Scott and Kathy for hosting again this year. Will they make it a three-peat? I would also like to thank Hope and the whole social committee for a great job. Everything was wonderful. Things go much smoother when we all pitch in and help. There were some awesome entries in the Frozen Fantasy contest. I hear the judges had a hard time deciding the winners. Look in this issue for some of the great recipes. You may want to start perfecting your recipe for next year now. Last year we had fewer than ten entries; this year, almost twenty.

Who's ready for SOS? Start making your plans now. Jimmy and I are excited about going this year. This is something we look forward to: good friends, good music and iced tea (you know the kind I'm talking about). Don't miss the kick-off party at the Finish Line. Once you get a chance to recover from SOS, prepare yourself for a big Halloween Party. Then it will be time for our Fall Cyclone. It's the most fun you can have anywhere. I challenge anyone to find a better party for the money. Great dance floor, plenty of food and drinks. There is something for everyone: line dances, shag classes, shooters, a Junior Shag Dance Team exhibition, and much more. Shrimp and grits are back again for breakfast, there is a shooter contest, and there will be bartenders from the beach clubs. We have added a pre-party on Thursday at the Finish Line, too. Tell your friends and neighbors and let's make this the best party ever. Thanks, and remember to Have Fun! Dean

Club Meeting Night

Our monthly meeting will be held on Tuesday, September 2nd at Fat Boys in Mooresville. Social hour starts at 7pm. Come early to eat. The business meeting will start at 8pm. Jimmy Melton won the TSC Treasure Chest in August. Don't miss your chance to win at the meeting this month.

September Dance Lessons

On Tuesday night, September 9th there will be a free night of line dance instruction. Kathy Thompson will be the teacher. The class will be from 7:30pm-9pm. There will be no lessons during the week of SOS. But on Tuesday, September 23 and Tuesday, September 30 we will offer one-night workshop classes with Ashley and Tobitha. Class will last from 7:30-9pm each night and will be $10 per person. This is a great opportunity to improve your shagging!

Thoughts About Fun Times
By Kathy Strantz

Frozen Fantasy filled with fun, food, friends and frozen treats! Thanks to host/hostess Scott and Cathy Fletcher! Congratulations to all the Frozen Fantasy entrants! Congratulations to Wilma Laws for "Best Frozen Fantasy". Special thanks to all who shared their tasty treats, especially Barbara Zimmerman with her "5" specialties! Always great to have Roger Holcomb as DJ! Many thanks to the Fab "4", Lloyd, Scott, Jimmy, and Dick, who came to my rescue when I got locked in the car with the alarm blaring and I couldn't figure out how to get out! Another great Frozen Fantasy! Thanks for more scrapbook memories! Looking forward to more fun and friends at SOS!

On Friday, September 5th At The Finish Line
This SOS Kickoff Party Will Be A Tacky Clothes And Beachwear Event
Prizes For Best Outfits! DJ: Roger Holcomb
Wear your tacky best, and come have fun!

Last Chance For Raffle Tickets

The shag club meeting on Tuesday night is your last chance to purchase Fun Monday Raffle Tickets. TSC is selling the tickets to help pay for all the bands and entertainment during Fun Sunday and Fun Monday.
(Any excess funds taken in are donated to charity.) A mere $20 helps this cause and gives you ten different chances to win up to $1500. The shag club can also win if we sell the most tickets in our category of membership numbers. Please see Mike Rink at the meeting to buy in on this. You don't want to miss a chance to win, and you can help a worthy effort.

SOS Activities For Members And Guests!

Don't forget to take plenty of Cyclone business cards to give out during the week. Please make sure you see our flyers on display when you enter the lounges, too. If they don't have them, put some out or let someone know who can. Thank you!

Saturday, Sept 13: Fat Harold's Back Deck at 5pm. It's our famous "Tea Party". (For new members, that's Long Island Iced Tea.) We'll have food, too. Come to have fun. It's only $2 for members and $10 for guests.

Fun Sunday, Sept 14: Live music in the vacant lot on Ocean Blvd. between the Pirate's Cove and the O.D. Arcade.

Fun Monday, Sept 15: Live music on Main Street. We will have a tent not far from the front of the stage. Please find our tent and help us pass out Cyclone fans and flyers. We will have a lot of fun! We may need help transporting the tent and other items to the beach. Can you help? If so, please let Mike Rink know ASAP.

Thursday, Sept 18: Eastport Golf Course. Captain's Choice golf outing for men and women, members and guests. Cost is only $30 for us. They are also giving us $2 beer. The tee time is 11am, so please be there by 10:00 or 10:30 so we can pick teams, etc. Please let us know ASAP if you know of a guest who would like to come.

Friday, Sept 19: Pirate's Cove at 5pm. Again, there will be food! So come early or risk missing out. We've also invited some of our fellow shag club Presidents to join us. We all want to encourage them to attend our Fall Cyclone while they are there. Again, it's $2 for members and $10 for guests.

Sunday, Sept 21: Fat Harold's at 8pm. For those lucky enough to still be there, we will have our annual "I'm not leaving SOS and you can't make me" party. This event keeps growing, and it never fails to create some special memories.

Please try to wear a TSC hat, visor or button, or a Cyclone shirt at these events if possible. We like to be seen as a group. If you need TSC items, see Frances Smith or call her (704-662-9864).

An Unsolved Piece Of History?

Last month we showed you a photo that can be found at the pier at Cherry Grove and at "The Shack" restaurant. Chances are you might have seen this photo or postcard. But no one identified the well-known TSC member in it. Can you spot him/her?

Our Frozen Fantasy Party Hosts
By Marylee Kreamer

I'm sure there will be lots of articles about all the fun and funny things that were part of Frozen Fantasy 2014. As I'm right up against Mike's August newsletter deadline, I won't even try to describe how fabulous the party was. I'll leave that to others. But I do want to take this opportunity to thank our hosts, who welcomed this rowdy crowd into their beautiful home on the lake. We could have had a great party without them lifting a finger. But it's obvious they worked hard to make sure everything was perfect. There were even some notable upgrades, such as the great new bar. I think I may have missed my calling. I felt right at home behind it, listening to everyone's troubles. If only I knew how to actually mix drinks, I think I'd be a perfect bartender. Maybe by next year. So, "thank you" to Cathy and Scott and all those who helped make this year's party one of the best!
The 2014 Fall Cyclone

We have plenty of Fall Cyclone flyers and business cards. Please help us pass them out on shag nights, at shag events and at SOS. We will be giving out Fall Cyclone hand fans on Fun Monday during SOS. Come out to our tent and help build some excitement. We all need to help with the promotion of our event. It's going to be our best party ever, but prospective guests may not know that if you don't tell them. Spread the word!

FALL CYCLONE PARTY
An Award Winning CBMA "Best Event"
GREAT DJ's. FREE shag workshops. FREE meals & munchies. FREE adult beverages. HUGE floor & MUCH more. Join your friends the first full, 3-day weekend in November. www.FallCyclone.com
The Cyclone Is So Much Fun It Will... Blow You Away!
Email firstname.lastname@example.org
24hr hotline 704-892-9044
Hours: Thursday 7pm-1am, Friday 6pm-1am, Saturday 9:30am-1am (Workshops at 10, 11 and 12), Sunday 9am-1pm
For more information about this party, regular updates and much more, please visit www.GoShagging.com. If you have a question not answered on our web site, please e-mail email@example.com or call our hotline at 704-892-9044.

Host hotels:

| # | Hotel | Rate | Phone |
|---|-------|------|-------|
| 1 | Fairfield Inn by Marriott | $89 | 704-663-6100 |
| 2 | Holiday Inn Express & Suites | $87 | 704-662-6900 |
| 3 | Wingate by Wyndham | $90 | 704-664-4900 |
| 4 | Hampton Inn & Suites | $102 | 704-660-7700 |
| 5 | Quality Inn | $75 | 704-664-6556 |

NOTE 1: You must mention the Fall Cyclone shag party to get these rates.
NOTE 2: As with other "special events", any reservation canceled less than 14 days in advance will result in a one-night charge.

PLEASE do not drink & drive! Our shuttle buses will be making non-stop trips to and from the rink Friday & Saturday.

PLEASE PRINT CLEARLY. Tickets include a one-year Associate Membership in Twister's Shag Club. See our web site or ask for more details. LIMITED TICKET SALES! Order EARLY for a chance to win a private lesson with a workshop instructor or another great prize! Tickets are only $70 each if postmarked by 09/01, then $75. (One-Day Tickets $45)

Name(s): ____________________________________________________________
Address: ____________________________________________________________
City/State/Zip: _______________________________________________________
Phone: ___________________ - ___________________ - ___________________
Print E-mail: ________________________________________________________

Please double-check your email address, if listed above. We may send you a notification after your order is filled. Include a self-addressed, stamped envelope or $1. TSC, PO Box 2310, Cornelius NC 28031

Teamwork And The 2014 Fall Cyclone
By Peggy Cavin

What does teamwork mean? Is it saying things like "that's not my job", or "he doesn't do as much as I do"? How about "I don't have time to help", or "I don't get paid to do that"? Or is it working together, helping each other, sharing the workload to accomplish a task and doing it well? Our club members know the answer! There is not another time during the year that teamwork is needed more than at our Fall Cyclone Party. Teamwork is part of the reason the Cyclone was voted best event for four years in a row by the Carolina Beach Music Awards. Teamwork is also the reason we, as a club and as individuals, are proud to be members of Twisters Shag Club. I've been part of this club since day one. Over the last 24 years I've seen the Fall Cyclone's attendance triple.
A lot of teamwork made that happen. When you join Twisters you commit to "work" the Fall Cyclone. I personally think the word "work" is not the right one to use. I think it should be "helping". We all enjoy helping make sure our party is the best and that our guests are happy. Ask a TSC member about our party and you'll hear how much fun they have before, during, and after while "helping".

**Before:** It's a lengthy process to make everything "perfect". It takes a lot of teamwork over a couple of days to get everything done. But everyone is excited.

**During:** There are times you may have to spend as much as two hours at your station meeting our guests, seeing the smiles, and hearing how much fun they are having. Tough? Not!

**After:** When the music stops, guests say their goodbyes and leave behind friends old and new while vowing to come back next year. Then cleanup begins. For a couple of hours it's more teamwork turning the skating rink back into a bingo parlor. The whole group is tired. But there is an air of satisfaction and an emotional high that we've "done it again"!

It's funny that over the years, even though our party has grown in number of guests, it gets easier and easier to put on simply because of teamwork. We have great records put together by our dedicated chairpeople. So anyone can come in and do a great job with any of our committees. One of our newest members recently referred to our club as "family". Another new member said that "we" were his only family. Both descriptions sound good to me. Someone recently told a prospective member that they shouldn't join Twisters because they would "have to work" the Cyclone. It shocked that person. But shouldn't that actually be the reason people want to join our club? If you are part of the effort, you are also part of the success. And then you get to enjoy the benefits of your labor all year long. Some people think that we are a big club based on what we accomplish. But we are actually one of the smaller clubs in the Association. It's teamwork that makes us seem bigger than we are. So when the doors open at the Fall Cyclone, we want to show our guests what we've done and make sure they have a great time while they are with us. That's teamwork, Fun Bunch style.

Keep up with the latest TSC news by logging on to our web site (www.GoShagging.com) and our Facebook page. We also make a monthly phone call to members, and send out weekly email notes. If you aren't getting either, please let us know at firstname.lastname@example.org or at 704-534-4151.

Fall Cyclone Memories
By Diane Millman

Some of you non-TSC members who are reading this are probably wondering if you should go to the Fall Cyclone. I say yes, yes, yes! I went for my very first time two years ago. I did not know how to shag. I only knew one person there: my date. Reluctantly, I let him talk me into going. So he bought the tickets and reserved the room. Reserved the room? We live in Statesville. Why on earth would we need a room? Bill said that we needed to experience the entire weekend, including the shuttle buses back and forth from the dance to the hotel. O.K. The weekend came and we checked into the room. And a nice room it was. We dressed, went to dinner, and then it was time to hit the shuttle buses. The ride there was very subdued. (It was very different on the way home!) We arrived at the dance and hundreds of people were there. Wow. It was overwhelming. Well, he knew lots of people and several ladies wanted him to dance. That was OK with me, as I did not know how to shag.
After several dances, old friends came to chat with Bill. The men talked and a lady chatted with me. She said she had not seen me dancing that evening. I replied that I was a Yankee and did not know how to shag. She said, in her very sweet, southern charm, "Oh my dear, all you have to do is move your feet and look pretty. And I know you can do that." That's all the encouragement I needed. Bill and I "danced" for a long time and finally he looked at me strangely. I asked what was wrong and he said I was putting too many steps into the dance! I reminded him that all I had to do was move my feet and look pretty, and I was doing that, so he should just keep dancing. We had a wonderful time. We laughed and danced all night long. The food was good, the drinks were plenty, and the music was endless. Then it was time to get back on the shuttle bus. But the night was not over yet. We had singing and dancing and joke telling and sleeping. A great time was had by all. And that was only Friday. Saturday we had lessons, a junior shagger demonstration, more food and drinks, new DJ's, and lots more dancing. Sunday we had breakfast and more dancing. If you left the weekend and your feet did not hurt, you didn't do it right. So if you're on the fence about going to the Fall Cyclone, please consider the positive side. You won't regret it. I'll see you there!

A Note From Walter Smith

Hello shag friends! By the time you read this, I hope to be back with you on Friday nights and other TSC activities. But for right now, it's been three weeks since my hip replacement and I'm convalescing at home. To be honest with you, I thought that this would be a piece of cake. I thought it would be simple…
- Go to hospital, have surgery
- Spend the day at the hospital and go home the next day
- Convalesce for a couple of weeks, at home
- Pick up my life where it left off

Sounds pretty simple, huh? Well, it turns out that I may have been a little overly optimistic! Recovery and physical therapy have been harder than I thought. I continue to exercise two to three times daily, and have increased my amount of walking to about ¾ mile a day. I've almost weaned myself off of the narcotics. (I have to get off the narcotics before they'll let me drive!) Having said all that, I'm totally wiped out at the end of the day. I know it sounds crazy, but late in the day it seems to be a struggle just to get to my bed. Oh well, I know that with continued effort and all your thoughts and prayers, I'll be "back in the saddle again" soon! I wish to express my deep appreciation for all your thoughts and prayers as well as the multitude of cards and notes I've received. I thank all of you from the bottom of my heart. Have a drink for me on Friday. I hope to see all of you in the next couple of weeks!

Editor's Note: It was good to see Walter out for a little bit on Friday, August 22nd. We hope that is a sign of things to come!

What's Happening In The Shag World

GoShagging.com and our Facebook page also have info.
Sept 2: TSC Monthly Meeting at Fat Boys.
Sept 5: SOS Kickoff Party at The Finish Line Lounge
Sept 12-21: Fall SOS at North Myrtle Beach
Oct 10, 11: Shag Attack / Hall Of Fame Inductions
Oct 31: Multi-Club Halloween & Costume Party at The Finish Line Lounge. More details to be announced soon.
Nov (6) 7-9: The Fall Cyclone. See www.FallCyclone.com.
November 15: Piedmont Shag Association's 24th Annual Shaggin' Gobbler Get-Together. $20 per person until 11/02, then $25. DJ Fast Eddie Thompson.
The meal includes deep-fried turkey, ham, potato salad, slaw, green beans, and desserts. Beer and free pour while it lasts. Contact email@example.com, 704-886-0863, or www.PSAShaggers.com for more information.

We will have some space saved for you in the next issue! Send us your memories or your impressions of any TSC event. We want to hear from you. Don't worry, you don't have to be a great writer. Many of you will return from SOS with new memories. Send in a few of them ASAP. Thanks!

Upcoming Birthdays

| Name | Date | Name | Date |
|---------------|--------|---------------|--------|
| Bill Millman | 09/04 | Dennis Pethel | 10/03 |
| Mike Rink | 09/16 | Brent Nicholas| 10/23 |
| Vickie Abernathy | 09/17 | Paula Nicholas| 10/23 |
| Kathy Thompson| 09/24 | Gordon Barnes | 10/25 |
| Alma Brown | 09/27 | Hope Wray | 10/25 |
| Betsy Beard | 09/29 | Kathy Strantz | 10/31 |

Personal Messages

Dear Bill, Happy birthday to the love of my life. Thank you for giving me shagging. All my love, Diane

Marylee, So that's the way it seems to be, I think about you constantly. Morning, noon, and nighttime too. Know why I think of you so much? I do, because I think so much of you! Happy 34th (or 42nd, but who's counting?) Love ya Mel, Ken

Good Times With Good Friends
By Scott Fletcher

The Frozen Fantasy Party was a blast. Cathy and I really enjoyed every minute of it. I believe we had nearly 80 guests. I think everyone had a good time, especially bushmaster George. I hope you are okay and didn't get hurt. By the way, the bush is fine. I don't know what got into Mike Rink, trying to trip you like that. 😊 But I wish I had that fall on video. I knew it was going to be a humdinger when Bob and Cindy Rea showed up. We had a great time. Thanks go out to everyone who helped make this a success.

New Member Spotlight

Twisters is happy to welcome four new members to our club!

Debbie and Rich Hardick have jumped in with both feet (actually, all four) and have immediately involved themselves in many activities within our TSC world. They've taken lessons, caravanned along for parties with other clubs, competed in the Frozen Fantasy contest, missed very few nights at The Finish Line, and are already asking about their assignments for the Fall Cyclone. From the looks of it, shag dancing came naturally to both of them. Maybe that's because they've danced before and are both into music. They played in a band together for 15 years. Currently living in Mooresville, Rich is retired and works as a medical records warehouse manager. Debbie is a homemaker and works in direct marketing sales. They have 6 children and 7½ grandchildren ages 1-11 (5 boys, 2 girls, and 1 on the way). Debbie enjoys scrapbooking, old movies, history, shag dancing, and bass playing. Rich claims to be "good at everything but master of none". He golfs, shag dances, fishes, and loves music. Ask him about throwing out his line in the morning on vacation and catching baby sharks. Debbie first became interested in shag dancing when she saw a video on YouTube. From there they took lessons and have been struttin' their stuff ever since at various shag events. They're looking forward to SOS, the Fall Cyclone, learning new steps, and making new friends! Debbie and Rich, we're looking forward to getting to know you better, also! Welcome to The Fun Bunch!

Diane and Bill Millman live in Statesville and were just married this past April during SOS. They honored each of us by inviting us to their ocean-side wedding ceremony and reception.
Not all of us could be at the beach at that time but, from those who could, we all heard about the perfect setting, the beautiful bride, and the dashing groom. Love, laughter, and dance at the beach... What more could you ask for? Not only is Bill a good-looking groom, but he's smart, too. He figured out a while ago that if he danced, he could get the cute girls to go out with him. Bill, is that how you snagged Diane? It appears so. When asked how she got interested in shag dancing, her answer was, "Bill". They both enjoy dancing, golfing and gardening. Bill also skis. He is a competitive snow ski racer who was ranked as high as number 3 in national competition. Impressive! They have a son with two cute boys, ages 5 and 2. And they have a daughter, very successful in sales, who has a son, age 21, who is a talented chef in Charleston. Bill, self-described as "the greatest salesman in the world" (and so modest, too), spends his days now employed part-time and working in the garden. Diane also works, and finds videos online of cool dance moves which she forwards to Bill to learn before their next dance outing. Both are looking forward to the Cyclone, which was how they heard about Twisters in the first place. They said that what helped them decide to join Twisters was that we were a fun group. Well, Diane and Bill, we think you're fun, too! Welcome to Twisters!

Recently, Ken and I decided to take a couple of days off from work and go to the mountains. I've always been a "beach girl" so I wasn't quite sure if Blowing Rock, NC would be worth three of my precious vacation days. Boy, was I pleasantly surprised! Even with some foggy weather, the place was beautiful. I loved hiking the trails and browsing through those great little shops in downtown Blowing Rock. But what could we do in the evenings? We got online and found out that the Boone Shag Club danced at the Moose Lodge just 15 minutes from our cottage. We grabbed our dancing shoes and headed out. What a nice evening we had! The members of the Boone Shag Club were so welcoming and interested in us and our club. Though I had scrounged a Cyclone flyer from under the seat of my car to take in and ask them to share, it wasn't necessary. Chip Norwood, their President, had his stack of flyers from our mailing and made sure the members all knew about our event. We shagged, line danced, and socialized for a couple of hours and then headed back to our cottage, feeling a part of something bigger than ourselves.

It was much the same when a group of us caravanned to Hickory for their club's monthly event. I have to admit that I didn't even know that they danced every month, but was eager to go when the suggestion was made the night before during one of our evenings at The Finish Line. When we arrived there were many familiar faces of people who regularly join us on Friday nights. They were very welcoming. We were able to have a Twisters table, the drinks were cheap, and the food fattening! A good time was had by all!

While on the topic of great times with fellow shaggers, I can't forget to mention Sandy Beach Shag Club's upcoming Sandkicker Megafest IV. I won't be able to attend this year because of a family commitment but, if there are still tickets available, I highly recommend you go. It's the Sunday of Labor Day weekend in Morganton, NC. Ken and I have been the last couple of years and have had fun! It's great to live where members of your shag family are never far away. You never feel like a stranger when shag dancing!
A funny thing happened on the way to the office on Monday. I found myself routed to Houston, TX. I thought that I had made it clear to the "higher ups" years ago that I break out in hives if I have to cross the Mississippi River, and that fat boys don't like 100-plus-degree weather! To my chagrin, I have yet again had my wishes ignored. My needs were disregarded. My mental state was poo-poo'd upon to the extent that even my dog called to say, "Suck it up. Doggie needs food!" So off I trudged to the far reaches of the universe and am now only part way back on Friday, as "da rats" are sending me to New England. Woe is me.

Kathy and Scott hosted a great wingding again this year and should receive many thanks and humanitarian awards for their infinite patience and kindness, and for the psychological trauma they suffered enduring our occupation of their residence. Y'all is da best!

I would also like to re-thank all those who expressed relief that our son Trey is back from his deployment and with his family. Your concern is greatly appreciated. I would also ask that you continue with your good thoughts and prayers for our service members at large, especially those deployed. Kris and Teresa Sloop's son JT is deployed right now.

Well, it's time to start grabbin' gears to continue the arduous trek towards New England. Keep your noses clean and the skirts short, and I'll see ya where the road ends and the party begins!

Frozen Fantasy Party 2014
By Peggy Cavin

Our Frozen Fantasy Party was an awesome event! We had about 80 club members, friends, and friends of friends show up and spend the gorgeous Saturday afternoon on Lake Norman. Scott and Cathy opened their lakefront home to us and were gracious hosts. People came by car and boat, and several neighbors even walked over to enjoy the afternoon with us. Students in this session of shag lessons were also invited to attend the party, giving them a firsthand look at the fun we have in Twisters. The party started at 2 PM and ran until late that night. Fat Boys catered the party with everyone's favorite hot dogs, hamburgers, grilled chicken and all the trimmings. The club provided munchies that were enjoyed all afternoon. Some club members brought their favorite appetizers for everyone to enjoy, too. Roger Holcomb played music all afternoon. There was a little dancing on the lawn in the afternoon and plenty of socializing with old and new friends.

At 7 PM we started our famous Frozen Fantasy contest. This is where members prepare their cakes, pies, frozen drinks, shooters, etc. that are spiked with their favorite adult beverage. This year we had nearly 20 entries. The entries were judged on taste, presentation and that "kick" you look for in an adult "Fantasy" food or drink. Five lucky people judged the contest. It's a job everyone wants. But it's a tough job, because all entries are winners. Lucky me, I was serving as the club photographer standing close to the judges and was passed all the leftovers. Thanks go to everyone who entered our Fantasy contest and all who came to enjoy the fun afternoon. I'd like to thank our social chairs, Hope and Mimi, and their committee for a job well done. This is definitely one Twisters party you don't want to miss. I look forward to gathering again in the summer of 2015. Maybe I'll get to be one of the judges!

Learning The Ropes (And Steps)
By Debbie Hardick

While enjoying the beautiful day at Scott and Cathy's home for the Frozen Fantasy, I had a conversation with Wilma.
She asked me, "Deb, what had you and Rich been doing before you joined TSC?" Without hesitation I told her, "Not a damn thing". She laughed at me and we talked about the transition from a social life for Rich and me living in New Jersey to a "lack of a social life" when we moved to Mooresville in 2005.

Rich and I were dance partners when we met in 1986. We country line danced (countless line dances), did the two-step, the waltz and other partner dances brought in with country music. We started learning some of those dances by throwing quarters into the jukebox for music. It became so popular that lessons became available from an instructor about an hour before the band would start to play. Yes, New Jersey had many local country bands and a lot of dancers. It was something that we did every weekend for about 19 years. During part of that time Rich and I were part of a country band; he on drums and I on bass. We would manage to slip in a dance or two when the band would take a 15-20 minute break. This was our social life and it meant the world to us. Sundays were family days. The kids would come with us and learn to dance as well. It was the best.

When we moved to Mooresville, we met many people but did not form any close friendships. We met a lot of young people. They were all great, but I was just like a mother figure and Rich was just "Pops". The people our age didn't seem to have any interest in having a social life. It appeared that the couch was the happening place for them when the weekends came around. All I can say about doing that is, "Boring"!

It took us nine long years to find shag lessons and the Twisters Shag Club. Rich and I didn't even know what shagging was. We did see something on YouTube that turned out to be a junior shag competition, and we were very impressed with what we saw. I can't tell you how much we wish we had known about shag lessons and TSC a long time ago. I feel like I've missed nine years of fun! But here we are now! We had our first lessons in May and June, and our first time at the Finish Line (right after our very first lesson). I was pleasantly surprised when the DJ called out a "Tush Push", as I had learned that line dance in NJ a long time ago. We also had our first bus trip to Winston-Salem and a night out with the Lake Hickory Shag Club. Our first Frozen Fantasy party was a blast! We enjoyed great music, good friends, wonderful food and then the contest for the best drink, shooter, dessert, etc. Now we are learning the ropes and know what we are up against for next year's Frozen Fantasy contest. I can't wait. We look forward to learning more steps with Ashley and Tobitha. We are excited about sharing a few days with everyone at SOS (another first) and then our first Fall Cyclone. We can't think of a better way to be spending our retirement years than socializing with all of you, the Twister's Shag Club Fun Bunch. It is the very best! Thank y'all for being so welcoming!

The 2014 Frozen Fantasy Party
By Hope Wray

Once again, another successful event. Of course it was all possible because of Scott and Cathy. Thank you! The food our members shared with us was great. I am sure there will be recipes shared. I also want to thank the Massengills for the boat ride to and from. It was an adventure. So glad the weather held up for this party compared to last year's. Some came already in the party spirit and others (me) are doomed to be the "DD" for our other half. Maybe one day?
As the party was winding down, I started gathering our things to get back on Nancy's boat. I was looking for Barry and could not find him. Considering his "state of mind", I thought the worst. Perhaps he had fallen into the lake? No! We found him in the front yard talking to the DJ. I am going to take Ken's advice and put a bell around Barry's neck for the next party. I think I still have a cowbell in the attic.

Attention TSC Members: Everyone who has been to this brunch says it is great. Try it! George Pappas' Victory Lanes invites you to eat, drink, play, live, laugh, bowl! Sunday Brunch and Bowl. An amazing buffet plus two games of bowling (includes shoes) for just $14.95/person ($9.95 for Brunch only); $7.95 for 12 and younger. Featuring your Sunday Brunch favorites plus a few "Specials of the Week" each week including: Scrambled Eggs, Bacon and Sausage Strata, Hashbrown Casserole, Breakfast Potatoes, Bacon, Sausage, Ham, Turkey, Chicken, Grilled Asparagus, Breakfast Sandwiches, Fresh Fruit Tart, French Toast, Turkey Stuffing, Ham, Baked Chicken, Mashed Potatoes and Gravy, Fresh Veggie Blend, Tossed Salad, Pasta Salad, Mac n' Cheese, Chicken Tenders, Build Your Own Hot Dog, French Fries, Brownies, Cheesecake. 10:00 a.m. - 2:00 p.m. Reservations suggested. Please call 704-664-2695. 125 Morlake Drive, Mooresville, NC 28117 • 704-664-2695 • www.georgepappasvictorylanes.com. Lounge Manager Willie keeps the food hot and tasty at all times!

A Fantastic Frozen Fantasy Party
By Susan Dahl

This was the best Frozen Fantasy. We need to get all the pictures together and send them to the Carefree Times. Now, I want all the recipes. Whoever made the Orgasm Chocolate Cake should have cut thicker slices for the judges. I think that was the cake with some pecans. I took a big slice home to have with my coffee after my "good-for-me" breakfast on Sunday. It was out of this world! The judges got only a sliver, so we really couldn't get the full delectable taste with the liquor. It would have won a prize. I need the recipe, and I don't usually bake cakes. I couldn't believe that after all the alcohol tastings I could still drive. I drank enough water to swish it down, I guess. It was great having all the new people at the party. They just kept coming in. Thanks again to Scott and Kathy for having us. Their yard was perfect with all the shade. A big "Thank You" to Hope and Mimi and the committee. I was supposed to help clean up, but every time I went to the kitchen, it was already done. "My bad". I did get the cake and a few other goodies to go. By the way, the Red Velvet Cake should have had some liquor in it, and should have been entered in the contest. Congrats to all the winners. We are still having great crowds at The Finish Line. Glad to see Frances and Walter there. He is doing so well. See you all at the meeting and The Finish Line, and SOS, for more fun for The Fun Bunch. Thought For Today: A person who can't lead and won't follow makes an effective roadblock.

A Few Frozen Fantasy Recipes

There were nearly twenty items in this year's Frozen Fantasy contest. Here are recipes for just a few of them. We hope to have more of them in coming issues of our newsletter.

Cucumber Dip
By Hope Wray

1 (8 oz.) pkg. cream cheese
1 medium onion, grated
1 medium cucumber, grated (unpeeled)
1 Tbsp. sugar
2 heaping Tbsp. mayonnaise
Salt to taste

Blend all ingredients with mixer.
Red Rooster Recipe
By Richard Hardick

1 1/2 quarts cranberry juice cocktail
6 oz frozen orange concentrate
2 cups vodka

Mix contents together and put in the freezer for two hours. It should be very slushy or frozen at this point. Scrape out and serve. Yum!

Rum And Coke Milkshakes
By Ken Kreamer

1 part Dark Rum
2 parts Coke
4 parts vanilla ice cream

Blend at slow speed until mixed. Garnish with whipped cream & a cherry. Serve with straw & spoon. Lie down!

Dirty Pirate Popsicles
By Nancy Massengill

2 1/2 cups Coke
1/3 cup Captain Morgan Spiced Rum
1/3 cup Kahlua

Place all ingredients in a large glass pitcher. Stir. Pour into molds. Freeze. They will freeze better with flat Coke. I really think they would be better served as an icee, as they tend to melt quickly. They are a good take on the traditional rum and coke.

DANCING MAKES YOU SMARTER

A frequently cited study found that, of the activities examined, frequent dancing was the only physical activity to offer protection against dementia.

- Reading - 35% reduced risk of dementia
- Bicycling and swimming - 0%
- Doing crossword puzzles at least four days a week - 47%
- Playing golf - 0%
- Dancing frequently - 76%

The study's suggestion: do it often. Seniors who took dance lessons 4 days a week had a measurably lower risk of dementia than those who did it only once a week.
Preskusne metode za ugotavljanje prispevka konstrukcijskih elementov k požarni odpornosti - 1. del: Vodoravne zaščitne membrane

Test methods for determining the contribution to the fire resistance of structural members - Part 1: Horizontal protective membranes

Prüfverfahren zur Bestimmung des Beitrages zum Feuerwiderstand von tragenden Bauteilen - Teil 1: Horizontal angeordnete Brandschutzbekleidungen

Méthodes d'essai pour déterminer la contribution à la résistance au feu des éléments de construction - Partie 1: Membranes de protection horizontales

This Slovenian standard is identical to: CEN/TS 13381-1:2005

ICS: 13.220.50 Fire-resistance of building materials and elements; 91.080.01 Structures of buildings in general

SIST-TS CEN/TS 13381-1:2006 en

This Technical Specification (CEN/TS) was approved by CEN on 15 November 2005 for provisional application. The period of validity of this CEN/TS is limited initially to three years. After two years the members of CEN will be requested to submit their comments, particularly on the question whether the CEN/TS can be converted into a European Standard. CEN members are required to announce the existence of this CEN/TS in the same way as for an EN and to make the CEN/TS available promptly at national level in an appropriate form. It is permissible to keep conflicting national standards in force (in parallel to the CEN/TS) until the final decision about the possible conversion of the CEN/TS into an EN is reached. CEN members are the national standards bodies of Austria, Belgium, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, Switzerland and United Kingdom.

# Contents

Foreword
1 Scope
2 Normative references
3 Terms and definitions, symbols and units
4 Test equipment
5 Test conditions
6 Test specimens
7 Installation of the test construction
8 Conditioning
9 Application of instrumentation
10 Test procedure
11 Test results
12 Test report
13 Assessment
14 Report of the assessment for calculations
15 Limits of applicability of the results of the assessment
Annex A (normative) Exposure to a semi-natural fire
Annex B (normative) Measurement of properties of horizontal protective membranes and components
Bibliography

Foreword

This Technical Specification (CEN/TS 13381-1:2005) has been prepared by Technical Committee CEN/TC 127 "Fire safety in buildings", the secretariat of which is held by BSI.
This Technical Specification has been prepared under a mandate given to CEN by the European Commission and the European Free Trade Association, and supports essential requirements of the Construction Products Directive. As there was little experience in carrying out these tests in Europe, CEN/TC 127 agreed that more experience should be built up during a pre-standardization period before agreeing text as European Standards. Consequently all other Parts are being prepared as European Prestandards. This Technical Specification is one of a series of standards for evaluating the contribution to the fire resistance of structural members by applied fire protection materials. Other Parts of this ENV are: Part 2: Vertical protective membranes, Part 3: Applied protection to concrete members, Part 4: Applied protection to steel members, Part 5: Applied protection to concrete/profiled sheet steel composite members, Part 6: Applied protection to concrete filled hollow steel columns, Part 7: Applied protection to timber members.

The fire protection capacity of the horizontal protective membrane can be nullified by the presence of combustible materials in the cavity above the membrane. The applicability of the results of the assessment is limited according to the quantity and position of such combustible materials within that cavity. The amount of combustible material permissible in the cavity should be given in national regulations. Annexes A and B are normative.

Caution The attention of all persons concerned with managing and carrying out this fire resistance test is drawn to the fact that fire testing can be hazardous and that there is a possibility that toxic and/or harmful smoke and gases can be evolved during the test. Mechanical and operational hazards can also arise during the construction of test elements or structures, their testing and the disposal of test residues. An assessment of all potential hazards and risks to health should be made and safety precautions should be identified and provided. Written safety instructions should be issued. Appropriate training should be given to relevant personnel. Laboratory personnel should ensure that they follow written safety instructions at all times. The specific health and safety instructions contained within this European Technical Specification should be followed.

WARNING: When performing this test method, laboratories should expect that significant quantities of smoke may be released. This smoke release is expected to be particularly significant where the fire test involves timber and timber-based components. Laboratories should ensure that appropriate smoke extraction facilities are provided.

According to the CEN/CENELEC Internal Regulations, the national standards organizations of the following countries are bound to announce this CEN Technical Specification: Austria, Belgium, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Slovakia, Slovenia, Spain, Sweden, Switzerland and United Kingdom.

1 Scope

This Part of this European Prestandard specifies a test method for determining the ability of a horizontal protective membrane, when used as a fire resistant barrier, to contribute to the fire resistance of horizontal structural building members.
This European Technical Specification contains the fire test, which specifies how the horizontal protective membrane, together with the structural member to be protected, is exposed to fire according to the procedures defined herein. The fire exposure, to the temperature/time curve given in EN 1363-1, is applied to the side which would be exposed in practice and from below the membrane itself. The test method makes provision, through specified optional additional procedures, for the collection of data which can be used as direct input to the calculation of fire resistance according to the processes given within EN 1992-1-2, EN 1993-1-2, EN 1994-1-2 and EN 1995-1-2. A related test method for determining the contribution to the fire protection of vertical structural members by vertical protective membranes is given in Part 2 of this ENV.

This European Technical Specification also contains the assessment, which provides information relative to the analysis of the test data and gives guidance for the interpretation of the results of the fire test, in terms of the loadbearing capacity criteria of the protected horizontal structural member. The limits of applicability of the results of the assessment arising from the fire test are defined, together with permitted direct application of the results to different structures, membranes and fittings.

This European Technical Specification applies only where there is a gap and a cavity between the horizontal protective membrane and the structural building member. Otherwise the test methods in ENV 13381-3, ENV 13381-4 or ENV 13381-8, as appropriate, apply. Tests shall be carried out without additional combustible materials in the cavity. Annex A gives details of assessing the performance of the ceiling when exposed to a semi-natural fire.

2 Normative references

The following referenced documents are indispensable for the application of this document. For dated references, only the edition cited applies. For undated references, the latest edition of the referenced document (including any amendments) applies.
EN 1363-1, Fire resistance tests — Part 1: General requirements
EN 1365-2, Fire resistance tests for loadbearing elements — Part 2: Floors and roofs
EN 1992-1-1, Eurocode 2: Design of concrete structures — Part 1-1: General rules and rules for buildings
EN 1992-1-2, Eurocode 2: Design of concrete structures — Part 1-2: General rules — Structural fire design
EN 1993-1-1, Eurocode 3: Design of steel structures — Part 1-1: General rules and rules for buildings
EN 1993-1-2, Eurocode 3: Design of steel structures — Part 1-2: General rules — Structural fire design
EN 1994-1-1, Eurocode 4: Design of composite steel and concrete structures — Part 1-1: General rules and rules for buildings
EN 1994-1-2, Eurocode 4: Design of composite steel and concrete structures — Part 1-2: General rules — Structural fire design (including Technical Corrigendum 1:1995)
EN 1995-1-1, Eurocode 5: Design of timber structures — Part 1-1: General rules and rules for buildings
EN 1995-1-2, Eurocode 5: Design of timber structures — Part 1-2: General rules — Structural fire design
ENV 13381-4, Test methods for determining the contribution to the fire resistance of structural members — Part 4: Applied protection to steel members
ENV 13381-5, Test methods for determining the contribution to the fire resistance of structural members — Part 5: Applied protection to concrete/profiled sheet steel composite members
ENV 13381-7, Test methods for determining the contribution to the fire resistance of structural members — Part 7: Applied protection to timber members
EN ISO 13943, Fire safety — Vocabulary (ISO 13943:2000)
ISO 8421-2, Fire protection — Vocabulary — Part 2: Structural fire protection

3 Terms and definitions, symbols and units

3.1 Terms and definitions

For the purposes of this European Technical Specification, the terms and definitions given in EN 1363-1, EN ISO 13943 and ISO 8421-2 and the following apply:

3.1.1 horizontal structural building member
horizontal structural element of building construction which is loadbearing and separating, and which is fabricated from concrete, steel, steel/concrete composite or timber

3.1.2 horizontal protective membrane
any horizontal membrane or ceiling lining plus any supporting framework, hangers, fixings and any insulation materials which is either suspended from or attached directly to a structural building member, or is self supporting and fixed beneath a structural building member, and which is intended to give additional fire resistance to that structural building member. The horizontal protective membrane does not form any part of any loadbearing part of the structure and can comprise multiple layers of materials.

3.1.3 separating gap
distance between the uppermost surface of the horizontal protective membrane and the lowest surface of the underside of the structural building member

3.1.4 cavity
whole void or voids between the uppermost surface of the horizontal protective membrane and the highest surface of the underside of the structural building member

3.1.5 horizontal protective membrane test specimen
full horizontal protective membrane assembly submitted for test, including typical fixing equipment and methods and typical features such as insulating materials, light fittings, ventilation ducts and access panels

3.1.6 fire protection
protection afforded to the structural building member by the horizontal protective membrane system such that the temperature on the surface of the structural building member and within the cavity is limited throughout the period of exposure to
fire

3.2 Symbols and units

| Symbol | Unit | Designation |
|--------|------|-------------|
| $L_{\text{exp}}$ | mm | Length of the structural building member, plus the horizontal protective membrane, which is exposed to the furnace. |
| $L_{\text{sup}}$ | mm | Centre to centre distance between the supports of the structural building member tested. |
| $L_{\text{spec}}$ | mm | Total length of the main beams or members of the structural building member. |
| $A_m/V$ | m$^{-1}$ | Section factor of unprotected steel beam (see ENV 13381-4). |

4 Test equipment

4.1 General The furnace and test equipment shall be as specified in EN 1363-1.

4.2 Furnace The furnace shall be designed to permit the dimensions of the test specimen to be exposed to heating to be as specified in 6.4.1 and its installation to be as described in Clause 7.

4.3 Loading equipment Loading shall be applied according to EN 1363-1. The loading system shall permit loading, of the magnitude defined in 5.3, to be uniformly applied along the length and width of the test specimen at loading points positioned as defined in 5.3. The loading equipment shall not inhibit the free movement of air above the test specimen and no part of the loading equipment, other than at the loading points, shall be closer than 60 mm to the unexposed surface of the test specimen.

5 Test conditions

5.1 General A horizontal structural building member, including any supporting construction, which carries a horizontal protective membrane, to be used as a fire resistant barrier against fire from below, is subjected to predefined loading and to the fire test defined herein. The temperature within the cavity and the surface temperature of the structural building member are measured throughout the test. Any leakage through the structural floor slab and at the sides of the structure shall be minimized. The gap between the floor slab and the furnace shall be made tight by e.g. mineral wool pads or similar in such a way that the slab can deflect vertically. It is recommended that the test is continued until the mean temperature recorded by all thermocouples within the cavity reaches the appropriate limiting temperature of the structural building members used or until any individual temperature recorded within the cavity rises to 750 °C for concrete, steel, or concrete/profiled steel composite members and 500 °C for timber structural members. The procedures given in EN 1363-1 shall be followed in the performance of this test method unless specific contrary instruction is given. Where required, the semi-natural fire test shall be performed in accordance with Annex A.

5.2 Support and restraint conditions

5.2.1 Standard conditions The test specimen shall be tested as a simply supported one way structure with two free edges and an exposed surface and span as specified in 6.4.1. It shall be installed to allow freedom for longitudinal movement and deflection using at one side rolling support(s) and at the other hinge support(s) as shown in Figure 1. The surface of the bearings shall be smooth concrete or steel plates. The width of the bearings shall be at least as wide as the beam.

5.2.2 Other support and restraint conditions Support and restraint conditions differing from the standard conditions specified in 5.2.1 shall be described in the test report and the validity of the results restricted to that tested.

5.3 Loading conditions The test specimen shall be subjected to loads determined in accordance with EN 1363-1.
The means of determination of the load shall be clearly indicated in the test report. The applied load shall be calculated such that the maximum bending moment equals 60 % of the ultimate cold condition limit state value of the design moment resistance specified in the appropriate structural Eurocodes (EN 1992-1-1, EN 1993-1-1, EN 1994-1-1 and EN 1995-1-1). The design moment resistance shall be calculated using either the actual or nominal material properties, derived according to 6.5, of the loadbearing member with a material safety factor \((\gamma_m)\) equal to 1,0. The load shall be symmetrically applied to the test specimen either along two transverse loading lines, applied at \( \frac{1}{4} L_{sup} \) and \( \frac{3}{4} L_{sup} \) approximately and separated from each other by a distance of approximately \( L_{sup}/2 \), see Figure 2, or by the use of dead weights. In both cases the loading shall produce stresses approximating to a uniformly distributed load. (An illustrative calculation of the load per loading line for this arrangement is sketched after 6.6 below.) Point loads shall be transferred to the test specimen, along the two transverse loading lines, through load distribution beams or plates, see Figures 1 and 3; the total contact area between these and the test specimen shall be as specified in EN 1363-1. Load distribution beams, for safety reasons, shall have a height to width ratio < 1. If the load distribution plates are of steel or any other high conductivity material, they shall be insulated from the surface of the test specimen by a suitable thermal insulation material. Unexposed surface thermocouples shall not be closer than 100 mm to any part of the load distribution system.

6 Test specimens

6.1 General One test specimen shall normally be required. Horizontal protective membranes suspended from the structural building member by hangers or similar fixings, or attached to the structural building member by a framework structure, would typically be:
— ceiling tiles resting on a light supporting frame,
— ceiling boards,
— metal trays,
— plastered and similar ceilings not directly applied to the underside of the structural member.
The structural building member to be used in the test shall be as given in 6.4.1 and be chosen from the standard elements described in 6.4.2 and be representative of that to be used in practice. Alternatively the actual structural building member to be used in practice may be used, however the application of the result shall be restricted to that member only. Where a horizontal protective membrane is manufactured with elements or components of variable size or may be installed by different procedures, then a unique test shall be carried out on elements or components at maximum and minimum sizes. The installation procedures for which the sponsor requires approval shall be deemed as being represented by the fire test. The horizontal protective membrane to be used in the test shall be constructed as described in 6.3 and shall be installed according to practice, by the procedures given in the installation manual or other written instruction provided by the sponsor. It shall include all thermal insulating layers or materials to be used in practice within the cavity.

6.2 Fixtures and fittings All fixtures and fittings, such as light fittings, ventilation ducts and access panels expected to be installed, should be included in the test specimen. The installation and frequency of use of these should then if possible be representative of practice.
Such fixtures and fittings shall not be installed within the test specimen at a distance of less than 250 mm from any of its edges. 6.3 Horizontal protective membranes The test specimen shall reproduce the conditions of use, including junctions between membrane and walls and edge panels, joints and jointing materials and be installed from below by the same method and procedures as given in the installation manual, or in written instructions, which shall be provided by the sponsor. It shall be fitted with all the components for hanging, expansion and abutting, plus any other fixtures which are to be defined by the sponsor, with a frequency representative of practice. For horizontal protective membranes which are suspended from the structural building member by hangers, the suspension system and the length of the hangers shall be representative of practice. The profiles bearing the various panels shall be installed against each other without any gap, unless a gap (or gaps) is required for design purposes. In this case the gap (or gaps) at the junctions of main runners shall be representative of that to be used in practice and shall be installed within the specimen and not at its perimeter. The profiles within the test specimen shall include a joint representative of joints to be used in practice in both longitudinal and transverse directions. The horizontal protective membrane shall be fixed according to normal practice on all four edges, either directly to the furnace walls or to a test frame. A test frame, where used, shall be fixed directly to the horizontal structural building member being protected, or to the furnace walls. If the construction or properties of the horizontal protective membrane are different in the longitudinal and transverse directions, the performance of the specimen may vary depending upon which components are aligned with the longitudinal axis. If known from experience, the specimen shall be installed so as to represent the most onerous condition by arranging the more critical components parallel to the longitudinal axis. If the more onerous condition cannot be identified, two separate tests shall be carried out with the components arranged both parallel and perpendicular to the longitudinal axis. 6.4 Structural building members supporting horizontal protective membranes 6.4.1 General principles The dimensions of the structural building member supporting the horizontal protective membrane and which is exposed to the furnace shall be: a) exposed length ($L_{exp}$) : at least 4 000 mm b) span ($L_{sup}$) : $L_{exp}$ plus up to 200 mm maximum at each end c) length ($L_{spec}$) : $L_{exp}$ plus up to 350 mm at each end d) exposed width : at least 3 000 mm Test specimens of exposed width less than 3 000 mm may be tested according to this method. However, application of the result shall be restricted to constructions of equal or less width than that tested. The gap between the structural building member and the longitudinal furnace walls or simulated furnace walls shall not exceed 30 mm and shall be sealed with compressed mineral fibres or ceramic fibres of adequate fire performance (or comparable materials of equivalent performance) to allow both deflection of the member under heating conditions and prevention of leakage of hot gases during the test. 6.4.2 Standard horizontal structural building members The following structural building members are considered to be standard for this test method. 
a) Reinforced aerated concrete slabs on steel beams The structural member shall comprise hot rolled steel 'I' section beams of profiles with section factor $A_m/V$ equal to $(275 \pm 25) \text{ m}^{-1}$ (for three-sided exposure) and with a section depth of typically $(160 \pm 5) \text{ mm}$. The grade of steel used shall be any structural grade (S designation) according to the specification given in ENV 13381-4. Engineering grades (E designation) shall not be used. These beams shall be spaced at $(700 \pm 100) \text{ mm}$ centres resting on the bearing surface of the furnace test frame. The beams may be assembled incorporating cross members welded at the ends. The centre of either of the outer steel beams shall not be placed less than 275 mm from the furnace wall in order that the edge of the horizontal protective membrane rests only on the peripheral support. The centre of either of the outer steel beams shall not be placed more than 450 mm from the furnace wall. The reinforced aerated concrete slabs shall be of density not more than 650 kg/m$^3$, minimum thickness 100 mm and maximum width 650 mm. They shall be placed transversely on the profiles of the steel beams and separated from each other by gaps of 5 mm to 10 mm which shall be sealed with ceramic fibre or equivalent material and silicone flexible sealant. New, unused, reinforced aerated concrete slabs shall be used for each test. The aerated concrete slabs shall rest on the steel beam framework without mechanical connection so that there is no gain in mechanical strength of the structure with increasing deformation.

b) Reinforced dense aggregate concrete slabs on steel beams All the principles given in a) for reinforced aerated concrete slabs on steel beams apply except that the concrete slabs shall comprise dense aggregate concrete of density $(2 350 \pm 150) \text{ kg/m}^3$ and shall have a thickness of between 60 mm and 120 mm.

c) Timber floors (or roofs) The standard structural building member from which a horizontal membrane is suspended for the protection of a timber structural building member shall comprise equally spaced softwood joists, of nominal density $(450 \pm 75) \text{ kg/m}^3$ and cross-section $(220 \pm 10) \text{ mm} \times (75 \pm 5) \text{ mm}$ at 530 mm to 600 mm centres, see Figure 4. The number of joists (preferably six) and their spacing shall be appropriate to the exposed width, which shall be from 3 000 mm to 3 300 mm. The joists shall be connected by cross members of the same material and cross-section, located in the area of the furnace support. They shall also be connected by cross members of the same material but with cross-section $(175 \pm 10) \text{ mm} \times (40 \pm 5) \text{ mm}$, located around mid span, see Figure 4. The wooden floor shall be made from particle board sheets of thickness $(21 \pm 3) \text{ mm}$ and density $(600 \pm 50) \text{ kg/m}^3$, laid perpendicular to the joists, with tongue and groove joints and nailed down.

d) Concrete/Profiled steel sheet composite slabs The standard concrete/profiled steel sheet composite test slab shall be prepared according to the specification given in ENV 13381-5. The grade of steel and the concrete type, composition and strength shall be as specified in ENV 13381-5. The standard concrete/profiled steel sheet composite slab shall be fixed to and supported on two equally spaced steel beams with a representative span as specified in 6.4.1.
Hangers may be provided on the unexposed side in order to avoid collapse of the structural member under test during the test.

6.5 Properties of test materials Where appropriate, the actual properties of materials used in the structural building member tested (e.g. concrete strength) shall be determined according to EN 1363-1 or using an appropriate product test standard. Otherwise nominal values, e.g. for steel or wood based materials, may be used. The dimensions of the structural building member used shall be measured. The material composition of the horizontal protective membrane shall be specified by the sponsor. For confidentiality reasons the sponsor may not wish detailed formulation or composition details to be reported in the test report. Such data shall, however, be provided and maintained in confidence in laboratory files. The actual thickness, density and moisture content of the components of the horizontal protective membrane shall be measured and recorded just prior to the time of test, on the components themselves or on special test samples taken from the test component. These shall be conditioned as defined in Clause 8. The procedures appropriate to different types of material are given in Annex B. The thickness of sprayed or coated, passive or reactive type fire protection materials when used as component parts of horizontal protective membranes shall be measured at locations on the horizontal protective membrane corresponding to each of the thermocouple locations $T_1$ to $T_9$ ($T_{12}$), defined in 9.3.2 and Figure 5, according to Annex B. The thickness shall not deviate by more than 20% of the mean value over the whole of its surface. The mean value shall be used in the assessment of the results and the limits of applicability of the assessment. If it deviates by more than 20%, the maximum thickness recorded shall be used in the assessment. The density of the horizontal protective membrane and its components, at minimum and maximum thickness, shall be measured according to Annex B and recorded. The density should not deviate by more than 15% of the mean value. The mean value of density shall be used in the assessment of the results and the limits of applicability of the assessment. If it deviates by more than 15%, the maximum density recorded shall be used in the assessment.

6.6 Verification of the test specimen An examination and verification of the test specimen for conformity to specification shall be carried out as described in EN 1363-1. The properties of the materials used in the preparation of the test specimen shall be measured using representative samples, where necessary, as described in 6.5 using the methods given in Annex B. The sponsor shall verify materials contained within the test specimen which are applied by spray or coating for compliance to design composition and specification using tests appropriate to the material under test.
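NOTE (editorial illustration, not part of this Technical Specification): the load determination in 5.3 reduces to simple statics once the design moment resistance is known. For a simply supported span \(L_{sup}\) carrying two equal transverse line loads \(P\) at approximately \( \frac{1}{4} L_{sup} \) and \( \frac{3}{4} L_{sup} \), the bending moment between the loading lines is constant and equal to \(P \cdot L_{sup}/4\), so the force per loading line producing a maximum moment of \(0{,}6 \cdot M_{Rd}\) is \(P = 4 \cdot (0{,}6 \cdot M_{Rd})/L_{sup}\). The minimal Python sketch below illustrates that arithmetic only; the function name and the worked numbers are our own assumptions, and \(M_{Rd}\) must itself be derived from the appropriate Eurocode with \(\gamma_m = 1{,}0\).

```python
def loading_line_force(m_rd_knm: float, l_sup_m: float,
                       utilisation: float = 0.60) -> float:
    """Force per transverse loading line, in kN, such that the maximum
    bending moment equals `utilisation` * M_Rd (cf. 5.3).

    For a simply supported span L with equal line loads P at L/4 and
    3L/4, each support reaction is P and the moment between the loading
    lines is constant: M_max = P * L / 4, hence P = 4 * M_target / L.
    """
    m_target = utilisation * m_rd_knm   # 60 % of M_Rd by default
    return 4.0 * m_target / l_sup_m


# Hypothetical worked example: M_Rd = 120 kNm, span L_sup = 4.2 m.
p = loading_line_force(120.0, 4.2)
print(f"P per loading line = {p:.1f} kN; total applied load = {2 * p:.1f} kN")
# -> P per loading line = 68.6 kN; total applied load = 137.1 kN
```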
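NOTE (editorial illustration, not part of this Technical Specification): the selection rule in 6.5 for the thickness and density values used in the assessment is essentially computational: use the mean of the readings unless any reading deviates from that mean by more than the permitted fraction (20 % for thickness, 15 % for density), in which case the maximum recorded value is used instead. A minimal sketch of that selection logic follows, assuming readings are supplied as plain lists; the function name and sample values are hypothetical.

```python
def assessment_value(measurements: list[float], max_deviation: float) -> float:
    """Value to use in the assessment per Clause 6.5.

    Returns the mean of the measurements if every reading lies within
    `max_deviation` (a fraction, e.g. 0.20 for thickness, 0.15 for
    density) of that mean; otherwise returns the maximum reading.
    """
    mean = sum(measurements) / len(measurements)
    if all(abs(m - mean) <= max_deviation * mean for m in measurements):
        return mean
    return max(measurements)


# Illustrative thickness readings (mm) at locations T1..T9:
thickness = [14.8, 15.1, 15.3, 14.9, 15.6, 15.2, 15.0, 14.7, 15.4]
print(assessment_value(thickness, 0.20))   # all within 20 % -> mean (~15.1 mm)

density = [280.0, 310.0, 405.0]            # kg/m3, one outlying reading
print(assessment_value(density, 0.15))     # deviation > 15 % -> max (405.0)
```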
I. Call to Order
The meeting was called to order at 4:30 P.M. by Chair Spencer. Attendance was called and a quorum of four was present.

II. Attendance
Advisory Committee: Patricia Spencer – Chair; Paula Rogan – Vice Chair; Florence "Dusty" Holmes; Ron Jefferson; Vacancy
Staff: Michelle Arnold – PTNE Director (Excused); Dan Schumacher – Project Manager; Rosio Garcia – Administrative Assistant (Excused)
Landscape: Mike McGee – Landscape Architect, McGee & Assoc.; Mike Patterson – Grounds Maintenance, Mainscape
Other: Wendy Warren – Transcription, Premier; Dawn Breheny Barnard – Resident

III. Pledge of Allegiance
The Pledge of Allegiance was recited.

IV. Approval of Agenda
Chair Spencer moved to approve the Agenda of the Golden Gate Beautification M.S.T.U. Second by Ms. Holmes. Carried unanimously 4 - 0.

V. Approval of Minutes May 17, 2022
Mr. Jefferson moved to approve the minutes of the May 17, 2022, meeting as presented. Second by Chair Spencer. Carried unanimously 4 - 0.

VI. Landscape Maintenance Report – Mainscape Landscaping Company
Mr. Patterson reported:
- Routine landscape maintenance is being undertaken on schedule.
- The replacement plant proposal was approved, and material ordered. Installation will be scheduled on receipt of plants, currently anticipated week ending August 27, 2022.
- Dead Juniper on Sunshine Blvd., Median #2, was cut out and bed cleaned.
- Bougainvillea shrubs continue to decline.
*Mr. Schumacher will apply Wet & Forget product to pavers in a single median as a test for effectiveness at the end of the rainy season.*

VII. Landscape Architect's Report – McGee & Associates
Mr. McGee summarized the "Golden Gate Landscape Observation Report FY22" dated August 2, 2022.
**General** Yellow African Iris are not doing well. Recommend Mainscape's Agronomy Group evaluate the plants' health and implement corrective treatment or fertilization program.
**Tropicana Boulevard**
- Median #1: Prune dwarf Jasmine away from plants and off tree trunks.
- Median #3:
  - Remove volunteer Blueberry Flax.
  - Replace five (5) yellow African Iris.
- Median #5:
  - Replace missing or damaged White African Iris.
  - Remove volunteer Crape Myrtles from Bougainvillea bed.
**Sunshine Boulevard**
- Median #1:
  - Replace dead Sabal palm.
  - Replace, under warranty, five (5) yellow African Iris.
- Median #2:
  - North end – Prune out dead and brown foliage in Juniper Parsonii. *Contractor treated approximately thirty-nine (39) – forty (40) Juniper for Blight disease.*
  - Address 2248 – Replace eight (8) declining yellow African Iris.
  - Remove center metal support poles from Tabebuia trees. Reposition and loosen staking straps so as not to girdle the tree trunks.
- Median #6:
  - Address 1770 - Replace twelve (12) yellow African Iris and remove volunteer Blueberry Flax sprouts.
  - Address 1720 - Replace twelve (12) yellow African Iris.
**18th Avenue S.W. Median**
- Mow and/or spray weeds with herbicide.
**Coronado Parkway**
- All locations:
  - Recommend developing recovery fertilization plan for all Paurotis palm clumps per UF/IFAS Extension recommendations for deficiencies as specified in the summary report.
• Median #3
  o Remove and schedule replacement of southernmost Alexander Palms.
• Median #4:
  o Replace Yellow Flag Iris, installed incorrectly, with Yellow African Iris.
• Median #7
  o Remove Brazilian Pepper from Firebush.
• Median #10
  o Heavily prune back all Muhly grass.
  o Address 5237 - Fill in missing Perennial Peanut plants; quantity required fifty (50).
Hunter Blvd.
• Median #2:
  o Recommend removal of Big Rose Crown of Thorns and continue planting Perennial Peanut. Estimated number of plants needed two hundred twenty-five (225).
• Median #3:
  o Address 2330 – Replant twenty-five (25) Bougainvillea 'Silhouette' shrubs.
  o Address 2337 & 2340 - Replace twenty-three (23) Ms. Alice Bougainvillea shrubs; install twenty (20) additional shrubs on south end of bed.
• Median #4
  o Remove Brazilian Pepper from Saw Palmetto plants. Keep clear of the pump station.
  o Cut back and remove Palmetto stems and foliage within three (3) feet of the pump station box.
• Median #6
  o Remove wood debris from median.
  o Address 2018: Replace missing Jatropha tree, six (6) foot height.
  o Address 2007: Replace thirty (30) missing or declining Society Garlic plants.
• Median #7
  o Address 1980: Replace missing Alexander palm.
  o Address 5261: Remove volunteer plants in Saw Palmetto plants.
  o Address 5261: Remove volunteer Schefflera plant growing on Sabal palm.
Mr. Schumacher will submit a list of plant items for replacement to Mainscape Landscaping and request a quote.

Water Usage
June and July 2022 water use per WeatherTrak controller estimates:
• Tropicana Boulevard: June – 300,499 gallons; July – 334,790 gallons.
• Sunshine Boulevard: June – 412,097 gallons; July – 401,237 gallons. *
• Coronado Pkwy & Hunter Boulevard: June – 308,222 gallons; July – 363,165 gallons. *
* The Measured Usage History Report recorded a six (6) day gap. A median pump station component, the variable frequency drive keypad, failed and a new control pad was installed by Naples Electric Motor Works.

VIII. Project Manager's Report
A. Budget Report Golden Gate MSTU Fund Budget 153 dated August 16, 2022
• The FY-22 Millage rate remains constant at 0.5000 mills.
• Current Ad Valorem Tax, Line 1, is $533,600.00; an increase of 8.10% over FY-21.
• Transfers and Contributions, Line 13, are $1,136,560.68; a carry-over of unexpended FY-21 funds.
• Total Revenue, Line 14, is $1,675,060.68, including investment interest, transfers, and contributions.
• Purchase Orders: (Contractors)
➢ Hart's Electrical – Lighting Maintenance & Repair.
➢ Howard Fertilizer – Landscape Fertilizer.
➢ HydroPoint Irrigation – Cloud Software Renewal.
➢ Mainscape Landscaping -
  o Incidental is for landscape refurbishment and miscellaneous.
  o Grounds Maintenance includes irrigation repairs.
➢ McGee & Associates – Landscape Architecture.
➢ Naples Electric Motor Works – Pump Station Repair.
➢ Premier Staffing – Transcription Services.
➢ SiteOne Landscape Supply – Irrigation Parts & Pumps.
➢ Varian Construction – Bus Shelter Repainting.
• Red indicates the Purchase Order is closed and the money expended.
• Operating Expense, Line 31, is budgeted at $422,060.68; with current Commitments of $118,155.99, Expenditures of $199,764.95, and a Budget Remainder (unspent operating funds) of $104,139.74.
• Capital Outlay, Line 33, budgeted at $1,181,200.00, is available to fund planned long term projects, consistent with the M.S.T.U. ordinance and upon a motion from the Advisory Committee.
• Transfer to Fund 111, Line 34, in the amount of $56,000.00, is for MSTU Staff salaries and accrued County overhead related to M.S.T.U. operations.
• Transfer to the Property Appraiser, Line 35, in the amount of $4,500.00, is for computation of MSTU Ad Valorem data for the tax rolls.
• Transfer to the Tax Collector, Line 36, in the amount of $11,300.00, is for collection of M.S.T.U. millage as part of the tax bill, currently 0.5 mills.
• Total Budget, Line 38, lists FY-22 MSTU budgeted funds at $1,675,060.68; with tabulated Commitments of $118,155.99, Expenditures of $270,339.47, and a Budget Remainder (total unspent funds) of $1,286,565.22. The $1,675,060.68 amount does not change during the fiscal year. *Unexpended FY-22 funds will be carried over to the FY-23 budget and recorded as a line item under Transfers & Contributions.*

B. Committee Application(s)
Mr. Schumacher reported there are two (2) advisory seats available on the Golden Gate Beautification M.S.T.U. Advisory Committee. The four (4) year terms commence October 2022. Ron Jefferson is eligible for reappointment to the Advisory Committee for a four (4) year term. His application has been received. Oscar Marimon applied for consideration to fill the Advisory Committee vacancy. *Mr. Schumacher will confirm his residency is within the M.S.T.U. property boundaries and invite him to attend the September 2022 meeting.* Candidates' applications will be considered by the Committee for recommendation to the Board of County Commissioners at the September meeting and placed on the BCC's October 2022 Consent Agenda for approval.

C. Replacement Bridge – Golden Gate Pkwy over the Santa Barbara Canal
The plans include replacing the three existing bridges with a single bridge including shoulders and sidewalks on each side. Improvements are anticipated to enhance mobility and maintain connectivity for vehicles, bicycles, and pedestrians.
Railing Design
- The upgraded Sunshine Motif Infill Panel for the railing, funded by the M.S.T.U., has been ordered at a cost of approximately $45,000.00.
- Funds for the railing upgrade will be expensed from the M.S.T.U. Capital Outlay budget after installation, in the later stages of bridge construction.
Removing the existing bridge and installing the new bridge is estimated to take one (1) year and will commence in October 2022. One lane of bridge traffic will be maintained throughout construction, with Coronado Pkwy and Sunshine Blvd available as alternate routes. Mr. Schumacher notified Mainscape Landscaping to prune hedges away from the curbs on Coronado Parkway in consideration of increased traffic during construction.

D. Canal Bridge Location – Golden Gate Pkwy & CR-951
The Board of County Commissioners (BCC) approved the location of the Canal Bridge at the intersection of 27th Avenue SW and Collier Boulevard-951. A graphic was distributed showing the bridge location. Mr. Schumacher will invite Lorraine Lantz, Principal Planner, to the September meeting to update Committee members on activities in Golden Gate.

E. Secondary Bridge Locations – Repainting
A Road Maintenance Division project to water blast and seal nine (9) secondary bridge locations is under review. A street map with secondary bridge locations highlighted in purple was distributed.
- Mr. Schumacher contacted Mike Stone, Sr. Field Inspector for the project, to share the M.S.T.U.'s interest in repainting the bridges.
- As Collier County does not have a painting contract on file, Daryll Richard, Landscape Architect, Florida Department of Transportation (FDOT), was contacted for guidance on the painting standards for bridges.
- Mr. Schumacher noted the M.S.T.U. would consider making a capital contribution to the project, with the Road Maintenance Div. responsible for ongoing maintenance.

IX. Old Business
None

X. New Business
None

XI.
Public and Board Comments Dawn Barnard commented on the East Naples Civic Association’s experience with banner flag restrictions on light poles. Staff suggested contacting Florida Power & Light (FPL) to determine if a removable “clamp on” mechanism would be acceptable to display flags during holiday periods. XII. Adjournment There being no further business to come before the Committee, the meeting was adjourned by the Chair at 5:22 P.M. GOLDEN GATE MSTU ADVISORY COMMITTEE Patricia Spencer, Chair The Minutes were approved by the Committee on 9-13-2022 as presented √ or as amended _______. NEXT MEETING: SEPTEMBER 20, 2022 – 4:30 PM GOLDEN GATE COMMUNITY CENTER 4701 GOLDEN GATE PARKWAY NAPLES, FL 34116
THE ASSUREDS
A Quarterly Publication of The Singapore Insurance Employees' Union
International Women's Day 8 March 2014
The Assureds is published by Singapore Insurance Employees' Union MCI (P) 123/04/2013, 190 Middle Road #10-07 Fortune Centre, Singapore 188979. Tel: 6337 0273 Fax: 6336 2008 www.sieu.org.sg

Education & Publicity Committee: Luat Hee (Editor); Willie Tan (Chairman); Priscilla Tan (Vice Chairwoman); Athar Afzal (Secretary); Jackie Choy (Asst. Secretary); Members: Ng Siew Ling, Rosalina Ya'cuf, Alex Chua, Nancy Lee, Alice Low, Valerie Ho, Muiz Azmi Bin Abdul Aziz, Sunila Raj

In this issue: International Women's Day Celebrations 2014 · UNI-APRO P&MS Conference · Leadership Development Initiatives Gather Pace · SIEU Seminar 2013 · Cooking Demo & Hi-Tea for SIEU · Quotes · Winners Only

Aspiring for a career breakthrough? Over 1,000 professional courses with up to $250* in Union Training Assistance Programme (UTAP) funding. *Terms and conditions apply.

Illustration of how UTAP makes your training more affordable:

| | M9A – Life Insurance and Investment-linked Policies II | Effective Presentation Skills | Early Years Development Framework Training |
|---|---|---|---|
| Course / Exam Fee | $100 | $600 | $350 |
| Government Subsidy | – | – | $315 |
| Co-pay from UTAP | $50 | $250 | $17.50 |
| Co-pay by Member | $50 | $350 | $17.50 |

Visit http://skillsupgrade.ntuc.org.sg to find out more about UTAP. For more information, please contact NTUC Member Services Centre, B1-01 One Marina Boulevard, 1 Marina Boulevard, Singapore 018989. Operating Hours: Mon-Fri: 9:00am to 6:30pm; Sat: 9:00am to 2:30pm; Closed on Sundays and Public Holidays. Hotline: 65162138008 Email: firstname.lastname@example.org

PROTECTION · PROGRESSION · PLACEMENT · PRIVILEGES — NTUC Membership, a Labour of Love for U. Visit ntucmembership.sg for more details.

INTERNATIONAL WOMEN'S DAY CELEBRATIONS 2014 — NTUC Downtown East at D'Marquee
Guest-of-Honour: Ms Diana Chia, President of NTUC
Special Guest: Mr Lim Swee Say, Secretary-General of NTUC

The Singapore delegation of 32 members from the 9 unions attended the Professionals & Managers Staff Conference in Kuala Lumpur, Malaysia, from 2 to 3 December 2013. The conference was attended by 113 unionists from the Asia-Pacific region. UNI-Singapore Liaison Council President John De Payva, as the President of the UNI-APRO P&MS Committee, tabled a proposal to rename "Professionals & Managers Staff" to "Professionals & Managers Group". The proposal was endorsed by the conference. He shared with delegates on the demographic changes of Singapore's workforce and the Singapore unions' outreach initiatives to P&MS. He also discussed the changes in our Employment Act and Industrial Relations Act that have been formulated in response to such workforce profile changes. At the conference, Dr Pavinder Kler from Griffith University in Australia presented his report. He argued that unions still have a relevant role to play in improving the workplace security of P&M talent, both in their home countries as well as in overseas bases where they are increasingly being located. This would require union action to be both locally based and internationally focused, so as to ensure a set of shared standards that can be applied globally. The report opined that organized labour needs to make progress on the following areas in order to integrate P&M talent successfully with the unions:
1. Build trust and empathy with P&M talent
2. Build cordial relationships with employers, governments and educational centres
3. Ensure unions have a greater say in setting shared international standards by encouraging a bottom-up approach in order to complement the current top-down approach preferred by governments
4. React quickly to changing events so as to be always at the forefront of policy-making
5. Ensure that unions which may not seem to share many commonalities with each other are active members of union umbrella bodies such as the UNI.

At the conference, the following unionists were elected into the UNI-APRO P&M Committee:
**East Asia:** Member: Akira Yoda, Director NWJ; Substitute Member: To Be Advised
**South East Asia:** Member: John De Payva, SG SMMWU; Substitute Member: A female representative from UNI SLC; Member: Ng Choo Seng, GS ABOM; Substitute Member: Ng Peng Ho, GS UEC
**South Asia:** Member: Karthik Shekar, GS UNITES; Substitute Member: To Be Advised
One Reserved Seat for Women, to be nominated by the UNI-APRO Women's Committee
**Ex-Officio Members:** Christopher Ng, Regional Secretary UNI-APRO; Pav Akhtar, Director of UNI P&M; Jayasri Priyalal, Director of UNI-APRO P&M

Leadership Development Initiatives Gather Pace
By Bro K S Thomas

SIEU's 3F initiatives notched up another milestone on 19th and 21st February when 31 budding leaders of the Union attended a 2-day session on Employment Laws. It was conducted by Mr Loh Oun Hean, who had spent 30 years in the Ministry of Manpower, Maybank Singapore, Singapore Airlines and Deloitte Southeast Asia. His extensive experience covered human resource, industrial relations, corporate planning, consumer banking and banking operations. The focus was on issues pertaining to prevailing Employment Laws. Particular attention was paid to the latest amendments to the Employment Act as well as the impending changes to the Industrial Relations Act. Of great interest were changes that will have a bearing on PMEs, who constitute a growing segment of the Insurance industry.

SIEU's 3F initiatives are part of a systematic 3A plan for budding leaders to:
1. APPRECIATE the past; strengths and challenges; relationships (both bipartite and tripartite); etc.;
2. ANALYZE for themselves all the intricacies of these elements and formulate an action plan (outcome / output);
3. ADVANCE towards the future with competence, confidence and cohesiveness.

There will be 6 more days of training this year on these topics:
16 May (Friday) & 23 May (Friday) – "Wage negotiations"
15 Aug (Friday) & 22 Aug (Friday) – "Grievance handling, discipline, termination and dismissal"
14 Nov (Friday) & 21 Nov (Friday) – "Counselling skills for union leaders"

The next step in preparing for SIEU's future will be a Membership Workshop to plan for growth strategies. SIEU will be turning 60 in 2015. SIEU has come this far because it has been endowed with visionary and dedicated leaders at all levels of the Union. We are confident of overcoming challenges on the horizon with gumption and fortitude. "FAILURE IS NOT AN OPTION AND NEVER HAS BEEN FOR SIEU".

SIEU Seminar 2013

The annual SIEU seminar is not only a get-together for Branch Chairpersons from the various Insurance Companies and members of the Executive Council, but it is also an opportunity for their families and friends to be acquainted with the people whom their spouse, parent or children work closely with throughout the year.
For year 2013, SIEU broke away from tradition and held its annual seminar on the seas, with a Royal Caribbean cruise from 2nd to 6th December 2013. During the seminar, participants had lively and interactive discussions. They also re-visited and re-affirmed SIEU's Mission and Vision as well as addressed future challenges which SIEU's leaders might face. No seminar is complete without some R&R. Other than the exciting cruise and tours onboard the Mariner of The Seas®, there were also ports of call at Penang and Phuket. We were fortunate that the weather was fine throughout. We thank all Officials, Branch Chairpersons, Delegates and those whose participation and contribution made the 2013 seminar a success. We look forward to another exciting and fruitful seminar in 2014!
From all of us, Seminar Committee 2013

The 2015 Annual General Meeting was held on 18th July at The Ritz-Carlton, Millenia Singapore. The meeting was well attended by members and guests. The meeting was presided over by the President, Mr. Chong Seng Heng. The meeting was followed by a dinner cruise on the Singapore River.

1) As part of SIEU's 3F initiatives, name the systematic 3A plan for budding leaders?
(a) ____________________________
(b) ____________________________
(c) ____________________________
2) UTAP makes your training more ______________.
3) The SIEU seminar 2013 was held in the month of ____________.
4) The UNI-APRO P&MS conference was attended by 113 unionists from the ______________ region.
5) President John De Payva shared with delegates on the _________ changes of Singapore's workforce.
6) The Professionals & Managers Staff Conference was held in ______________.

Rules:
• The words appear straight across, backwards, up & down and diagonally.
• WordSearch puzzle must be completed in order to be eligible for the contest.
• The contest is open to all members except Officials and Executive Council Members (Branch Chairman) of the Union.
• Only ONE (1) entry per person. Any attempt or suspected attempt to enter more than once per person, shall be deemed as tampering and will void all of your entries.
• The first 20 correct entries drawn by the Executive Council will each receive a $30 FairPrice Gift Voucher from the Union.
• Closing date for contest - 15 May 2014.

Congratulations! Winners for 101st Issue 2013
Leow Lee Lee Sxxx7862D AIG Asia Pacific Insurance
Ting Siew Ling Sxxx7876B NTUC Income
Fang Mei Yun Gxxx5498P Liberty Insurance
Hasna Buang Sxxx512TZ UOI
Loke Kit Yooi Janet Sxxx5992I UOI
Sharifah Yasmin Arfah Sxxx5921F Tokio Marine Insurance
Yap Lucy Sxxx6500H Manulife
Tan Chin Hock Vincent Sxxx6951A NTUC Income
Yuen Kam Yee Sxxx3246H NTUC Income
Chua Siaw Ling Vivien Sxxx9729C Liberty Insurance
Tan Kwee Eng Emily Sxxx8730A NTUC Income
Chew Gee Soon Sxxx4045C First Capital Insurance
Koh Shu Chen Sxxx6556D AIG Asia Pacific Insurance
Ho Seng Lian Sxxx2615H Aviva Ltd
Hwang Gek Hong Sxxx7478E AIG Asia Pacific Insurance
Koh Siew Eng Sxxx1768D Liberty Insurance
Yeo Puay Wah Jennifer Sxxx8705I Liberty Insurance
Teng Phek Tin Sxxx0909E AIG Asia Pacific Insurance
Cheung Shuk Chee Amy Sxxx3910E NTUC Income
Ong Huay Chin Sandy Sxxx0530F NTUC Income
NTUC FairPrice Gift Vouchers worth $30 each

Answers for "SIEU & You" Quiz Contest 101st Issue 2013: 1. Devan Nair 2. Greeting 3. Bangkok 4. Unionists 5. Vietnam 6. Brainstorm 7. ODC 8. Teamwork 9. Smile 10. Nose 11. Diary 12. Young 13. Towel 14. Stamps 15. Soak

Please tell us of any changes to your contact details.
You may return this slip to 190 Middle Road, #10-07, Fortune Centre, Singapore 188979 or email us at email@example.com Name: Mr/Mrs/Miss/Mdm _________________________________________ NRIC:_____________________ Company: _________________________________________ Office Tel: ___________________ Mobile No:_____________________ New Address: _________________________________________ Postal Code: ___________________ Email:_______________________________________________________ Send in your child’s photo (must be below 5 years old) or your recent wedding photo and if it gets printed, you will walk away with a $30 NTUC FairPrice Voucher! Congratulations... Julie Lim Ru Yi NTUC Income Playtime Combo $17 for 2 hours during peak period in May 2014 Birthday Parties Book Exciting Themed Birthdays & Celebratory Events! Field Trips Join Us for Excursions & Learning Journeys! PLEASE STATE THE PROMO CODE SIEU14 for membership discount at just $10 (U.P. $18) Not applicable with other promotions. Explore the World the Fun Way! Mega Play Explore Our Multi-Level Obstacle Play System! Fun-filled Activities Sign Up for Special Workshops in Crafting, Baking & Much More! Kayla Enzo www.explorerkid.com Another leisure and lifestyle choice by NTUC Club
The Comedy of Doctor Foster A. Colin Wright paperbytes Other titles in this series Already published: Louis Fréchette. *On the Threshold* (tr. Bernard Kelly) Cary Fagan. *What I Learned in Florida* Michael Bryson. *Light and Silver* Forthcoming: Tobias Chapman. *Plymouth* Myles Chilton. *The Local Brew* Mary Frances Coady. *The Poor* Dave Hazzan. *The Rise and Fall of Dennis Mitchell* Lisa Lebedovich. *Stories from a Photograph* Catherine Leggett. *The 401* Robert Lindsey. *Another Opportunity for Personal Development* Bill MacDonald. *A Summer at La Rochelle* Steve Owad. *Going Places* Novid Parsi. *How His Little Girl Died* Uday Prakash. *Duty Officer: Duddo Tiwari* (tr. R. Hueckstedt) The Comedy of Doctor Foster A. Colin Wright paperbytes Copyright © 1999 by A. Colin Wright All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means — electronic or mechanical, including photocopy, recording or any information storage and retrieval system — without written permission from the Publisher, except by a reviewer who wishes to quote brief passages for inclusion in a review. Design: Perkolator {Kommunikation}. Typeset in Minion. Cover photo: Bernard Kelly Published by paperbytes an imprint of paperplates books 19 Kenwood Avenue, Toronto, Ontario, Canada M6C 2R8 email@example.com www.perkolator.com The Comedy of Doctor Foster Doctor Foster went to Gloucester In a shower of rain, He fell in a puddle up to his middle And never went there again. Those who maintained that Dr. Foster’s demise was the result of a pact he’d made with the devil were, quite frankly, mistaken. They were members of the congregation of St. Joseph’s Church, Wittenberg, Ontario, and they had cause to remember Dr. Foster with some alarm. But the people of St. Joseph’s didn’t know everything, or even – with the exception of the rector – very much at all. As they said, “There’s no smoke without a fire” – but it wasn’t hell-fire that Foster was involved with. It’s as well to set down the facts. Foster was not on his way to Gloucester but merely to Toronto. It wasn’t raining at the time but had been one of those days in mid-summer when warm sun alternated with violent thunderstorms, and it had rained shortly before he set out. The “puddle” was one of those insignificant rivers one finds along the 401, and in which Foster’s car had landed after going out of control. He died instantly, and thus not only did he never go to Gloucester – Toronto – again, he never even reached it. His name, however, certainly was Foster: John Foster, B.A., M.A., Ph.D. The rhyme? Well, some malicious person scrawled it on the wall of St. Joseph’s the day before the funeral, and it was later suggested as a fitting epigraph for Foster’s tombstone. For Foster will never be forgotten by that long-suffering congregation, whose only option over the years had been to show Christian endurance towards the man. The obituary notice in *The Wittenberg Torch* stirred up further animosities. “Written by one of his colleagues,” Major Austin told everyone he met: “praising the originality of his thought or something. Which only goes to show the preposterous ideas that are taught to young people nowadays. I read it until I got to the part about Foster being an expert on Nitchy.” (He meant Nietzsche.) “Nitchy, I ask you!” The remark fell flat, as nobody else, except for the rector, was sure who Nietzsche, or Nitchy, was. 
Twenty-four years earlier, when Foster had first come to the nearby university, he’d already had a reputation as a scholar. He also published novels, under a different name, and was a competent amateur artist too. (“Something must have gone wrong since,” the rector’s wife would later say. “Even I could paint better than that.”) In those early years, as a few old-timers remembered, Foster had been one of the pillars of the community and of St. Joseph’s in particular. True, he had his oddities. Then in his fifties, he was divorced and so not quite respectable. He read books which couldn’t be approved of. He showed a singular indifference to the niceties of parish behaviour by attending church in baggy trousers and a jacket with patches on the elbows, and those who sat near him maintained he ostentatiously left out certain sentences from the Creed. But he attended regularly, and no one paid him much attention. And then it started: in the summer of 1960 to be exact, twenty-two years before Foster’s death. He was looked after by a Mrs. Wignall, who came to clean for him twice a week. No harm there, for she was a good soul and likewise a respectable member of St. Joseph’s. But suddenly she fell sick, and Foster used that as an excuse to replace her. By a blond creature, a foreigner, in her twenties or younger, and with a figure … well, the male members of St. Joseph’s would, in their jocular, broad-minded moods, describe it with whistles. What’s more, she lived in. Now, no one could prove anything. But they talked and shook their heads. Some even sniggered. Only, they couldn’t interrogate the girl directly, or have the pleasure of snubbing her, because she didn’t come to church at all – and Foster, when tackled discreetly on the subject, laughed the whole thing off. But already there were murmurs about such things being a threat to the moral fibre of the community. Actually, the rector met the girl and reported that she was charming, intelligent, and seemed happy working for Foster – and refused to speculate further on their relationship. Others were dissatisfied, saying the rector wasn’t sufficiently on guard against sin. (That was the old one, of course – there have been two more since – but all of them for some reason stood up for Foster, even when it became obvious to everyone else that he was an evil man.) Anyway, later that year the girl was obviously pregnant. Foster seemed cheerfully unrepentant and just answered “yes” when someone asked him about it. No shame at all, and now people started avoiding him even in church, which he didn’t seem to mind. The girl finally went away somewhere, and never returned. Foster carried on as usual, or rather, worse than usual. It was one thing to live in sin with an outsider whom nobody really cared about, but quite another to seduce the organist’s wife. Oh, the affair didn’t last long and the stupid woman soon went back in tears to her husband, but the effect on the congregation was shattering. The more so because even after her husband took her back in loving forgiveness she refused to show repentance or to say a bad word about Foster. She seemed almost willing to run to him again and, for all the wrong he’d done her, to offer him the other cheek (or whatever part of the anatomy was involved). It was now that Foster began to get objectionable. He’d always sworn a little. Now he swore a lot. 
He pretended there was no such thing as “good” or “bad” language (except when it was ungrammatical): just that certain language was appropriate for some contexts and not for others. When someone objected that swearing was morally reprehensible because it took the name of the Lord God in vain, he countered by asking why it was all right to say “my God!” or “heavens!” and not “Christ!” or “Jesus!”; or why, in Spanish, even an archbishop could use Christ, the Virgin Mary and all the saints thrown in as a simple expression of surprise, which no one took amiss. When someone else claimed that swearing reduced everything to the unpleasant aspects of life, he asked what was unpleasant about shitting and fucking, adding that he enjoyed both. “It’s not the meaning people are afraid of,” he’d explain as though the worthy members of St. Joseph’s were mere students, “but the sound of it. And that, ladies and gentlemen, is nothing but magic: belief in the power of the word.” Foster didn’t swear indiscriminately, rejecting this as debasing the vitality of the language, which had to be used with precision. He did indeed swear with precision. He was vulgar with precision. Called things by their names with precision. Once in church, after a piece of unusual metaphysical nonsense in the rector’s sermon, farted with precision. He told Constance Nightingale, a neurotic spinster in her forties, to take her pants down and have an affair. Then, worst of all, he seduced the seventeen-year-old daughter of one of the tediously married sidesmen. He became a problem. The rector couldn’t turn him away from St. Joseph’s and in any case was convinced the church was for sinners rather than the righteous (a sincere if naive man, the rector). And over the next two years Foster had even greater success. “St. Joseph’s is becoming a congregation of cuckolds,” Major Austin commented with his usual bluntness – causing the rector, the only one who knew that Joseph was actually the patron saint of cuckolds, to suppress an inappropriate chuckle. Before Foster’s break with the church the majority of the congregation had come to hate him. They could have forgiven him nice, respectable sins. They could perhaps have forgiven a certain sexual licence, provided it were discreet, as a kind of childish last fling before he entered his golden years (or old age, as he indelicately called it). What they couldn’t forgive was his threatening all their cherished ideas. “Of course I’m a threat,” he would roar. “Why is it that Christians have to be so goddamn dull? Do you think Christ wanted a religion of ass-sitters? I’m more Christian than all of you. Read Kierkegaard!” “Kierke-who?” asked Major Austin, who was deaf. “Is he swearing again?” More offensive than anything was the fact that Foster was so obviously enjoying himself. Whereas the others were supposed to be living in God’s grace, it was Foster who was happy; Foster who, they said, couldn’t really believe in religion at all. Why did he come to church? The answer was supplied by the eighteen-year-old son of the Harrisons. A pleasant couple, but their son had “got” religion and had recently written a letter to the Torch saying he was seriously disturbed about the moral standards of the community because a strip-tease was being performed in one of the local hotels. Anyway, he’d just entered theological school in the university and thought he knew everything. “Obviously,” he said, “Foster must have sold his soul to the devil. 
And he’s making witches of all his, hm, paramours.” Now, although the idea was ridiculous, there was a spark – a very tiny one – of truth in it. And it caught on because of a comment Foster made the very next Sunday at coffee hour, after the boy who’d got religion started to talk about stories of pacts with the devil. “That’s all nonsense,” Foster said to the boy with an odd look in his eye. “Have you ever stopped to consider the idiocy of making a pact with the devil? Why the devil? When God is omnipotent, why not ask Him to grant your requests? More effective, and incomparably better as life insurance.” “Man’s desires are often evil,” said the boy who’d got religion. “God can only work good.” “That’s simple-minded theology, young man.” Foster was now the serious professor. “God’s omnipotent, the sole source of power. Man cannot limit Him to his own ideas of good and evil, which are hopelessly muddled. God gave man the world to enjoy, and the desire to do so. Won’t God then grant man his desires?” “Not if they’re evil.” “Don’t you think that God might grant man’s true desires, for the love of man – rather than this far-fetched devil creature wanting souls to torture? In my experience, man’s desires are evil only when they’re petty and short-sighted. But his true desires are no more than the natural demands upon life that God’s given him.” “So what are they?” “To experience and know God’s world to the full. To share love with all, both spiritually and sexually …” The boy pounced. “Spiritually yes, but sexually no. We’re told to renounce the flesh!” “Are you sure that’s what Christ tells us?” Foster asked. “He says only that the flesh profiteth nothing – in the sense that human power is helpless as compared with spiritual power. Oh, I know that church Christianity has always insisted on the renunciation of human desires as the ultimate virtue. I disagree with the church.” “But that’s terrible,” twittered Constance Nightingale, who’d just joined them. “How can you possibly disagree with the church?” Foster didn’t deign to reply. “Then,” the boy went on, “there’s no such thing as evil?” Foster reflected. “What’s evil is man’s attachment to pseudo-desires, which seem important only because, unless you have God’s help, they’re easier to achieve. Making money, for its own sake, is a pseudo-desire: the real desire is still for other things, excitement, adventure, security – which in itself is only freedom from fear. If you can have them, your true desires, then money itself is unnecessary. Stealing is similarly evil, because it arises from this pseudo-desire for money. Love of material possessions is evil – didn’t Christ himself say that? – only of course it’s so much easier to flaunt your luxurious houses than to live, which involves risk. Fear can be evil. Love of power over others, violence and murder: all are evil, but again they’re pseudo-desires, to compensate for a lack of love and our wanting just to be recognized by others – which we all desire but find difficult to achieve.” “But stealing other people’s wives?” the boy insisted. Foster dismissed the objection. “A wife isn’t a possession to be stolen. If you consider sex as wrong, but made permissible by limiting it to couples with property rights over each other, then obviously sex with anyone other than your spouse is wrong too: a position the church has adopted since the days of St. 
Paul” – (Foster walked out of church during certain epistles) – “while ignoring the far more insidious sin of coveting one’s neighbour’s possessions, which our whole advertising industry encourages. If, on the other hand, you consider sex as a natural, God-given expression of communion and enjoyment, to be shared as Christian love is to be shared – and who would dream of making that exclusive? – then the only ‘wrong’ is the hurt caused to others: but this is based on a human vice, jealousy, which in turn is based on fear.” A few of the more thoughtful of those present were uncomfortable, but the majority were horrified, particularly the boy who’d got religion. Foster concluded by saying that God united flesh and spirit, was as much at home with paganism as with church Christianity, in both of which there was evil as well as good, and that these ideas could be found in any number of writers, too. “Blasphemy!” Major Austin snorted. “Religious anarchy, sexual anarchy, moral anarchy! Why, if this fellow continues making such a noise about things, he’ll scare everyone away and where will our property values be then?” “Blasphemy,” echoed Constance Nightingale. “Rather wicked, don’t you think?” “Blasphemy,” said the boy who’d got religion. “Can he be excommunicated or something?” “Blasphemy,” said the rector doubtfully. “I suppose so.” “He’s obviously in league with the devil,” the boy continued. “Did you notice how he talked about making pacts? I tell you, he’s sold his soul.” At this period Foster was doing a lot of writing, and painting too. Not many knew about it, because the people of St. Joseph’s didn’t know everything. One might have wondered how he found time to sleep, for his sexual romps continued as usual. And he still taught in the university, where he was adored by his students (even though they found his standards too exacting), loved by some colleagues, who regarded him as a genius, and hated by others, who considered him subversive. His life, it seemed, would burn out from its very intensity, but in fact the opposite was the case: he was in perfect health, tremendously vital, and creating picture after picture, writing page after page. “Fucking woman after woman,” Major Austin commented. “Horace!” his wife remonstrated. “You’re not in the army now!” His ideas, though, did have some influence, so that gradually a group of supporters – mainly younger men and women – grew up around him. The things they perpetrated were beyond belief. Sexual, drunken orgies and obscenities of all kinds. The boy who’d got religion attended a number of Foster’s parties to try to exert his influence to stop them, and reported on all the disgusting details. He asked the congregation to pray for their lost brothers and sisters, and for him too, for all the humiliations he had to endure at Foster’s house. The congregation’s prayers had a positive outcome: the boy was miraculously cured of acne. And then, insult of insults, this man who’d so impractically preached on the evils of money suddenly received a great deal of it, from the publication of the first book written under his own name. The novel was outrageous and had an immediate success with the non-discriminating public, which lapped up any kind of perversion. Well, the critics hailed the novel too, and the following year it was put on the Canadian literature course in a number of universities, but the members of St. Joseph’s didn’t know about that. Anyway, with the publication came money. 
Which, to everybody’s consternation, Foster spent, frivolously, on his riffraff friends. Nothing, by reliable reports, went into life insurance or pension plans, or into any kind of solid investment. Nothing was given to the Progressive Conservative Party. Foster didn’t even consider improving his house or putting in a swimming pool, which might have raised the tone of the neighbourhood. “No, they’d only have orgies in the pool then,” Constance Nightingale giggled, to everybody’s surprise.

But then, thank God, Foster went away altogether. The rector had at last found the courage to ask him to stop coming to church. Foster understood immediately. “I won’t give you any more trouble, Hugh,” he promised. The rector grinned. “I’ve never had such an exciting time. Between you and I” – Foster interrupted to correct his grammar – “I get pretty fed up with the triviality of some of them, as you said once.” “Pissed off were the words I used.” “Well, er, pissed off, then.” “I’m going away for a while anyway. I’ll be back, though – you won’t get rid of me entirely.” The business over, they passed on to other topics, as two friends. But since both were, in different ways, religious, it’s not surprising that religion was a central topic in their conversation. And somehow they started to discuss the nature of heaven. “Do you want to know what my dream of heaven is?” Foster asked. The rector nodded. “I dream first of a cottage, in a clearing in the woods, by a broad river with a sandy beach. With sunlight, not too hot, and no mosquitoes or thorns or things like that.” He gave a smile: “Because I like woods and rivers, but not the mosquitoes. Or perhaps there’d be mosquitoes; only, they wouldn’t really bite, or the thorns wouldn’t really prick: rather, they’d produce brief scratches of almost unendurable pleasure, just so you’d know that everything was real, more real than this world which surrounds us. And the rest of heaven would be an infinity of beautiful places to explore and discover. Forests, mountains, snow, sun, beautiful cities, a land where all could have that simple joy they most desire. Man would have access to God’s omnipotence and omniscience: to learn and, in the fullness of eternity, to discover the secrets of the universe. He’d be able to travel at will within it – oh, to see and comprehend it all! – but always to return to his spot in heaven to gaze on its beauty and know, know of the rest of the infinite beauty round about. “There’d be libraries, institutions of truly higher learning, art galleries, concert halls: for who can conceive of heaven without art and music? Man would perform, and create, as he does on earth. All that’s best of man would be there. And all that’s worst, for man must not be ignorant, and he needs to know the bad too. So there’d be museums of horror, vice, pettiness. Of course there’d be no war, no conflict except for earnest dispute, no sickness, no politicians. Not even any social scientists, thank God. Doctors, I suppose, yes – but to study the inner physical workings of man. “And then, most important of all, we’d be resurrected in our young, healthy bodies instead of our old, ailing ones. Death would be an awakening out of sleep into the reality of life. Or perhaps we wouldn’t even notice death: life would be forgotten like a dream it’s not worth making the effort to remember. In our wonderfully real bodies our appetites and desires would still exist, for food, sex and all the pleasures of life.
Only now it would be with all the vigour and passion of first youth. Love-making would be there in its most voluptuous, most erotic and most spiritual form, for now there’d be no jealousy, no fear of being displaced in another’s affections. One would meet again, know and explore – completely, carnally – all one’s old loves as well as those one never had time or opportunity to know on earth. Whenever one wished, one would know where one’s loves were, whom they’d be with; one would rejoice that all are joined in a common love of God, who uniteth all things, in whom is the sacred and profane, the humorous and the serious, the joy and the suffering, the beginning and the end.” He paused. “But there would be solitude as well, for man has need of solitude to create.” “The God you speak of isn’t the Christian God.” “Not that of the Christian church at any rate,” Foster said sadly.

And so Foster went away, and life became more peaceful for the members of St. Joseph’s. He would return again after many years, but in the interval, with things in the parish back to a normal observance of religious proprieties, he became a mere conversation piece, to be remembered even with nostalgia. How could a man behave in such an extraordinary way or hold such disturbing ideas? Was he really in league with the devil? There was, of course, an explanation. The people of St. Joseph’s didn’t know everything and would have been surprised to learn that before it had all started Foster had quite seriously considered making such a pact: to that extent the later rumours had some validity. The problem was that he didn’t know how to go about it. He was a highly intelligent man and didn’t for a moment believe the devil would appear before him, horns, tail and all, or in any of the traditional forms. But he’d studied the devil as a literary figure and recognized him as a valid symbol of man’s aspirations for knowledge and experience: in revolt against a God who, in the thoughts of some, would prefer man to remain innocent and ignorant of evil. It was knowledge and experience that Foster wanted. He was already a scholar of no small reputation; he’d published his novels and painted pictures which hung in a few of his relatives’ living-rooms. But he knew that his achievements were minor. His scholarship was sound but inessential; his novels had been published under another name because they were trivial; and his pictures ... well, what was wrong with them was precisely that they could be put up on his relatives’ walls, alongside pictures of forest streams, lakes, mountains or sentimental women and children which had been bought at Zeller’s. “He’s artistic,” one of his aunts would tell her friends, unaware that the word was used of people who produced flower arrangements or suchlike with no comprehension of what art was all about. This wasn’t what he wanted, and he was miserable. Unable to endure the high-minded snobbery of his colleagues or the inanities which were the daily life of the members of St. Joseph’s – and being uninterested in the fact that his neighbour’s three-year-old was now toilet-trained (which everyone else seemed to regard as the most important piece of news since the day it became known that old Mr. Krapowski was no longer toilet-trained) – he was isolated from others, lonely. Which wouldn’t have been so bad had he not craved some intimate contact beyond the superficial level, while at the same time being tormented by simple sexual desire. The two were linked.
A great deal of “experience” meant for him sexual experience, for he was well aware that this was one way of coming close to another human being without the meaningless exchange of information which takes place in other social situations. For sexual experience understood in such a way, masturbation was a poor substitute, and in any case seemed somewhat ridiculous in a fifty-year-old man. And so he thought of a pact with the devil. Not with smoke and magic circles and incantations, not at first. Foster, although he knew a lot about witchcraft in a literary sense, didn’t take it seriously. No, he realized he needed to change his life, to rid himself of his old inhibitions and attitudes, and he saw a pact with the devil as a symbolical representation of that change. But how was he to go about it? Even if the devil were only a symbol, he had to make it into one that was real for him. So finally he decided to devote himself to the mumbo-jumbo of magic – not because he thought spirits would arise before him but in order to convince himself of what he was doing. For this he had to study many obscure works, whose authors in some cases might be simply charlatans. He joined a group of devil-worshippers, whose practices he found grotesque and ridiculous. But he put up with it, feeling in his soul he was an outsider, although the others welcomed him as a convert and took it for granted he shared their beliefs in the same way as the members of St. Joseph’s took it for granted he shared theirs. The night came for his first practical experiment in summoning the devil. He’d removed the carpet and most of the furniture from his living-room, leaving only a couch and chair, and now he brought in the other things he needed: candles, candlesticks, chalk. It didn’t take him long, and he then lay back on the couch and went to sleep. He didn’t know what time it was when he awoke. He lit the candles, placed the candlesticks in their pre-assigned positions on the floor, and started to draw on it with chalk, beginning various incantations as he did so. “Bloody fool,” he thought, “what good will all this do?” He worked eagerly, though, enjoying it. The procedure was complicated, involving some foul-smelling liquid he had to prepare. He couldn’t remember everything, but was sceptical enough not to think it mattered. Finally, he came to the words that were meant to summon the evil one. “Venez, venez, seigneur, venez!” he pronounced, wondering why the devil should respond more readily to French than to English, and whether it made any difference if the French were Parisian, Old Norman or Québécois. Nothing happened. Of course. But to make sure, he repeated the French in different dialects, then checked his chalk figures and found he’d made a mistake. So he got down on his knees to correct it, murmuring further incantations, but sticking in a few swear words because he was annoyed at himself for being so ridiculous. “What the devil are you doing there on your knees, you stupid runt?” came a voice from behind him. Startled, he turned round, put out his hand, and stared at the stranger. “Say, then, who art thou? . . .” “Oh cut out all that crap,” the other interrupted. “You don’t really believe in it, do you?” Foster hesitated. “No, of course I don’t.” “Good. Then turn on the light and come and sit down on the couch like a human being.” Foster did so, looking at the guest, who was a shortish old man of about eighty, with long hair that merged into a beard, and dressed in a white robe. A bit like Karl Marx in a night-gown. 
“How did you get in?” he asked. “Through the door, you idiot, how else?” The man was looking around with an expression of distaste. “Pretty sparse place you have here. Why don’t you get some decent furniture? Make it comfortable for your guests. And what’s that revolting smell? Oh, that liquid over there. Pour it down the drain, for God’s sake.” Foster did as he was told, then returned and sat down on the chair opposite. As he looked at him, the old man’s rather irritable appearance softened: he was still stern but kindly too, trustworthy. Strength and knowledge was there, sadness and humour. No longer like Karl Marx now that the irritation was gone. Younger perhaps. Or older. Not really how he’d conceived of the devil at all. “But then I’m not the devil,” the man said. “You should be ashamed of yourself believing in that nonsense.” “Only as a symbol,” Foster justified himself. “Oh, as a symbol I’ll grant you he serves a purpose. But he’s one-sided. Much as your church God is one-sided too.” “Who are you then?” “Come, Foster, you know who I am.” Foster was embarrassed. “God?” he ventured. “The trouble with that word,” the old man said, “is that people misunderstand it. They think of me as the God they’ve created in their image. The half-potent God, able to do only what their limited minds think of as good rather than evil. Let’s call me something else, shall we? To prevent confusion. What would you suggest?” Foster thought. “Yahweh,” he said. “That will do splendidly. Sufficiently pre-Christian. Close enough to paganism without entirely suggesting my sole purpose is to strike people with thunderbolts. I like it.” “Is that your usual appearance?” Foster asked with curiosity. “I’ve no appearance, you dunderhead,” Yahweh said, getting irritable again. “I merely chose the form I thought you’d most appreciate.” Suddenly he let out a roar of laughter. “And you’ve got to admit it’s better than those Santa Clauses or sickly sweet pictures of Christ they’re fond of putting in children’s books and on church walls.” He became businesslike. “Now, tell me why in the name of thunder were you trying to call up the devil?” “To make a pact with him.” “To renounce God, to get the devil to serve you for twenty-four years, and in exchange to give him your soul for eternity, I suppose? I must tell him that the next time I see him, he’ll die laughing. Between you and me he’s getting a bit sick of all these pacts.” Foster was puzzled. “But who is the devil then?” “Didn’t I tell you? I am.” “You said you weren’t.” “I’m not.” “How can you be and not be?” Yahweh laughed. “I’m omnipotent, that’s all. I am. And that includes I’m not. Don’t worry about it. You’re making human categorizations.” “And then you’re God too?” “That’s right.” “And Christ?” “Me too.” “And Buddha, and Mohammed, and . . .” “Oh, do stop going on and on! It’s all the same anyway, what difference does it make? I am, and am not, all of them.” Foster was sarcastic. “Is there anyone else that you’re not? Or that you are?” “Yes,” said the other. “Or rather no. I’m you too, and not either, or hadn’t you noticed? Or at least you when you’re not pretending to be someone else.” “This isn’t getting us anywhere,” Foster said gloomily. “Sure it is,” Yahweh exploded, “if you try understanding rather than just thinking! You disappoint me, I expected more from you. But tell me, why did you try to summon up the devil rather than me? When I’m the source of power, wasn’t that rather stupid? 
What can he do that I can’t?” “Well, I guess I thought what I wanted was evil. No, that’s to say, I didn’t think it was evil, but that God would. And that therefore God couldn’t grant me my desires.” “That’s simple-minded theology, Foster. You’re confusing me with your church God again. I’m omnipotent, not half-potent, I tell you,” he suddenly roared. “I can give you anything you want. And what’s more, I’m the only one who can.” “But will you?” “Of course,” Yahweh said happily. “Love to. Provided you tell me your real desires.” “And the conditions?” “Not important. All this stuff about selling souls: your soul will be mine anyway.” He intoned flatly: “Was in the beginning, is now, and ever shall be, world without end, amen!” Businesslike: “Now, let’s make a list of what you want.” Foster, at least, had his list prepared. “Fame,” he said, “so that people will love me. Riches, so that I may travel, live riotously, and have the means to acquire knowledge and be independent. Power, to get people to do what I want. Creativity, to do something of what you can do – no, I don’t want omnipotence, it won’t be fun if it’s not difficult. Joy and suffering, because one can’t create without them. And immortality.” “Hold on a minute, can’t you? I may be omnipotent but I can’t write that fast. Let’s sort these out a bit. How about we just put down love instead of fame? That’s what you really want, isn’t it? – and although you forgot to mention it, you want to love others too. We’ll throw in a bit of fame along the way, but deep down you know that it’s not the important thing. Can we cross out riches? You’ll get money now and again, but what you want is travel, independence, experience, knowledge, and riotous living – by that you mean sex, I suppose.” “Yes. – You see, I think that sex is the greatest form of human communication …” “Oh be quiet! Of course it is. Don’t start explaining the world to me! Okay. No problem, at least for your lifetime: the generation after you will have to be far more careful than you will, because of a devilish virus getting loose somewhere. Now, power: you don’t really want to boss people around and feel important, like those tedious prime ministers of yours who’ll also be inflicted on society in a few years’ time? No, you want love again, to be an influence for the better in the world and, for all you give the impression of thinking only of yourself, the satisfaction of doing something positive for your fellow men. Right? Creativity, joy, suffering – that’s all excellent. Wise man not to ask for happiness, the sop of those who want to live like robots. Immortality you have already. Now what can we add? Knowledge we have, but how about a bit of wisdom? And courage in being yourself. You’d better keep a few vices too: the people of St. Joseph’s will be happier if they have something to hate you for. So keep your arrogance, your lack of courtesy. We’ll add on a solid dose of vulgarity too, and outspokenness. Let’s stir things up a bit. Now, is that the lot?” Foster was delighted. “More than I expected.” “Fine. One thing: you’ll keep your loneliness, and an inner emptiness which can only be filled at the time of your final union with me. On earth one can’t have wisdom without it.” “And when will I die? Do you want me to sign a pact?” “You and your confounded pacts. Of course not. You have my word. And I am the word. I suppose you want me to give you twenty-four years of life too? It doesn’t make much difference when life’s eternal anyway. 
Just let me know when you feel like a change.” “I never imagined you could give me all that,” Foster said. “I mean, the church is so set against a lot of it.” “The church has a sin of its own,” Yahweh said, not without sadness. “It’s called respectability, which is a form of fear. And you thought I should be a respectable God. Me, Yahweh! Ha! The sole source of power. The creator of all things. The beginning and the end. Alpha and Omega. Me, who designed man to be Lord of the opposites, as one of your German writers so aptly put it. You recognize the quotation, I hope?” Foster nodded. “And no doubt,” Yahweh said, getting up to go, “you’d like a little bit of skirt to spend tomorrow night with? Intelligent, beautiful and sexy, right?” Foster by now was more courageous. “Yes, and she should be . . .” “Spare me the gruesome details, please. I know your tastes. I gave you them, remember?” Foster looked at him and laughed. “You old bugger, you!” he said slyly. Yahweh burst out laughing again. “That’s the spirit! Never be afraid of me. Tomorrow Mrs. Wignall will be sick. Get rid of her. You’ll find a better applicant for the job.” “I don’t know how to thank you.” “Oh, say the Magnificat a hundred times or something. Live, damn you!” They shook hands, and Foster found himself lying back on the couch again with the lights out. When he awoke it was morning. After breakfast the phone rang. He told Mrs. Wignall he was sorry she was sick but that he wouldn’t need her again. At the office of *The Wittenberg Torch*, where he went to place an ad for domestic help, an attractive foreign girl next to him asked if she could have the job. He agreed, and they arranged for her to come and settle the details that evening. She was even more attractive without clothes on, and the details were settled in bed. And so began the time of riotous living the people of St. Joseph’s found so outrageous. But there was a deeper side to it, of which they were unaware. There was pleasure, yes, but combined with it went an overwhelming sense of gratitude towards Yahweh. Foster was in awe at the enormity of the gift he’d received, the living manifestation of which was this marvellous girl, whose sexual inventiveness made his own fantasies seem as limited as those of a boy before puberty. It was the awakening of first love all over again. Unknown to anyone else, it was Margrit who initiated Foster into the orgy, on secret weekends when she’d take him to uninhibited places of hedonism. The affair ended after she got pregnant. It was she who insisted on leaving. “Are you one to be bound by the ties of fatherhood and family life?” she asked him. He admitted she was right, remembering what Yahweh had said about loneliness – although he would still see her, and his son, from time to time. So he took his new mistresses at St. Joseph’s, and then gradually found a group of friends growing around him, so that the orgies now took place at his own house. Qualitatively, they were different from other groups of swingers popular in those years. Theirs was a very close society, which shared solid intellectual and artistic interests as well. The people of St. Joseph’s saw only the immorality, but knew nothing of the discussions, the music and literary evenings, the amateur theatricals (some of which included sexual acts, performed with taste and love). But there were the drunken Dionysian revels too: life in this group was far from idyllic, for the idyllic is one-sided.
Rather, it was often bestial, the participants lusting vulgarly after those who, shortly before, had been the recipients of tenderness, love and respect. Crude fellatio and cunnilingus were then the norm, for is not the wet and slobbering sucking of one’s partner’s genitals, held with legs apart for all to see, the very epitome of earthy, animal sex, compared with which tender, blushing intercourse is ridiculously genteel and polite? For here both barbarism and civilization reigned together in a harmony of opposites. Foster individually adored his partners and was adored by them. Some, of course, were jealous or possessive and suffered from his refusal to bind himself exclusively to any one, but in this suffering they experienced an essential part of humanity. He too, if they’d known, had to struggle with the same self-doubt, for he too was human, and as the group grew he was aware of the competition of other, younger, men. He suffered from his own human imperfection, and gave thanks for it. The orgy of the senses carried over into his painting and writing. He would regurgitate onto canvas in the morning visions which were still coursing through him from the night before, his tubes ejaculating paint, his hands palpitating, kneading the forms before him; and then, in repletion, he would paint a watercolour of utter tranquility, working patiently on the finest detail, inspired, one would say, by the peace of God which passeth understanding. It was the same with his writing. In his passionate outbursts he had no time for anything but a tape-recorder; then he would patiently transcribe in longhand, and work for days correcting and shaping. What he produced was both violent and eternally still, blasphemous and deeply religious, sensuous and spiritual. The members of St. Joseph’s found it outrageous, the critics were divided over it, but it sold: on the one hand, to those who saw, or read, and immediately understood and loved the genius behind it; on the other, to those who craved cheap sensationalism. And so Foster earned money, until the day came when he left Wittenberg. His life, since his dream about Yahweh, had been full of action. He’d had no time to consider whether he was happy or not, which didn’t matter, and very little for calm, lonely reflection, which did. He went to a tiny village in Austria, where he lived unostentatiously, with none of the uproar which had surrounded him at home. The members of St. Joseph’s would hardly find it credible that he went each morning, except Sundays, to the ornate baroque church and spent up to an hour in mute contemplation. “Are you repenting for your past sins?” a village girl asked him one morning. “No!” he said emphatically. “I’m taking time to savour my life. To rest my soul.” She laughed. “That’s too complicated for us here.” The girl became his mistress, and they lived together for over a year. They would walk in the mountains, breathe the air, look down at the villages and up into the heavens. They would make love in the meadows, expose themselves naked to the goats and the cows, who looked on indifferently, chewing and producing their milk. They would laugh, and cry too. About once a month Foster would leave the village for a day or so in the brothels in Munich. “Why do you go?” the girl asked him. “I’m not enough for you?” He had to think how to explain it to her. “The animal principle,” he said at last. “With you it’s become a beautiful dream, emotion, purity. The sensual has become spiritual, and very lovely it is too. 
But that alone is inhuman. Humans are just as full of lust and passion, of animalism. Of sordid, exciting desires. The spiritual must become sensual again. Sex, pure animal sex, has to have its due.” “Is life no more than sex, then?” “Much more. It includes all that can be appreciated when the urge for sex is stilled. Yet in another sense life is sex. Sex creates life, in every way. It’s the passion to live. Without it there’s colourless self-denial, only angels and harps. The cows producing their milk.” He paused. “But sex is death too, for all of life is a process of dying. Is not each orgasm a small death?” She couldn’t understand him, perhaps inevitably: no more than anyone really understood him. He was condemned to be alone. But in the meanwhile life was there before him, even if he often felt it wasn’t quite real. How much less real, though, was the sedentary family life of many of those around him. For the first time his feeling of sorrow for them outweighed his more usual contemptuous indifference. He expressed his sadness in another book, and then he travelled on to other cities, leaving the girl behind. She was a happy memory, part of the fabric of his life but only one of the cross-threads, essential for the pattern but not running from end to end. And equally, he was only a cross-thread in her life, which was woven in another direction, with the threads from that stretching away into other fabrics. Life in its entirety was a multidimensional construct of different tapestries, some bright and coherent, some irretrievably tangled, some consisting of nothing but a few twisted threads, some torn off and broken. In Italy a cross-thread was broken for him, painfully. He was in the south and had circumvented local prejudice sufficiently to attract a dark-haired innocent nineteen-year-old. Unfortunately, the son of a family friend considered himself betrothed to her and, following custom in such matters, burst into Foster’s hotel room with a machine gun and sprayed the bed with bullets, killing the girl. Foster, ludicrously, was getting rid of a used condom in the bathroom, or he’d have been killed too. In a moment he was back in the room, where the boy was weeping over the girl’s naked, bloodstained body. He offered no resistance when Foster took the gun. They stood and looked at each other, blind convention staring in hatred at its insolent challenger. In the boy’s look was all the fury of the man who knew he was right, had justice and honour on his side. Society itself, even the law, would support him and give only light punishment. Foster hesitated, shocked by his responsibility for this death, caused by his defiance of convention. Did it matter that the convention was evil? Should one simply submit? He looked at the girl’s body oozing red and ugly. Why hadn’t Yahweh forewarned him of this? This was bestial too, a thousand times more so than any of his orgies where the senses ran riot. He was horrified and yet fascinated. This too was life, the very horror was part of it. He couldn’t bring himself to pull the trigger as the boy left. He sat in silent respect for the girl until the police came, and there followed the interminable inquiries and formalities. The neat documentation by the living of the incomprehensible fact of death: unable to understand it, they got rid of it by giving it a certificate, as though granting a passport for travel to a foreign country. 
We will not follow further Foster’s travels, for his life was such that it would be possible to give only a superficial view of it. It could be made into an adventure story, with stirring deeds and times when Foster feared for his life, but the adventures of his soul would be lost. It could be made into a morality tale, for Foster performed good deeds to help others, but he would prefer them to go unrecorded. It could be made into a pornographic story, for sometimes the revellings continued, but in Foster’s world pornography had no meaning. A love story, a story of violence: all this it could be, for Foster, thriving on life, thrived on opposites. At long last he returned to Wittenberg. “He’s coming back, have you heard?” the whispers went round St. Joseph’s. Now, in eighteen years the parish had changed, for the children had grown up. There was a certain antipathy between the old-timers (represented by Major Austin, now churchwarden, and old Miss Nightingale, honorary president of the altar guild) and the under-forties, who felt the world was passing St. Joseph’s by. Their spokesman was none other than the man who’d once got religion. In eighteen years he’d married and raised six children, and turned into an extraordinarily liberal personality. With the arrival of Foster, the old-timers considered it their duty to warn everyone of the danger, while the under-forties tended to laugh and think the older ones had probably misjudged Foster. There was tension before anyone had even seen the man. The rector, always well-meaning, tried to reconcile the two sides, pointing out that Foster had become sufficiently well-known as a painter and writer to bring Wittenberg some fame. “We’ll have a great man in the congregation, even if he’s as difficult as some people say. But he could have changed. And think of the example St. Joseph’s could give the world. Let’s welcome him, show the power of the church working with such a man.” The rector was getting carried away by now: “How magnificent if we at St. Joseph’s could give back to the church a true, repentant sinner!” The man who’d once got religion shook his head. But others allowed themselves to be convinced, willing at first to show Christian forgiveness and accept their prodigal son with open arms. If only Foster had been a repentant, prodigal son! Instead, he ignored them. Turned down their generous invitation to become a sidesman. Didn’t come to church, even though the rector went to see him and came away hours later after a very friendly chat. It was all the more galling because various celebrities started to visit Foster to pay their respects. Writers, artists, scholars. Well, the people of St. Joseph’s didn’t know everything, but they certainly knew the glamorous movie star who visited him. But did Foster let his friends, and this actress in particular, meet members of the congregation, or bring them to public functions where they could give a few autographs to the children? Of course not. St. Joseph’s, justly, felt slighted. “I suppose he’s having an affair with her,” old Miss Nightingale said with prim satisfaction. It became known that this indeed was the case. And when a few of Foster’s former devotees started to return and the odd orgy took place once more, general indignation broke out again. “He’s still in the service of the devil,” Connie whispered excitedly. “Perhaps he’s the devil himself.” The others had forgotten this rumour, and the man who’d once got religion, remembering how he’d started it, looked embarrassed.
But Connie Nightingale had become stubborn in her old age and went around repeating the same thing to everyone, with picturesque details – remembered from eighteen years before – of everything that supposedly went on now. She seemed particularly incensed that all the celebrities came to pay homage. “Can’t understand it,” wheezed her ally, Major Austin. “In my day famous people had more sense.” “Now don’t get upset about it, Horace,” his wife commanded. “It’s bad for your asthma.” The orgies, in fact, were nothing in comparison with the old days. Foster had mellowed. Everything was more discreet, less antagonistically obtrusive. Foster was now in his seventies, and looked it: worn out, Connie said, by a life of excess (although she herself was younger and looked worse). In this she was, quite frankly, mistaken, for Foster was in excellent health. But he’d had a good life, and the excesses no longer seemed as necessary as before. For the most part, except for the occasional encounter in bed with some attractive woman, he preferred just to write or paint quietly, with less élan. His works no longer had the youthful brilliance but instead a calm maturity, so that they were prized by literary and artistic connoisseurs but no longer appeared on the best-seller lists. But the old-timers of St. Joseph’s didn’t know everything, and they tried, particularly Connie Nightingale, to make out that it was worse than before. “A servant of the devil, right here among us,” she said on one occasion, with an expression of diabolical cunning. “We can’t put up with that. We must do something.” The younger ones looked at her strangely, thinking that since Foster’s return she’d gone a bit dotty. “What have you in mind?” someone asked. Connie only smiled. The next day she went to call on Foster. According to what she told everyone afterwards, he invited her in, beat her, undressed her, tied her to a chair and raped her. “And then he just threw me out into the street,” she concluded. The last was probably true, but no one at St. Joseph’s believed that even a man like Foster would rape Connie Nightingale, and the under-forties thought the whole thing hilarious. She’d apparently expected them all to go to Foster’s house and tar and feather him, but when nothing happened she visited him again. This time, she reported, he did even worse things. More laughter, even from old-timers. Soon people began to get used to the sight of her toddling off to Foster’s house, although he’d learnt not to answer the door if he could help it. Now, although this was clearly her fault, some of the old-timers found in it a reason to blame Foster. “It’s witchcraft!” Major Austin exploded. “Seen something of it in Africa, you know. The woman’s infatuated with him.” As she got stranger and stranger, others began repeating Major Austin’s comment and suggesting that here was the traditional case of the devil turning a decent woman into a witch. Thus, Foster once again found himself cast in the role of the devil. Well, actually, the under-forties rather enjoyed having a devil in their midst. And to be frank, the old-timers enjoyed it, too, for here was someone they could legitimately hate, a scapegoat who could be blamed for everything that was wrong. When Connie Nightingale was taken off, screaming, to a hospital from which she never returned, there was gleeful talk of its being demonic possession, with Foster the instrument of her undoing. “The whole world’s going to the devil,” Major Austin complained. “If I just had him in the army! 
What’s happening nowadays?” “The world’s changing,” his wife said. “It’ll never be the same.” “Thank God too,” said the rector, alienating them both. What suddenly united St. Joseph’s and turned young and old against Foster was the publication of his last book, in which it was apparent that the characters were drawn from members of the congregation and from the faculty of the university. The book was an extraordinary apocalyptic kind of thing, in which they were all shown, neither as welcomed into heaven nor thrown into hell but as condemned to return to earth. Now, the more sensible commentators pointed out that the characters were created with sympathy: this was no longer the old Foster, who’d looked down on everyone, but a man of understanding who regarded with genuine pity those forced, for whatever reason, to live out their half-lives in the shadow of St. Joseph’s or the university. But the faculty members, considering themselves intellectuals, were incensed. The atheists thundered against Foster’s naive religiosity. And as for St. Joseph’s – well, the congregation couldn’t abide his pity. Even the more moderate members (who, in the prime of their middle-class upward mobility, had been treated more harshly in the book than the old-timers) sided with the conservatives. Except for the rector and the man who’d once got religion, they all began to hate Foster. “Impossible man!” they said. “We’ve got to get rid of him!” “Devilry! Witchcraft!” Major Austin shouted. “Remember Connie Nightingale?” “How do we get rid of him?” the others asked him. “I’ll tell you! We must … I think we should … oh hell, hang him on a post and bang nails into him!” “Do you really mean that?” the rector said severely. For once Major Austin looked sheepish. “No, of course not.” The next day came the news of Connie Nightingale’s death. Most would have done no more than shrug if it hadn’t been for Foster. He shrugged, too, when he was told the news. Said she’d been dead for most of her life anyway. Said terrible things, showed no respect: it was heartless, when he’d been responsible for it all and turned her into a witch. A vile man. An odious man. “No, no, no,” said the man who’d once got religion. “He was only being honest, don’t you see?” They didn’t. But the problem solved itself. Foster was sublimely indifferent to the arguments going on around him when he set out that Friday afternoon for Toronto in a blissful mood. He’d finished a painting and, unlike those times when he’d been restless at the thought that he mightn’t get new inspiration, he felt there was no need to paint or write anymore. He had to take the painting to Toronto, however; he’d promised it to a colleague. It was one of those days in midsummer when warm sun alternated with violent thunderstorms, and there’d been a shower just before he set out. He enjoyed driving, since it brought a feeling of peace and an opportunity to think. As he reached the 401 and turned onto it to head towards Toronto, he realized it was twenty-two years since the dream. “Another two years to put up with St. Joseph’s,” he thought, “to make twenty-four. But what was it He said? As long as you like, just let me know when you want a change. Do I want a change?” He didn’t notice as the car left the road, hit the bridge abutment and plunged down a hillside into a small river. But there was no feeling of surprise when he found himself walking along its bank. The car was further back, he supposed, but he didn’t turn round, because all that was a mere dream he’d left behind.
The river, broader now, stretched on enticingly round a bend; it was exciting, needing exploration. And now the sun was fully out, a sun which warmed him pleasantly and which he was tempted to take down in his hands to find out what it was made of. He was naked and his body younger, firmer, full of life and power. Nothing like his body of ... how long ago was it? He walked on, eagerly, to the bend in the river. Forests on either side. Trees vibrating with life, animals he could sense amongst them. How magnificent to be alive. What was it all about? He didn’t know, but he would find out. Round the bend in the river was a group of young girls, all naked, too, splashing and playing in the clear water, laughing as he approached. He waved to them as he walked by, feeling strength in his loins and a powerful desire for them all. Who first? he wondered idly. A gleam came into his eye as he realized that an enticingly rejuvenated Constance Nightingale was amongst them. My God! what a figure she had, and yet in his long sleep she’d seemed so dreary. Or so he supposed, for he couldn’t really remember. He glanced again at the women, passed on, and his desire for them subsided. There was time enough for all of them, and for all the other wonderful things he wanted to do. He recalled something about books he’d written and wondered if they were in the library here. Probably, but why bother with them? There was a universe to explore. But plenty of time. Without looking back, he strode up the hill – oh, how pleasurably the mosquitoes bit and the thorns scratched! – towards his cottage in the woods. “Hello, you old bugger!” he said as he opened the door, to the figure who awaited him.

A. COLIN WRIGHT has published stories in various Canadian and British literary magazines such as Acclaim, Dalhousie Review, Descant, Event, Journal of Canadian Fiction, NeWest Review, New Quarterly, Quarry, Storyteller Magazine, Waves and Stand Magazine. Originally from England, he is a graduate of Cambridge University in Modern Languages, which have remained a major interest. As professor (now emeritus) of Russian Studies at Queen’s University, he has published numerous articles on Russian and comparative literature, as well as a major book on the novelist and playwright Mikhail Bulgakov. He has just returned from his thirteenth visit to Russia. Of his writing he would say that he “writes pretty much anything except poetry” – including several novels, which are still seeking a publisher. He is now mainly involved, however, with the theatre, having written six plays so far. He was the 1993 winner in the special merit category of Theatre BC’s National Playwriting Competition with his stage adaptation of Iu. Tynianov’s novella *Lieutenant Kijé*, which was subsequently performed at Theatre 5 in Kingston and is now available online from International Readers’ Theatre (Blizzard Press). He was also a winner in the Ottawa Little Theatre One-Act Playwriting Competition with his *George’s Funeral*. Locally, he has recently directed *Shadowlands* and *The Importance of Being Earnest* for Kingston’s Domino Theatre, and has played the Troll King in *Peer Gynt*, Malvolio in *Twelfth Night* and Father Jack in *Dancing at Lughnasa*.
THE UNIVERSITY OF CALGARY

Investigating Computers in the Classroom: Focusing on Transformative Pedagogy Through Global Learning Networks

by Beverly Lynn Mathison

A THESIS SUBMITTED TO THE FACULTY OF GRADUATE STUDIES IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS

FACULTY OF EDUCATION
CALGARY, ALBERTA
DECEMBER, 1999

© Beverly Mathison, 1999

The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats. The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author’s permission.

ABSTRACT

This study focuses on computer technology in the elementary school. The purpose is to explore the potential of using computers as a tool to advance transformative pedagogy through Global Learning Networks while developing a better understanding of the benefits and drawbacks of computers in school in a general sense. Interviews were conducted in one Calgary school to determine the feasibility of using Global Learning Networks to explore global issues, deepen understanding of cultural diversity, and facilitate collaborative critical inquiry. Respondents expressed high interest in undertaking such a project; however, due to heavy teaching responsibilities, time was presented as a formidable obstacle. Discussions also centred on perceptions of computers in the "real world" of the classroom. Participants shared their viewpoints about what is working, what is not, what the perceived benefits and drawbacks are, and what could or should be changed. Since computer technology carries with it a number of related issues, it was necessary to devote a large portion of this thesis to library research. This entailed a search not only of the positive and negative aspects of computers, but also of the broader cultural context within which these devices are embedded. A variety of issues were explored, such as business interests in education, the political influences that impinge upon our education system, and the world view that forms the basis of our technological society.

ACKNOWLEDGMENTS

I would like to express my appreciation to Mathew Zachariah for the guidance and advice given to me in the preparation of this thesis. His tolerance and sense of humour helped to make this process much lighter. I would also like to thank all those who participated in the empirical study. Their insights, collegiality, and humorous anecdotes gave depth and breadth to this work. Last but certainly not least, I thank my family, who endured my absence of mind throughout this process. In particular I thank my long-suffering, devoted husband, Ed. His constant encouragement and moral support pulled me through the darkest moments. I also thank Melissa, my constant bright and shining light, along with Larry, Ashley, Ervin, Shannon, my mom, and Sneakers, who were always there for me.
I would also like to acknowledge the lasting influence of my father, George Yaremko, whose wisdom and spirit continue to inform my life. Never was a family better.

# TABLE OF CONTENTS

| Section | Page |
|------------------------------------------------------------------------|------|
| Abstract | iii |
| Acknowledgments | iv |
| Table of Contents | v |
| **INTRODUCTION** | 1 |
| The Investigative Approach | 2 |
| Library Research | 2 |
| Empirical Research | 3 |
| Basis for Interviews | 4 |
| Data Analysis and Interpretation | 5 |
| Structure of the Thesis | 6 |
| **CHAPTER ONE** | 8 |
| BACKGROUND: CULTURAL CONTEXT | |
| Adapting to Change: Social Milieu | 8 |
| Global Milieu | 8 |
| Education Milieu | 9 |
| A Brief History of Time | 10 |
| How Did We Get Here? | 11 |
| In the Beginning... | 11 |
| ...And in the Present | 13 |
| Cultural Diversity | 14 |
| Economic/Political Climate | 14 |
| Existential Realities | 15 |
| Prevailing Pedagogies | 16 |
| Traditional Pedagogy | 16 |
| Progressive Pedagogy | 18 |
| Transformative Pedagogy | 20 |
| Global Education: Seeking a Preliminary Definition | 21 |
| A Final Commentary: How "We" Fit With "Them" | 24 |
| Adapting to Change: Alberta's Response | 27 |
| Alberta Education: Information and Communication Technology | 28 |
| Background | 29 |
| Underlying Principles | 29 |
| Framework Overview | 30 |
| Calgary Board of Education: Quality Learning Document | 30 |
| **CHAPTER TWO** | 33 |
| COMPUTERS: BENEFITS, USES, MISUSES, AND MISGIVINGS | |
| Computer Use: Benefits and Uses | 33 |
| Motivation and Cooperation | 34 |
| Cognition | 36 |
| Computer Software | 42 |
| Skills Development | 42 |
| Interactive Technologies: Multimedia, Hypertext, and the Internet | 46 |
| The Internet in Education | 50 |
| Computer Misuse and Misgivings | 53 |
| Child Development | 53 |
| Principles of Brain Growth | 54 |
| The Importance of Direct Experience | 56 |
| Cognitive Questions | 60 |
| Cognitive Development | 60 |
| Software Challenges | 64 |
| The Case for Word Processors | 68 |
| Teacher Resistance | 69 |
| Social Implications | 70 |
| Computer Addiction | 71 |
| Social Distancing | 72 |
| Programmed Therapy | 75 |
| Cultural Implications | 77 |
| Preparation for the "Real World" | 79 |
| Health Concerns | 80 |
| Repetitive Strain Injury and Postural Complaints | 81 |
| Visual Problems | 82 |
| Radiation and Chemical Emissions | 84 |
| Seizures | 86 |
| Obesity | 86 |
| Financial Costs | 87 |
| The Information Age in a Global Context | 91 |
| In-School Concerns | 91 |
| Internet Concerns | 94 |
| Final Comment | 96 |
| **CHAPTER THREE** | 98 |
| EMPIRICAL STUDY: PERCEPTIONS OF COMPUTER TECHNOLOGY | |
| Background | 98 |
| Technology Learning Outcomes: Computers as Mandate | 101 |
| Specific Issues | 109 |
| Professional Development | 109 |
| Child Development and Values Education | 116 |
| Global Learning Networks - A Final Commentary | 118 |
| **CHAPTER FOUR** | 121 |
| GLOBAL LEARNING NETWORKS: THE SUCCESS STORIES | |
| Scenario One: Bosnian Refugee Camp 1993 | 122 |
| Scenario Two: Second Language Acquisition - Maine and Quebec | 123 |
| Scenario Three: Confronting Prejudice - New York and San Francisco | 125 |
| Scenario Four: San Diego, Denver, and Puerto Rico: Parental Involvement | 126 |
| Scenario Five: Explorations in Folklore: Integrating Proverbs | 127 |
| Scenario Six: The Holocaust: Confronting Prejudice and Intolerance | 129 |
| Scenario Seven: Safe Drinking Water: Nicaragua | 130 |
| Scenario Eight: "The Contemporary": Long Island, New York | 131 |
| Global Learning: The Alberta Connection | 133 |
| **CHAPTER FIVE** | 135 |
| DISCUSSION | |
| Computer Use: Benefits and Drawbacks | 136 |
| Preamble | 136 |
| Research Discussion | 137 |
| Attitude Towards Learning | 138 |
| Cognition | 140 |
| Skills Based Software | 141 |
| Interactive Software | 142 |
| Health Concerns | 143 |
| Summary Comments | 145 |
| Global Learning Networks: A Case Study | 146 |
| Issues and Implications | 150 |
| Pedagogical Considerations and Contradictions | 151 |
| Philosophical Matters and Manoeuvrings | 153 |
| Business and Political Interests in Education | 160 |
| Concluding Remarks | 167 |
| REFERENCES | |
| APPENDICES | |
| Appendix I Interview Format | |
| Appendix II Notice of Consent Forms | |
| Appendix III Information and Communication Technology Document | |
| Appendix IV Quality Learning Document | |

INTRODUCTION

Technology, in its various guises, has increasingly come to the forefront of our society. Of these many forms of technology, one in particular has received a great deal of attention: computers. The emphasis on computer technology in society at large is reflected in schools, and we are seeing a mass infusion of electronic equipment promising to bring students into closer contact with the omnipresent Information Age. This thesis is a critical inquiry into the role of computers in elementary schools. The role of, and issues surrounding, computer use in the classroom were investigated through library research, which revealed both the positive and negative aspects of computer technology. These issues were further explored through a series of interviews within a technologically well-equipped elementary school in the Calgary Board of Education. Central to the role of computers in the elementary classroom, in this inquiry, is their use as a means for transformative pedagogy: if computers are going to be ubiquitous, are there ways they can assist transformative learning? This question was investigated through an analysis of the feasibility of introducing Global Learning Networks into the subject school. The study of the benefits and shortcomings of computer use in the elementary school points to a number of technical issues; the inquiry into the use of Global Learning Networks highlights a number of background pedagogical and philosophical issues that enframe computer technology in the classroom. An analysis of computers in elementary classrooms would not be complete without an analysis of both levels of understanding. The objectives of this thesis, then, are twofold: to examine the benefits and drawbacks of extensive computer use in schools, and to explore whether computer technology itself can be used, appropriately and in limited ways, to explore global issues, deepen understanding of cultural diversity, and facilitate the transformative dimension of education through collaborative critical inquiry in the elementary school.

**THE INVESTIGATIVE APPROACH**

As outlined earlier, this thesis is an interpretive study, exploratory in nature and not seeking generalizable findings. It is in part library research and in part empirical research.

**Library Research**

Due to the almost all-encompassing nature of the topic, which spans computers, pedagogy, and world view, it was necessary to include an investigation into the outside influences underpinning and impinging upon computers in schools. This required a rather extensive search, which has resulted in a somewhat lengthy discussion of the literature. The bulk of the library research centres on three main components: Computer Use and Benefits; Computer Misuse and Misgivings; and Global Learning Networks.
Also included, in a minor way, are works focusing on business and politics in education and the influence of these forces on the infusion of computer technology and its application in schools. A summary of teaching foundations, based on documents released by Alberta Education ("Information and Communication Technology") and the Calgary Board of Education ("Quality Learning Document"), is integrated into this discussion to provide a more complete description of the fundamental assumptions that inform current teaching practice in Alberta. Both of these documents have been downloaded from the Internet and are presented in their entirety in Appendix III (Information and Communication Technology) and Appendix IV (Quality Learning).

**Empirical Research**

In order to answer the research question regarding the feasibility of incorporating Global Learning Networks into daily classroom life, a study was carried out with teachers and administrators in a Calgary public school. Perceptions were also sought regarding computer technology in general – what is working, what is not, what the reasons are, and what teachers can do to effect changes where they see fit. A qualitative (as opposed to quantitative) approach was the most suitable in this case due to the open-ended format of this portion of the inquiry. The propriety of a qualitative approach is supported by Marshall and Rossman (1995): "The qualitative approach to research is uniquely suited to uncovering the unexpected and exploring new avenues." (p. 28). Opinions are allowed, valued, and integral to analysis in qualitative studies; however, this is not the case with quantitative research. To distinguish qualitative from quantitative research, Wiersma (1995) provides the following distinction: “Qualitative research...follows the naturalist paradigm [so] that research should be conducted in the natural setting... [which allows] holistic interpretation...Quantitative research has its roots in positivism and is more closely associated with the scientific method than is qualitative research...The emphasis is on facts, relationships, and causes... Quantitative researchers place great value on outcomes and products...” (p. 12-13). This is furthermore made clear by Locke, Spirduso, and Silverman (1993), who remind us that in utilizing qualitative, as opposed to quantitative, research, “…we are not seeking... a cause and effect model of reality.” (p. 99).

**Basis For Interviews**

Approval to proceed with interviews was received from the University of Calgary as well as the Calgary Board of Education. A specific interview format was designed (see Appendix I), but this was primarily used as a guide to direct the flow of conversation. The choice of conversation over structured interviews is substantiated by Marshall and Rossman (1995): “…qualitative, in depth interviews are much more like conversations…the participant’s perspective on the phenomenon of interest should unfold as the participant views it, not as the researcher views it.” (p. 80). This was integral to the study because the beliefs and attitudes of the respondents emerged along with their opinions about how computers *are* used, how they *should* be used, and how they *could* be used.

**Data Analysis and Interpretation**

The interviews were recorded on a tape recorder (a copy of the consent form can be found in Appendix II) and the conversations were later transcribed verbatim.
Direct, anonymous quotes from participants are woven into the Empirical Study (Chapter Three) and the Discussion (Chapter Five) as they relate to the themes that arose through the conversations. Marshall and Rossman (1995) state that, "Qualitative data analysis is a search for general statements about relationships among categories of data." (p. 111). The comments that have been included from the interviewees are intended to present a balanced view, but it must be acknowledged that these form part of a personal interpretation. Wiersma (1995) reminds us that: "All in all, analysis in qualitative research is a process of successive approximations toward an accurate description and interpretation of the phenomenon. The emphasis is on describing the phenomenon in its context, and on that basis, interpreting the data." (p. 216). The interviews revealed underlying pedagogical beliefs, which carry an implicit set of values that in turn inform teaching practice. Despite directives in education (among which are cautions against teaching values), teaching is not [yet] robotic. The teacher acts as a filter, consciously or unconsciously, and underlying values can be revealed not only through what is discussed in the classroom but through what is omitted.

STRUCTURE OF THE THESIS

The first chapter provides background to our present education system in Calgary. It begins with a general commentary about our cultural milieu and the profound changes that we have experienced in the latter part of this century and that we are continuing to experience. This is followed by a brief history of schooling, along with a few words about the purpose of public education. Embedded within this is the impact of politics and the business community on education. Much can be said about these latter two influences; however, each area is a thesis in itself and can only be dealt with superficially within the scope of this study. In order to tie this in with the planet as a whole, cursory mention is made of global education and of what our role in Alberta could or should be in terms of moving our curriculums towards transformative pedagogy. The final section in this chapter contains a summary of the "Information and Communication Technology" document (Alberta Education) and the "Quality Learning Document" (Calgary Board of Education).

Chapter Two presents a review of the literature surrounding computers. It includes a discussion of both the benefits of, and objections to, computer technology.

Chapter Three constitutes a summary of the empirical research. Interviews are condensed into major themes that arose from discussions. These conversations focused on computer technology as it is being used, and how it could be used. Thoughts are also included about perceptions of power and politics in education.

In Chapter Four, an overview of the work of Cummins and Sayers (1995) is provided to demonstrate how Global Learning Networks can be used to promote transformative pedagogy in schools.

The final chapter contains a discussion of the literature and empirical research regarding computer technology, pedagogy, world view, and the Calgary Board of Education.

Adapting to Change: Social Milieu

Thirty-five years ago, as Bob Dylan was singing "...the times they are a-changin'..." to crowds of adoring fans, we were living in optimistic anticipation of the changes that the high-tech "space age" would bring.
While we knew (even without Bob Dylan's lyrics or the cartoon images portrayed by the Jetsons) that the future held great promise for change, it was not possible to fully understand the profound breadth, depth, scope, and rapidity of the transformation that we would witness and participate in during the latter part of the twentieth century.

Global Milieu

These changes -- the magnitude of which will perhaps be almost immeasurable in terms of the immediate and long term impact upon us as individuals and as a global society -- are seemingly catapulting us into the future with such acceleration that even Captain Jean-Luc Picard of Star Trek: The Next Generation would be suffering from jet-lag (or rather starship-lag). Although it may be an exciting ride, we may become so caught up in the experience, particularly within "industrialized" countries where the ride is -- generally speaking -- much more fun, that we may not have time to become fully cognizant of these changes or to critically analyze the potential effects of not only our present speed but also our direction. Many, perhaps most, of these changes have been driven by technological advancements and have had variably positive or deleterious effects on the Earth and/or its inhabitants. It should be noted that in many cases, evaluation of technological outcomes is in the eye of the beholder; for example, foresters see primarily the positive results of clear cut logging while environmentalists see primarily the disastrous consequences of deforestation. Taking into consideration the potential repercussions of technology unbridled, the ability to make informed decisions and critically analyze potential effects is of paramount importance to our future existence. This is particularly important at this time, since the blind optimism of the '60s has become overshadowed by a variety of potential global disasters.

**Education Milieu**

Our education system is, of course, deeply entwined within the various changes taking place on our planet, and as a result, it is undergoing its own sort of overhaul in an attempt to keep abreast of and participate in the most recent scientific developments, in particular the role of computer technology. As classroom teachers, we are faced not only with quickly adapting to changing roles, new initiatives, variations to existing curriculums, new directives, site-based management, school reform, accountability, the "back to basics" movement (if this can be considered a movement), charter schools, Christian fundamentalist schooling, etc.; we must also find the time and energy to consider these new trends from a philosophical point of view. This should be an integral aspect of teaching, but due to time and energy constraints it is almost becoming a luxury. Because it is virtually impossible to deal thoroughly with all of these issues, the tendency may be to accept all of these new directives unquestioningly. When daily dilemmas in education are combined with the myriad issues we now face on the planet (atmospheric changes, depletion of non-renewable resources, desertification, extinction of species, waste management, poverty, unemployment, food shortages, accelerated population growth, etc.), it becomes even more overwhelming. Maintaining the status quo, trusting the "system", which includes believing that "the experts" are the only ones with the power and intelligence to figure it out, and not asking questions of a philosophical nature can become a very appealing alternative.
A BRIEF HISTORY OF TIME

With the foregoing as backdrop, some very serious questions must be articulated concerning the role of the education system and, more specifically, the responsibilities of classroom teachers in preparing our children to survive in our complicated, rapidly changing world. Powerful forces have been driving powerful changes, many of which centre on technological advances. Are we unconsciously being swept into a virtual world and thereby supporting rampant globalization? When questions arise within us, do we have any power or influence at an individual level to question those corporations, political institutions, or bureaucratic organizations that seem to have the greatest control and the most to gain? Is it our responsibility to be advocates for others whom we see being hurt by sweeping policies that are obviously not for the good of all? And how did we get here in the first place?

**How Did We Get Here?**

**In the Beginning...**

Since the very beginning of formalized schooling (the origins of which can be traced back to 3,000 B.C., when the first systems of written communication were developed by the Sumerians and the Egyptians), questions have been raised about the purpose of education. Early methods were not unlike those of the present day, involving instruction that would assist in creating productive citizens skilled in reading, writing, and arithmetic, as well as allowing for the opportunity to learn for personal enjoyment and gratification. Education remained primarily within the domain of the more privileged in society. As time progressed, the world experienced tremendous growth in technological innovations and schooling became more focused on knowledge, particularly within the realm of science. Widespread public education developed during the Industrial Revolution out of the need for an educated workforce. An educated workforce was part of the infrastructure provided at public expense to foster the growth and implementation of technology. Along with the development of instrumental knowledge was "...a liberal and critical dimension that resists subordination to the market and aspires to other -- intellectual, imaginative and aesthetic -- truth and values." (Robins and Webster in Gutstein, 1999; p. 210). The latter form of knowledge was of secondary importance and tolerated as long as industry did not have to pay (ibid.). While scientific thought brought many positive changes to the world, in a roundabout way it also contributed to our current anthropocentric, mechanistic, fragmented view of the world, which still pervades the thinking of many people today.

Generally speaking, a shift in educational thought occurred around the turn of the 20th century. The emphasis moved away from the institution to the child, and the development of the "whole child" began to enter educational vernacular. The intention was to balance the acquisition of skills with personal fulfillment. Value was placed on inner creativity and there was recognition that learning, to some degree, comes from within. This approach has come under attack in recent years, based largely on misconceptions. Current debates about the "who, what, why, and how" of education continue with as much fervour as ever in the past, and even though many researchers have attempted to articulate the "right" approach, we still appear to be some distance from reaching consensus.
Cummins and Sayers (1995), in a comprehensive study of Global Learning Networks, also address the topic of school reform and provide a succinct summary and explanation of the reform debate. Theirs is a balanced view based on the necessity of certain revisions within the realm of education in order that we fully recognize our global interconnectedness, interdependence, and responsibility towards one another and the Earth. Although it is true that the debate is far from resolved, everyone must surely hold one thought in common about the purpose of schooling. As summarized by Cummins and Sayers, "Public schools serve the societies that fund them, and they aim to graduate students with the skills, knowledge, and values necessary to contribute to their societies." (p. 82). Their explanation for the general lack of consensus about how to arrive at that common goal is that we are in the midst of complex and massive changes — on a personal, societal, and global level — and we cannot predict with any certainty what exactly will be required of this generation of youngsters — or of those to come. Cummins and Sayers explore these "behind-the-education-scene" changes on three levels: cultural, economic, and existential.

Cultural Diversity

Within the area of cultural diversity, Cummins and Sayers present the thought that one of the great difficulties we face has to do with conflicting views about whose account of history represents the truth, i.e., whose world view represents reality. Until quite recently, it was generally accepted in North American schools that the Western world represented truth, and all others were subordinated. Minority groups, demanding that their voices be heard, have brought this attitude of dominance and subjugation into question, creating dissonance and defensiveness within major power brokers. One of the results of this, according to Cummins and Sayers, has been a backlash of rhetoric implying that proponents of multiculturalism are "...a serious threat to social cohesion and national unity." (p. 82).

Economic/Political Climate

The second new reality centres on economics. As outlined by the Commission on Global Governance (1995), the global marketplace carries with it a notion of competition. This further trickles down into the education system and its alleged failure to adequately prepare young people for entry into the workforce. The welcoming of agencies such as the Conference Board of Canada into educational discourse attests to our growing preoccupation with promoting skills development as a major purpose of twentieth/twenty-first century schooling. In a 1992 publication, the Conference Board outlined goals for elementary and secondary education as they addressed the issue of "employability skills." A key aspect of this publication is their belief that we should: "...engage business and education in partnerships that foster learning excellence and thus ensure that Canada is competitive and successful in the global economy." (p. 3). This underscores our collective belief in capitalism, competition, and individualism. Many people -- politicians, business people, and, perhaps most notably, parents (our "partners in education") -- have "bought into" this global economic reality, if indeed it is the reality. This has, in turn, placed a great deal of pressure on the education system to respond with curriculums that satisfy the cries for more emphasis on skills required for the workplace.
**Existential Realities**

In discussing existential realities, Cummins and Sayers explain that they are referring to our precarious relationship to both our physical and social environment. They underscore the impact (and fallacy) of former U.S. President George Bush's "new world order." Even with nuclear disarmament and the crumbling of the Berlin Wall, global peace and security for all seems anything but close at hand. "Persistent economic malaise, unemployment, and inner city misery in Western industrialized countries...mock the declaration of a new world order." (p. 83). At a time when we need to be addressing these pressing issues more than ever, they state that the curriculum in most schools has been so sanitized that opportunities for such discussions rarely arise. They reveal that the reason such topics are omitted from standard curriculum derives from a desire to protect the innocence of children, which plays perfectly into the hands of business and politics. "Issues such as racism, environmental pollution, genetic engineering, and the causes of poverty are regarded as too sensitive for fragile and impressionable young minds...such issues invariably implicate power relations in the domestic and international arenas." (p. 115). To gain some understanding of how these attitudes persist in education, it is necessary to explore the instructional and social assumptions that underlie different orientations to pedagogy. Information for this is drawn primarily from the work of Cummins and Sayers.

**PREVAILING PEDAGOGIES**

Cummins and Sayers provide a brief yet succinct explication of three prevailing streams of pedagogical thought that influence curriculum development: traditional, progressive, and transformative. What is different in their analysis is their inclusion of transformative pedagogy.

**Traditional Pedagogy**

They explain that proponents of traditional pedagogy are in favour of a return, essentially, to the "three R's" or "back to basics" (i.e., a re-emphasis on rote memorization embodying phonics skills, spelling skills, and math skills in an isolated sense). This position was born largely of a reaction to a perceived decrease in academic standards, based at least to some extent on misleading or conflicting information regarding test scores and reports from businesses (see Barlow and Robertson, 1995), and of a desire on the part of many parents, in a competitive society, to know the performance level of their children. They mention the work of writers such as E. D. Hirsch (1987), who claim that not only does our society desire a traditional approach to schooling, but so do our children. In response to this mode of thought, Cummins and Sayers cite opposing research (e.g., Sirotnik, 1983; Brophy, 1992) that criticizes this approach for stifling not only creativity but also intellectual reasoning. Traditional pedagogy is based on measurable outcomes of skills — in other words, success is measured through standardized testing. This often leaves little room for demonstrating true understanding of subject matter or for critical thinking. Furthermore, implicit in traditional schooling, according to Cummins and Sayers, is a cultural transmission that omits or marginalizes all but the dominant culture — in other words, it promotes xenophobia, which in turn breeds racial intolerance and bigotry.
In their discussion of the work of Moffett (1989), they include a poignant quote to illustrate the limitations involved in following such restrictive methods in education: "...transmitting any heritage entails selecting some ideas, frameworks, and values and excluding others. Exclusion is built into the very idea of education as cultural transmission...[which in turn]...practically defines ethnocentricity — the failure to identify outside a certain reference group..." (p. 148) In this way, traditional approaches control knowledge, skills, and attitudes, thus "...maintaining identity across generations — ensuring that the next generation thinks like ours." (p. 147). In other words, a main aim of traditional pedagogy is indoctrination. This presents a very narrow frame of reference and one in which society supports the status quo. In addition to maintaining the status quo, there is a deeply embedded attitude within traditional pedagogy that places human beings above all else on the Earth. This anthropocentrism comes at a time when recognition of our interconnectedness with all species is becoming so vital to our continued existence on this planet.

**Progressive Pedagogy**

The second approach discussed by Cummins and Sayers is progressive pedagogy, which involves whole language, process writing, small group cooperative learning, experiential learning, etc. -- in other words, what we see occurring in most public school classrooms in Alberta at the moment. This form of teaching and learning had its beginnings in the early part of this century with John Dewey. Dewey believed that in order for learning to be meaningful to children, it needed to be experientially based. Around the same time, the Russian psychologist Lev Vygotsky was formulating his theories about learning. He not only believed that direct experience was vital to learning, but also that knowledge is created through interpretations of the world based upon past experiences and interactions (Vygotsky, 1978). In other words, knowledge is constructed. The common term for this -- and one which is heard frequently in education -- is "constructivism". Where traditional pedagogy involves the *transmission* of information (which is, in a sense, exclusive and top-down, from the "expert" to the neophyte), progressive pedagogy involves the *creation* of knowledge (which is inclusive: learning is based on one's own direct experience and involvement with the world). Within the Calgary Board of Education (which is legally required to work within the framework established by Alberta Education), our practice is informed by progressive pedagogy but borrows from traditional pedagogy: i.e., our emphasis is primarily on constructivist principles with a recognition of the importance of the development of certain essential skills. Teachers are present to guide children through activities as they progress at their own rate through the learning continuum. The Calgary Board of Education’s Quality Learning Document states that: “Teaching practices designed to engage learners and foster independent thinking will prepare students for an increasingly competitive and complex world that requires different kinds of competencies and attitudes.” (p. 5, Appendix IV) In an academic sense, this is a practical, balanced approach that values each student as a learner and an individual. However, as Cummins and Sayers point out, the multicultural component of progressive pedagogy is limited to celebrating and acknowledging diversity.
In their words, it is "...allied with multicultural education, [but] the focus...is limited to celebrating diversity — [which] does little to challenge inequities of power and status distribution..." (p. 153). Progressive pedagogy recognizes that global change is so rapid that as individuals we are faced with problems that have not occurred before, and that we must develop the knowledge, skills, and attitudes necessary to deal with our changing world. However, this is carried through in a limited sense. We acknowledge that knowledge has a shelf life, but our goal is lifelong learning. What progressive pedagogy excludes is posing the bigger questions, the questions that ask whether these changes are for our collective good. In other words, we are seeking to develop lifelong learning, not necessarily lifelong questioning.

**Transformative Pedagogy**

Transformative pedagogy requires learners to use critical inquiry and to develop skills to analyze social issues at a deep level. Its aims are democratic participation and social action. It is this kind of pedagogy that is essential in effectively using Global Learning Networks. A transformative approach embraces many elements of progressive pedagogy (constructivist principles, higher order thinking, collaborative problem solving, effective communication, etc.) but it takes these objectives one step further. Cummins and Sayers explain that transformative pedagogy provides: "...an explicit focus on social realities that relate to students' experience...founded on principles of democracy and social justice...oriented to...giving [students] the academic and critical literacy tools they will need for full participation." (p. 154-155). Within progressive pedagogy, there are windows of opportunity to achieve these goals, should teachers choose to interpret current thrusts in education in this manner, provided they possess the awareness in the first place. What is different in transformative pedagogy is the explicit preparation for responsible citizenship through social justice, inquiry, and the examination of existing power structures and their impact (positive and negative) on all members of the global society. It also opens up critical inquiry into the role of technology, in particular information technology. Prior to addressing the role that technology (specifically computer technology) and transformative pedagogy may play within global education, it may be helpful to explore some very basic questions surrounding global education -- what it is and why it is important.

GLOBAL EDUCATION: SEEKING A PRELIMINARY DEFINITION

In seeking to provide answers to what comprises global education, we find many commonalities with transformative pedagogy. For example, Choldin (1993) tells us that global education: "...provides an awareness and critical understanding of global issues...[including] protection of human rights, maintenance of peace and security, and preservation of the environment." (p. 28). Implicit within this is an underlying belief that we are globally interconnected and interdependent. Through fostering this kind of thinking, students are provided with the opportunity to take ownership of conditions beyond their immediate surroundings and become empowered to enact the kinds of changes that will move us towards a more peaceful, safe existence. Smith (1992) provides some very clear guidelines concerning the "what" of global education.
She describes learning situations designed to be "...less concerned with the accumulation of vast amounts of information...[with students] examining the different interpretations of reality, detecting bias and recognizing complexity." (p. 36). She is quick to point out pitfalls such as a "tourist" approach, which tends to be oversimplified and superficial and thus to further perpetuate biases and misunderstandings. The tourist approach may be precisely what Cummins and Sayers were referring to within their description of progressive pedagogy — i.e., that we highlight certain areas (e.g., multiculturalism) but generally accept the status quo. Smith includes a description of content that is both global, representing many points of view and as many voices as possible, and connected, emphasizing interrelationships and reciprocal relationships. She describes lesson plans that would promote an awareness and knowledge of global issues (such as human rights, peace, environmental concerns, etc.), as well as an opportunity to critically analyze issues such as domination and exploitation. At the same time, these lessons would allow a dialogue to develop in the classroom that would raise questions and empower students to participate actively in the discovery of solutions. In addition to making clear its goals and objectives, a description of global education also serves to distinguish it from multicultural education. The work of Zachariah (1992), in his explication of development education and multicultural education, also serves to shed light on the differences and similarities between global education and multicultural education. In addition, his work presents options and suggestions for merging the two to maximize the benefits of both perspectives. He explains that a major point of divergence between development education and multicultural education involves the process by which each may be presented. While multicultural education focuses on acceptance of others and their respective cultures, development (and global) education encompasses the development of a deeper understanding of the intricacies involved in transforming the traditional structures that constrain certain classes or groups of people. This does not necessarily exclude "development" issues (such as more equitable distribution of property, power, and money) that are relevant to indigenous groups within our own country, but overall, global education focuses on comprehending power structures and inequities that occur beyond and across international boundaries. As Zachariah furthermore points out, we must exercise caution in our presentation of "fact": our sources may have hidden biases or distortions that misrepresent reality, which could result in deepening the misunderstanding and further alienating those who are "different." In addition to this, we must be very careful not to inadvertently employ an insensitive, paternalistic approach to "helping" others. This kind of thinking is more hurtful than helpful. One example of this is the application of Western terminology to other countries. This is illustrated by an article in *The New Internationalist* (June, 1992), wherein the author unthinkingly remarked that a group of local people (Tepitans) living in Mexico City were "...terribly poor." (p. 7). The response to this from a Tepitano spokesperson was a mixture of offence and anger, and brings to light the question of what poverty means, and whose "reality" of poverty is the appropriate one.
In this case, the label derives from an industrialized country's definition, based solely on economic wealth. To the Tepitano man, poverty had nothing to do with monetary wealth. How global education, transformative pedagogy, and computer technology fit together becomes clearer when one places our education system within the context of the entire globe.

A FINAL COMMENTARY: HOW "WE" FIT WITH "THEM"

To ease our collective consciences, we can find some comfort, real or imagined, in the attempts of the scientific community to seek solutions or alternatives to some of our global problems. These, in turn, have spurred the appearance of a variety of policies, initiatives, and organizations that superficially appear to have placed us well on our way to sustainable growth on the Earth (notwithstanding that today's scientific developments may constitute tomorrow's tragedies). Alongside these formal initiatives, we have seen the rise and/or acceptance of numerous grassroots movements all over the world, to which we are increasingly lending greater credibility as we recognize the simple wisdom inherent within such common sense practices as conservation, preservation, and cooperation. These actions, either in concert with, or in opposition to, science have made us furthermore aware of the enormity and importance of a more inclusive dialogue on sustainability. Thus, as we probe more deeply into these global issues, it becomes quite clear that resolution of these problems involves not only the scientific community specifically (from which we desperately hope -- but cannot help but wonder *if* -- our political leaders and policy makers receive input), but our entire world community in general. The positive impact of grassroots movements provides very real evidence of the potential capacity of "ordinary" people to effect positive change. Interestingly, much of what we are learning -- or perhaps relearning -- stems directly from those communities we have erroneously labelled "primitive," "backwards," or "developing." The knowledge possessed by indigenous peoples represents thousands of years of wisdom and, as we are discovering, must not be ignored. This dialogue must continue to progress amongst scientists and lay people, industrialized nations and poor countries, dominant and/or governing elites and minorities. In order to provide the best possible opportunity to alleviate the tremendous stresses we have placed on the Earth, the integration of all fields of study will become necessary. It has become abundantly clear that we must regard the world as a unified whole comprised of interconnected, interdependent parts. "Our" issues in North America are "their" issues in Asia. Acid rain from Europe has appeared in the food chain in the Arctic. Decimation of the rain forests has contributed to the atmospheric changes which extend across the globe. We have disrupted a delicate, complex ecosystem that can no longer withstand our destructive bent. Whether we want to believe it or not, we are collectively involved in the changing world one way or another, either by being active participants (which includes a range of activity from direct involvement in the development of new technologies, policies, or directives to silent consent through consumerism and acceptance), or by being passive onlookers (who may have some objections but feel too powerless or apathetic to have their voices heard). Stated another way: "if we're not part of the solution, we're part of the problem".
Of course, if we happen to be amongst those who fall into the latter category, we can console ourselves with the knowledge that there are other advocates for the less privileged, others who have more power, money, time, etc. to direct towards the variety of causes that span the globe. Even those who live in very privileged circumstances, either as individuals or as a larger society, cannot be expected to solve all the world's problems, because, after all, there are others (i.e., experts) who are much better qualified to analyze, interpret, and act upon these problems. It is not that we may be unwilling to inject whatever time, effort, and resources we can into the preservation of the planet, but rather that we become paralyzed by the incomprehensibility of these vast global issues. The task that the education system faces is indeed significant. It is incumbent upon us as teachers to assume a leadership role in promoting an atmosphere within our classrooms that will be conducive to the development not only of the knowledge necessary for survival in the twenty-first century, but also of the skills necessary for working cooperatively (planet-wide) to solve these complicated problems and heighten our awareness of our true place in nature.

**ADAPTING TO CHANGE: ALBERTA'S RESPONSE**

"...events in Alberta are...part and parcel of a host of world-wide economic, political, technological, and other changes generally called globalization...[this] assumes that competitive advantage in the global economy goes to the country with the best-educated workforce." (Harrison and Kachur, 1999, p. xvi-xvii).

The Alberta government, in 1995, stated that the purpose of public education is to "...develop critical thinkers who are self reliant, responsible, contributing members of society." In response to what Denis Herard (1996), MLA for Calgary-Egmont, described as the wishes of Albertans, our provincial government, in conjunction with Alberta Education, has planned to make Alberta a national leader in technology integration. This is echoed in the words of the former Education Minister Halvar Jonson (March, 1996), who announced that five million dollars would be allocated in 1996 to give every school in Alberta access to network services, with an additional forty million dollars to be invested in technology over the next three years. To quote his words, "Technology has the potential to improve student learning, improve access to learning resources and improve teaching in our schools. The funding will mean more and better computers for students and access to network services for every school in this province." (1996, p. 2). It is evident, then, that technology, specifically computer technology, has become a central focus and theme in education in this province. Embedded within this is a progressive pedagogy with some emphasis on skills development that will enable a perpetuation of the status quo: we are living in an individualistic, competitive, capitalist-based society.

**Alberta Education: Information and Communication Technology**

Our education system is undergoing its own sort of overhaul in an attempt to keep abreast of and participate in the most recent scientific developments, in particular the role of computer technology. Alberta Education’s (1995) mandate is: "Education is responsible for ensuring that all students have the opportunity to acquire the knowledge, skills, and attitudes needed to be self reliant, responsible, caring, and contributing members of society."
The response of Alberta Education has been to mandate the development of computer skills into the existing Program of Studies. The "Information and Communication Technology" document has undergone a number of changes over the past several years and is now available in its final format on the Internet. The most recent hard copy that was available in our high-tech school was dated 1997. It is in this copy that we find the background, underlying principles, and framework overview. (These are not available on the Internet site.) What follows is a summary of the background to the current "Information and Communication Technology" component in Alberta curriculums (see Appendix III).

**Background**

According to Alberta Education, the primary goal of including computer technology in schools is to develop: "...the knowledge, skills, and attitudes that will serve [students] well for entry-level work, for further study and for lifelong learning, and that will help them become inquisitive, reflective, discerning and caring persons." (p. 1). To develop the specific technology learning outcomes necessary to achieve these aims, Alberta Education consulted numerous parents, teachers, community members, and employers while also conducting a review of technology curriculums from around the world. Through this process, a plan was developed that addresses not only current programs of study, but also anticipates what students will need in order to adapt to changing technologies and the changing world (p. 2).

**Underlying Principles**

Alberta Education refers to the underlying principles of this document as being specific to information, communication, and multimedia technologies. These are based on skills development (with a progression from the simple to the more complex), and are to be embedded within existing programs (language arts, mathematics, science, social studies, and career and technology studies).

**Framework Overview**

Technology learning outcomes are organized into three main categories. The first, "Foundational Operations, Knowledge, and Concepts", has to do with understanding technology, ergonomics, skills, and the moral/ethical use of technology. The second category, "Processes for Productivity", focuses essentially on skills (e.g., keyboarding, data organization, multimedia composition, etc.). The third section, "Communicating, Inquiring, Decision Making, and Problem Solving", involves such aspects as information retrieval, critical assessment of information, and problem solving. As such, the technology learning outcomes fit very snugly within progressive pedagogy.

**Calgary Board of Education: Quality Learning Document**

In consultation with parents, students, school staff, and the community, the Calgary Board of Education developed the following Statement of Purpose over the 1995-1996 school year: "The Board acts as an advocate for every student to have an equal opportunity to become a competent, productive and self directed citizen. The Board acts as an advocate for every school to have the best resources to assist all its students to be the best they can be." (p. 3, Quality Learning Document, 1999). The Quality Learning Document provides the basis for what is to occur in Calgary classrooms. It is organized into five broad areas or understandings with related conditions (to be created by teachers) and indicators (examples of behaviours to be exhibited by students) that lead to specific learner outcomes (see Appendix IV).
Briefly, these five understandings state that:

- Learning requires purposeful involvement (i.e., students are engaged in learning)
- Knowledge is constructed within a climate of inquiry (including metacognition and building connections)
- Clear expectations and relevant feedback are needed (standards of achievement are clearly articulated)
- Interpersonal relationships are crucial to the learning process (in a spirit of empathizing with others)
- Diversity is valued within a responsive environment (respecting others' rights, sharing beliefs)

Specific achievement outcomes in curricular areas are mandated by Alberta Education. The Calgary Board of Education has incorporated and expanded this mandate to include "Significant Learning Outcomes", which are designed to assist in the development of:

- Responsible citizens (includes valuing their own culture and the culture of others)
- Self-directed learners (self-confident, lifelong learners)
- Effective communicators (along with demonstrating competence in numeracy and in scientific, computer, visual, and media literacy)
- Collaborative team players (aware of, appreciate, and accept cultural and personal differences)
- Critical/Creative thinkers (access, analyze, and synthesize information)

We see that this constitutes a climate for progressive education. There could be opportunities for transformative pedagogy, but these are not brought to the forefront, nor are they made explicit. The purpose of this chapter has been to set the stage for teaching and learning in Alberta schools based on a multidimensional perspective of our changing world. The introduction of computers into elementary classrooms appears driven by a technological imperative, similar to their adoption into society at large. Application of new technology is a given. Critical analysis often appears as an afterthought. The following chapter will provide a broad survey of research into computer technology and its potential as a helpful or harmful tool.

CHAPTER TWO
COMPUTERS: BENEFITS, USES, MISUSES, AND MISGIVINGS

This chapter is divided into two broad sections: the positive aspects of computer use in the classroom will be discussed first, followed by some of the negative aspects. Since these issues involve the use of computers in general with respect to classrooms, they form a backdrop that frames Global Learning Networks.

COMPUTER USE: BENEFITS AND USES

This section is divided into three key areas that generally encompass research surrounding computer use: Motivation, Cognition, and Software. The latter topic is further subdivided into two basic types of software: skills-oriented programs and interactive programs. Comments involving the Internet are included in this latter area. It is difficult at times to separate these topics because they are interrelated and overlapping; however, for ease of readability they have been loosely placed into these categories based on the primary focus of the original study.

Motivation and Cooperation

A crucial aspect of learning is motivation. Committed teachers and educators are ever vigilant for new ways to capture and heighten curiosity in any given subject area. The claim is often heard that computers offer an ideal medium: not only will students learn the required curriculum, they will be excited about using it. Students are furthermore offered the opportunity to work collaboratively, which can enhance their eagerness to undertake new projects.
Computers in and of themselves cannot be the sole factor that influences motivation and learning – at least not for any prolonged time. High-quality software is, of course, a vital link in combatting the novelty effect. As is the case with every topic connected to computer technology, the research to support the motivational powers of computers is immense. Computers have been found to enhance motivation on several different levels. Hay (1997) states that the use of "...computers as a motivational tool for students is obvious. If the students can use the computer, the activity immediately becomes less tedious and more interesting" (p. 68). This statement seems to hold particularly true when computers are used in a collaborative setting. Perlmutter, Behrend, Kuo, and Muller (1989) report that peer interactions influenced both children's motivation and learning. When children worked in a cooperative situation, improved retention and increased focus were noted. This latter effect (i.e., on-task behaviour) was also noted in the research of Shade, Nida, Lipinski, and Watson (1986), Capper (1988), Dwyer, Ringstaff, and Sandholtz (1993), Clark (1992), and Stein (1987). Nastasi and Clements (1993) believe that the cooperative element in the use of LOGO positively impacts motivation. Computer-based instruction is believed to allow children to become directly involved in their learning without the embarrassment that sometimes accompanies making mistakes (Apostolides, 1987). Children also receive immediate feedback and gain control over their own learning. The interactive nature and visual appeal are cited as highly motivating to students. Yang (1991-1992) found that highly interactive simulation software enhances intrinsic motivation. Sivin-Kachala and Bialo (1995) report that technology is a catalyst for improved student achievement, self-esteem, and teacher and student interactions in a learning environment that is open, flexible, and student-centred. Peck and Hughes (1997), observing computer projects involving grade one reading and language arts, found that students contributed to group inquiry and became directly engaged in assigned activities. A study by Bergin, Ford, and Hess (1993) found that kindergarten children worked best when they had set a common goal, leading them to conclude that the social and motivational effects of using computers are very positive. Because the software used in the study was primarily skill-oriented, they suggest that these findings may not generalize to other applications. Ryser, Beeler, and McKenzie (1995) explored eighth graders' self-concept and motivation within a computer-based constructivist environment (i.e., one that would enhance the development of learning from the "inside out," allowing children to actively construct their own meaning as opposed to being passive recipients of knowledge). After comparing two groups, one of which was involved in a computer-based learning situation and one of which was not, they found that students involved in the computer learning group had higher self-regard. They conclude that constructivist learning situations using technology improve students' motivation. Computer-based technology has been found to foster cooperation amongst heterogeneous ability groups in elementary and junior high school students (MacInnes and Kissoon-Singh, 1996). Tierney et al. (1992) found that students become independent and collaborative problem solvers, communicators, record keepers, and learners when using computer technology.
Repman (1993) found a positive correlation between collaborative computer-based learning and academic outcomes. A more complete summary of research in this area can be found through Interactive Educational Systems Design: an analysis of one hundred and seventy-six studies led to the conclusion that computer technology improves motivation, achievement, and attitudes towards learning (1996, in Calgary Board of Education Draft Technology Plan).

Cognition

With the recognition that learning involves a great deal more than rote memorization and drill for skills, much attention has become focused on classroom learning activities that permit, promote, and provide experiences that flow with constructivist principles. In reviewing elementary schools, Means and Olson (1995) discovered that in classrooms where technology was supporting constructivist goals, both students and teachers were highly motivated and more likely to engage in deeper thinking about their activities. Mitchel Resnick, in his extensive research at the Massachusetts Institute of Technology (MIT), has long held the belief that advanced skills (such as reasoning, analysis, and synthesis) are acquired not through the transmission of fact but through the learner's interaction with the material, and that computers are an integral tool in the integration of curriculum content in schools. Rushkoff (1996) believes that traditional teaching methods are based on linear, structured, cause-effect formats. He points out that young people today have pushed through the boundaries of old value systems (representing order) and are accustomed to much more divergent thinking, as is available on the Internet or from hypermedia. Healy (1998), in her discussion of child development, talks of the value of computer technology in providing "...cognitive ramps from the concrete to the abstract..." (p. 273), provided that children are at an appropriate level of maturity and that the technology is well designed. Papert (1980) believes that computers can speed the process of cognitive development, permitting the transition from concrete to abstract thinking at an earlier age. He furthermore thinks that computer simulations allow children to work with much more sophisticated concepts than would otherwise be possible. LOGO, a programming language for teaching children about computers that had its beginnings with researchers at MIT (in particular Seymour Papert) in the 1960s, lays claim to being an avenue for explorative learning. Forester (1987) explains the logic behind LOGO: children learn to manipulate a "turtle" (a drawing device), giving them a sense of power over the machine, while at the same time developing intellectual structures that are based on mathematical concepts. In this way, computers can change the way that children learn in general. Walsh (1994), in his extensive literature review of LOGO's contributions to learning, cites numerous studies (supported furthermore by his own teacher-researcher findings) that assert its effectiveness in problem solving and higher order thinking across a variety of subject areas, including mathematics, geometry, science, language learning, and computer programming. Transfer of learning and positive interactions with peers are also reportedly positive effects of LOGO. Several challenges, particularly within the area of problem solving skills, are chronicled in this particular review, but in consideration of all the evidence, he is convinced that the reported benefits of LOGO far outweigh the objections.
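The turtle idiom Forester describes is easy to illustrate. The sketch below uses Python's standard `turtle` module, which is modelled directly on LOGO's turtle graphics, as a stand-in; the square-drawing exercise is a hypothetical example of my own, not one drawn from Forester or Walsh.

```python
# A minimal sketch of the LOGO turtle idiom, using Python's standard
# "turtle" module (itself modelled on LOGO) as a stand-in.
import turtle

t = turtle.Turtle()

# Drawing a square embeds the mathematical ideas Forester describes:
# the child discovers the relationship between repetition, distance,
# and angle (four sides, four 90-degree turns).
for _ in range(4):
    t.forward(100)  # move 100 units in the current heading
    t.right(90)     # turn 90 degrees clockwise

turtle.done()  # keep the drawing window open
```

The child controls the result directly: changing the angle to 120 degrees and the repetitions to three yields a triangle, which is the kind of self-directed geometric discovery the reviews above report.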
He does recognize the importance of further research, and of the need for competent guidance on the part of teachers. Nastasi and Clements (1993) report that LOGO affects higher order thinking due to the motivational and social processes inherent within collaborative computer learning. Clements and Meredith (1993) review LOGO research, finding that it is an effective tool for thinking and learning.

Fletcher-Flinn and Suddendorf (1996) tested Papert's theory that computers could change the way children think. They examined the relationship between computer use and the development of metacognitive abilities in preschool children and concluded that, through specific tasks, computers hasten the development of representational ability, which in turn creates maturity in social encounters. They presented questions to children such as, "Do you know what the weather will be like tomorrow? When will you know? Have you known for a long time, or did you just learn it today?" They discovered that children who had computers at home had a greater ability to dissociate past from present knowledge, to provide information about the future (e.g., they could say what the weather would be like the next day), and to hold a theory of mind.

Others have explored the place of computers within the domain of cognitive theory. Knight and Knight (1995), in their examination of computers in the primary classroom, acknowledge that children need to develop skills to enable development of critical analysis, problem solving, and reflection. Their belief is that computers can have a valuable role in teaching children to think. The use of computers as a means to develop critical thinking skills is reflected also in the work of Farah (1996), who underscores the necessity of applying these skills to evaluate the quality and integrity of the vast amount of information that we are currently presented with on a daily basis. Database instruction has been found to promote critical thinking (Ehman, 1992), higher order thinking (Ennis, 1993), and improved performance in problem solving (White, 1987; Casey, 1997; Norton and Resta, 1986).

McNeil and Nelson (1991) summarized sixty-three studies investigating the cognitive achievement effects of interactive multimedia instruction. All of these studies found significant, positive learning effects. The role of the teacher was of particular importance to these successes, and it was noted that teacher training was crucial. An exploration of studies researching the effects of computer programming on cognitive outcomes revealed that students who had been taught programming scored higher on cognitive tests than students who had not. Students using word processing were also found to demonstrate higher levels of achievement than equivalent students writing without word processing (Sivin-Kachala and Bialo, 1995). Owston and Wideman (1997) concluded from their research that students using word processors produced writing of a higher quality and quantity than students who did not have high access to computers. This was also found by Nichols (1996), who initiated a comparative study of creative writing in elementary school.

A belief is emerging that computers have the potential to cater to a variety of learning styles because of their versatility in offering a variety of media through which children can learn and express themselves, along with providing the flexibility to give children choices other than pen and pencil if they have difficulty with motor coordination at an early age (Coghill and Wideman, 1996).
Analytic learners may experience higher success from open ended software, whereas non analytic learners may benefit more from tutorial, drill, and practice software (Post, 1987; Macgregor, Shapiro, and Niemiac, 1988). Lee and Lehman (1993) found that passive learners improved significantly through the use of instructional feedback cueing from computer programs. Chisholm (1995), in a study to determine equity and diversity amongst computer users, found that computer activities tapped into a variety of learning styles. For example, auditory learners could use sound effects as well as speak, record, and play back their information; kinesthetic learners, in the freedom permitted by technology, could move between computers, manipulate the mouse, or touch the monitor; visual learners had exposure to both pictures and text; and students could have the choice of working alone or in small groups.

Based on early childhood principles that learning occurs most meaningfully when children have direct contact with their environment, Resnick (1998) and his associates at MIT present a '90s high-tech version of working with manipulatives. Computational capabilities can be embedded into toys such as blocks, beads, and balls. These new "digital manipulatives" (i.e., robotics) can be designed by children, purportedly enabling them to learn concepts that previously were considered too advanced for them. He terms this blend of constructivism and construction "constructionism".

**COMPUTER SOFTWARE**

In broad terms, the preceding constitutes a general discussion of the alleged role of computers in cognitive development. The role of software is intricately tied into any discussion of computers, but immediately following is a review of some of the research that specifically addresses the benefits of skills-oriented software. This is then followed by an overview of the alleged benefits of interactive technologies.

**Skills Development**

Computer programs have long been used for purposes of remedial work and skills practice, with everything from reports of glowing successes to abysmal failures. The current trend undoubtedly involves the design of software that promotes higher order thinking (i.e., the ability to analyze and synthesize information) and problem solving skills, but straight drill-for-skill tutorial programs or CAI (computer assisted instruction) are still available, and those who use them report good success. A major advantage in utilizing such programs is the individualized instruction they allow. These systems provide an integrated hardware/software approach related to basic skills learning that is based on behaviourist theory. Mastery levels are set and students do not progress without first achieving the benchmarks. Success is, of course, limited by the software (i.e., there is immediate access to data, provided that the right kinds of questions are asked).

Navassardian, Marinov, and Pavlova (1995) claim to have discovered a specific program (MIKROKURS) that produced very positive results. In their comparison of traditionally trained and computer trained students, they found greater success (i.e., correct responses) amongst the latter group. While they concluded that "...application of the computer in properly selected points of the lesson course could provide the integration of the useful features of both the human and computer," (p. 120), they also admitted that the teacher is a vital component in the overall scheme of computer use.
They stated that, although computers can offer a variety of learning information, they are not capable of creating a variety of approaches to learning in the way that effective teachers are. The importance of teacher involvement was also underscored by Graham (1995). Similar discoveries were made by Wiburg (1995), who undertook a survey of literature regarding computer technology. Her work focused specifically on integrated learning systems (ILS). Wiburg's findings agree with those of Navassardian et al. that computers can provide superior individualized instruction, but that teacher involvement is essential to the success of computer assisted learning. It was also noted that ILS work better for students in the upper distribution of the class. Wiburg believes that this is probably due to the ability of students at this level to monitor their own learning and more readily resolve any difficulties they may encounter along the way.

Richey (1994) found ILS useful in improving achievement amongst underachieving urban students. This is also reflected in the work of Miller (1997). ILS software designed to improve math scores has been found to be successful (Alifrangis, 1990; Gilman, 1991; Clariana, 1994; Brush, 1997). The most common use of math programs is to improve basic skills; however, a study by Clariana (1996) found that ILS software could also have a positive effect on mathematics concept scores.

Tutorial software that offers good variety appears to positively impact learning (Kolich, 1991). In a study investigating vocabulary enhancement, it was found that programs using only definitional information would not be as effective as those offering a wider variety of methods of presenting material for vocabulary development. (These findings would also surely hold true for language learning activities that are presented in "traditional" ways: purely rote activities are unlikely to engage students and are more likely to contribute to diminished interest and concentration levels.) "Intelligent tutor" is a type of software that apparently "learns" from the mistakes that the user makes and subsequently branches to the appropriate instructional strategy. Barker and Torgesen (1995) discovered that a computer program to enhance phonological awareness in six year olds who were struggling with reading was highly successful when compared to a control group that used non language computer activities.

Tutorial programs that enhance second language acquisition are perhaps one of the most successful uses of computers. Shenouda and Wolfe (1996) mention several factors that contribute to the success of language-acquisition software utilized in a language laboratory (including English as a second language, Spanish, and French): students work at their own pace; they focus on their own problem areas; they have access to instant feedback; they are unaffected by others' perceptions or progress (i.e., computers have built in privacy); the added hours of lab work fill in instructional gaps (limited hours of classroom instruction); scheduling is flexible; introduction to computers for older students requires minimal computer skills; and for younger students at ease with computers it is a familiar tool. These are, of course, valid indications of the general worth of language programs, but their preliminary analysis has demonstrated that results were statistically insignificant.
They speculate that this could in part be due to a relatively small study population. Wang and Garigliano (1993) discovered that using a tutorial program to teach first year Chinese was highly successful when based on ongoing monitoring of student progress. Chisholm (1995) reports that ESL children could benefit from computer programs provided that teachers were on hand to guide their learning. The above constitutes a very thin section of the research on computer software. Although there have been many changes in software over the years, recent studies seem to support earlier works (i.e., many positive educational benefits are reported).

**Interactive Technologies: Multimedia, Hypertext, and the Internet**

Over the past few decades, as computers have become costly household items, we have been part of a corollary vocabulary change. Some of these words are entirely new to the English language (e.g., CD ROM), some are adaptations of an existing word used to describe a completely different object (e.g., mouse), and some are quite familiar to us (e.g., text). It would be no small task to create a database containing this new terminology, but to list a few, we have: floppy disks, databases, files, menus, computer programs, scanners, e-mail, Internet, world wide web, digitizing, spreadsheets, software, hardware, LCD panels, multimedia, hypertext, hypermedia, etc. Some of these terms have almost become obsolete (for example, floppy disks are usually referred to now as simply disks), and others appear to be withstanding the test of time.

Prior to scanning the research any further, there are several terms that may be in need of clarification: multimedia, hypertext, and hypermedia. "Multimedia" use involves utilizing computers in an interactive way to store and present text, graphics, photographs, video, and sound. "Hypertext" refers to files containing information that are organized in a nonsequential manner (i.e., the reader can explore the content in whatever order desired). Units of information are connected by "links". Links hold "chunks" that may be part of the content, or they may be a separate topic. This allows for connections and associations between related units of information (a schematic sketch of such a link structure follows at the end of this passage). This is the critical point of departure from other computer software. Of all the technological innovations, hypertext is thought to most closely mirror the way humans think (i.e., our thought patterns are multifaceted and multidimensional). The human mind operates by association. As one thought arises, it searches for previous knowledge or creates new neural pathways to accommodate new experiences. This nonlinear aspect makes hypertext highly appealing to teachers and educators. The term "hypermedia" involves software designed to facilitate the use of computers with videotapes, compact disks, scanners, and other equipment to create animations, graphic images, or "stacks" of information that can be assembled in various ways to help explore topics, develop thinking skills, and solve problems rather than just memorizing blocks of knowledge. (Source: Technology Learning Outcomes Document, 1997)

Tierney et al. (1999), in their ongoing research with hypermedia, have found that linear, sequential learning activities prompt students to view reading and writing "regular" texts passively, focusing on the recall of ideas rather than higher level thinking such as exploration, generation, extension, and reconsideration of thoughts. Due to built-in nonlinear semantic networks, they believe that hypermedia can remedy this occurrence.
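To make the nonsequential, linked organization described above concrete, the following is a minimal sketch in Python of hypertext as "chunks" of information joined by "links". It is purely illustrative: the `Node` class, its methods, and the sample topics are invented for this example and are not drawn from any of the works cited.

```python
# Illustrative only: hypertext modelled as "chunks" connected by "links",
# so a reader may traverse the content in any order rather than sequentially.
class Node:
    """One unit ("chunk") of information."""
    def __init__(self, title, text):
        self.title = title
        self.text = text
        self.links = []  # connections ("links") to related units

    def link_to(self, other):
        self.links.append(other)

# Three related chunks, using the terms defined in this section.
multimedia = Node("Multimedia", "Interactive text, graphics, video, and sound.")
hypertext = Node("Hypertext", "Information organized nonsequentially.")
hypermedia = Node("Hypermedia", "Hypertext combined with multiple media.")

multimedia.link_to(hypertext)
hypertext.link_to(hypermedia)
hypermedia.link_to(multimedia)  # associations can loop back

# A reader's path is a free traversal of the links, not a fixed sequence.
current = multimedia
for _ in range(3):
    print(f"Reading '{current.title}': {current.text}")
    current = current.links[0]  # follow whichever link the reader chooses
```

Nothing in this structure dictates a single reading order, which is precisely the point of departure from conventional, sequential software noted above.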
As opposed to ILS, interactive technologies (such as CD-ROMs with hypertext capabilities) permit the user some control over the sequencing of instruction, provided that the software is carefully selected, since programs tend to be very expensive and the quality can be poor. Simulation software that allows students to manipulate simulated environments and use the available graphics to create new planets, develop unique ecosystems, design vehicles, explore the forces of nature, or plan cities from the ground up assists students in guided discovery and hypothesis testing (Collis and Stanchev, 1992). Brown and Vockell (1996) promote the potential for computers to achieve several functions that would otherwise be limited. In addition, they point out that computers can provide immediate access to information, thereby removing the possible frustration that comes with delaying or postponing "…a logical line of reasoning…" (p. 98). Their enthusiasm for technology almost leaps off the page as they add that multimedia work stations are "…punctuated with maps, original voice clips, and other visual images that create vivid impressions…" (p. 104). Whalley (1995) is another amongst those who believe that multimedia technology has the potential to significantly impact teaching and learning. Whalley's work centres around the exploration of multimedia and its potential to pique imagination and spur "What if?" kinds of questions. Others (e.g., Yager, Blunck, and Nelson, 1993; Wills, 1994; Sharon, 1995), also drawing on constructivist understandings, indicate that there are many positive benefits to multimedia software, asserting that educational software must go beyond mere presentation of information to actively involve and engage students at a deeper level. Findings by Riddle (1995) suggest that multimedia tools can enhance idea development and individual expression by adding greater description and unique perspectives.

Rieber (1990) studied the effects of animated presentations of Newton's Laws of Motion on grade four and five students. He generally concluded that this brought some very positive results in the students' ability to visualize motion and that the benefit appears transferable to other learning tasks. He also maintains that more research is needed to clarify the role and nature of animated instruction, particularly in other subject areas.

Cronin, Feldman, and Prewitt (1992) submit a very enthusiastic summary of the creation of a high school video yearbook using multimedia (with the help of two media specialists). Not only did students thoroughly enjoy participating in this activity, claiming it to be a "...student's dream...bonding of students and computers is immediate..." (p. 282), they reported that they were required to utilize higher order thinking skills, synthesize information, develop advanced communication skills, and work cooperatively.

Electronic books are touted as holding great promise for superior learning opportunities (i.e., superior retention, superior and/or quicker understandings, superior involvement, etc.). Oliver and Oliver (1996), generally in agreement with these findings, point out that these processes cannot occur with great efficiency without appropriate instruction first. In an electronic encyclopedia search about Jersey cows performed with a class of twelve year old students, they discovered that several factors influenced the students' relative success.
Students with previous computer experience were able to experiment more freely and discover more shortcuts, thereby achieving more sophisticated work, while others tended to use limited strategies, whether or not these were efficient. The implication here is: students who have access to computers outside of school are at somewhat of an advantage, while those not having access are at somewhat of a disadvantage (which may, paradoxically, present a very strong argument in favour of having computers in schools). Matthew (1997) claims that electronic books capture the attention of students and at the same time stimulate their imaginations. She found, however, no differences in comprehension between a group of children using CD ROMs and those using print versions of stories.

**The Internet in Education**

The Internet is a world wide network of computers that are organized in such a way that access to information located in libraries and at computer workstations around the world is available to anyone with the "proper connections". Wilson (1995) speaks of the possibilities of the Internet as being almost limitless (in almost utopian tones), with the potential for everything from receiving an online business course diploma to students working with real life scientific data and communicating with students in other cultures. He estimates that there are some twenty million users worldwide, with membership (at the time of his study) expanding at a rate of ten per cent per month (p. 86). Moreover, he informs us that great growth will come among students from kindergarten to high school. Eagen High School, the main subject of his journey on the information highway, has the potential to publish a wide range of information on a home page, including lunch menus, sports team schedules, course offerings, curriculum projects, and a video tour of the school, and it could even link to such things as professional teacher associations, universities, etc. Interestingly, he mentions that thirty five per cent of United States public schools have access to the Internet, but only three per cent of classrooms are connected (p. 89), citing funding as the greatest barrier.

Perhaps Edwards (1995) has an easy solution to the high costs associated with computers. He speculates that the Internet could prove to be practical (and cheaper) schooling for everyone and predicts that this form of learning is the way of the future; according to him, it may only be a matter of time before the funding machine figures this out. He foresees high schools as having at least one computer lab in the school and one away from the school. A high school education could then be received through a computer. An added advantage would be that "...disobedient, unmotivated students..." (p. 69) could be removed from the school to the computer lab off site, thus no longer bothering those who want to learn. Although his work is purely opinion, it is worth mentioning because it is an indication of how some (perhaps many) people would shape the education system.

Seltzer (1995) discusses the use of the Web to celebrate talent and honour achievement. Hiltz, Johnson, and Turoff (1986) found that on-line communication produced more interaction and involved more exchanges between students than face to face interaction. They attributed this to the lack of human barriers such as criticism, demands for expediency of response, etc.
To place this within the context of our own home territory, Alberta Education's publication, *In Focus*, offers some examples of how multimedia and the Internet are being used in Alberta schools. Many schools now have a home page, permitting communication with other schools in Canada as well as outside the country. A number of schools are partnered with schools in other countries, permitting joint research projects and shared learning experiences. Distance education projects also abound, offering correspondence courses in subjects such as Math 30 and Physics 30. There are numerous testimonials from teachers in several districts praising the wonders and successes of technology.

Although this is by no means an exhaustive study of the vast amount of information on computers, it does serve to highlight some of the more salient aspects of computers in the classroom. The remainder of this chapter deals with some of the issues that have arisen as a result of numerous complaints, injuries, questions, and doubts surrounding computer use. This is based primarily on two works: *Failure to Connect: How Computers Affect Our Children's Minds* (Healy, 1998), and *The Child and the Machine* (Armstrong and Casement, 1998). Also referred to at some length is the work of Sanders (1994). Although much of the latter focuses primarily on the negative effects of television, it is included in this section because of the close relationship between television and computers: they are both electronic devices that are changing the way we perceive the world. When other ideas are included, they are separately cited and acknowledged. In addition to the topics discussed in the first part of the chapter, this final segment will explore other issues including: Child Development, Social Implications, Health Concerns, Financial Costs, and the Information Age.

**Child Development**

Something that seems rarely taken into account when dealing with technological tools (including television) is child development. Software is now available for children at eighteen months of age, and many parents will undoubtedly believe that their babies will be given some sort of lead over other babies if they hurry out and buy it. Claims made by advertising agencies unfortunately do not come with supporting scientific evidence, and many well intentioned parents fall prey to successful marketing ploys at the expense of more meaningful learning experiences for their children.

**Principles of Brain Growth**

Healy isolates six principles of brain growth, which are worthy of mention. First, it is widely acknowledged by psychologists and physicians alike that, along with certain genetic predispositions, children are also shaped by their surroundings. When children's brains are developing, cognitive structures are formed according to how they receive "training". If children are provided with experiences that encourage divergent thinking, they can become creative problem solvers. If, on the other hand, they are provided with situations that require yes-no answers or only one right answer, their chances of developing higher-order thinking skills are somewhat reduced. When you place this within the context of much of the computer software available on the market today, adults may need to exercise whatever higher level thinking skills they possess and pose some serious questions to software developers.
Second, she discusses "critical periods", or developmental stages that require certain kinds of stimuli to trigger the growth of cognitive structures. An example of this is language receptivity. Young children are "programmed" to speak, but they need other human beings to help them become articulate. Too much electronic media introduced too soon may interfere with some of these natural processes because of the two dimensionality of computers. What is necessary for normal development is direct interaction in a three dimensional world.

Her third point centres around hemispheric integration. Neural pathways are created based on new experiences. When these new experiences involve accessing both hemispheres of the brain, new and necessary connections are made. Many computer programs activate the right hemisphere of the brain. What has been discovered through brain research is that the right hemisphere is also responsible for the "negative" emotions: sadness, lethargy, depression. What is not known is whether activities (like computer games) that access the right hemisphere can also trigger sadness or depression.

A fourth consideration in brain development is the importance of emotional centres in learning. These centres are located in the brainstem, an area that may not be stimulated through computer use.

Fifth, direct human interaction is absolutely necessary for language development. Not only do children mimic the sounds that adults make, they need to see facial expressions and they need to be able to ask questions.

The last point that Healy makes concerning brain development has to do with constructing new knowledge based on prior experience. As children develop new understandings, they are creating mental patterns that they will continue to expand upon and refer to throughout their lives. She believes that most technology works in opposition to this process. The ability to perform abstract thought can only occur after children have had direct experiences with their environments. She cites the work of Thelen (1995), whose research led to the conclusion that higher level cognition is dependent upon full sensory integration.

Armstrong and Casement report similar findings about child development. In their exploration of the work of Butterworth (1997), they found confirmation that sensory experiences stimulate the minds of babies. This phenomenon does not suddenly stop when children enter school or reach a particular age. Teachers need to be mindful of this matter before they forfeit sensory experiences for those that require abstract thought. The computer must not be ruled out as a potential culprit in detracting from real life. In response to claims that computer graphics can reduce the frustration that children experience by providing them with a tool that can more accurately record complex ideas, they point out that tactile manipulation of art materials is essential to the very act of producing art. In addition to that, they remind us that self confidence is based on successes that come from within ourselves.

**The Importance of Direct Experience**

Further proof of the importance of sensorimotor experiences comes from the work of Miller and Malamed (1989), who demonstrate that academic achievement is directly related to physical experiences not only in five year olds but in children up to thirteen years of age. Armstrong and Casement point out that keyboarding is often a primary focus in schools.
Most children do not have the required eye-hand coordination until they are eight or nine years old. When Oppenheimer's (1997) research led him to a class of bilingual special education students from grades two to four, he discovered that the computer lab was a bustling, busy place. What he witnessed in this math lesson was: a child counting loudly to herself (above the noise of the other students), another with a piece of paper nearby to keep track of the math drill sequence, and several others who were using their fingers to count. The teacher said that computers were highly motivating to these children (and a very effective means of getting exceptional behaviour, at least on their designated computer day), but she also questioned the practicality of computer use: "...these kids still need the hands-on..." (p. 51), meaning access to concrete objects (math manipulatives) to achieve understanding of basic mathematics.

Healy speaks of a young teacher from a private school who was hired to teach computer technology to children. As a business major, she lacked some very basic understandings of child development and sound educational practice. Her premise for getting children as young as four years old into a computer lab was based on a technological worldview that regards schooling as a basis not only for employability but specifically for technological employability. Buying into this belief system places supreme value on computers (which carries with it a certain way of processing information, often reductionistic and linear) while at the same time unfairly devaluing other fields. Healy sums it up well: "Some of the best jobs in the corporate and professional worlds still go to literature or history majors. Why? Because they know how to think." (p. 106)

In his discussion of television, Sanders (1994) alerts us to the potential dangers in moving children away from direct experience with their immediate environment and towards electronic equipment. He quotes the all too familiar statistic that by the age of five, the average North American child has watched 6,000 hours of television programming. Violence and consumerism aside, another danger lurks: neuro-anatomists are beginning to believe that excessive media programming may actually interfere with normal development of the limbic system, which in part regulates the body's immune system. It is also the centre for emotional bonds and imaging. Hormonal secretions from the heart have been found to travel to the limbic system when children are involved in conjuring their own images during storytelling or reading. It is believed that this process assists in strengthening our natural defense against a variety of negative images (e.g., violence). Researchers cited in Sanders have not taken computer use into account, but their findings should not be discounted or considered exclusive to television when both forms of "entertainment" create the same sort of imaginative vacuum.

Related to this is the body of research that is increasingly coming to the forefront in education surrounding multiple intelligences. Not only has it been widely acknowledged that we should value intellectual strengths beyond just mathematics and science, but moral and emotional intelligence has also been found to be related to academic success. Sanders furthermore believes that television has an insidious side effect, replacing emotional and psychological needs with consumer values. Self confidence develops as a result of interaction with the outer world.
Intensive training from electronic media may interrupt the circuitry that would otherwise develop naturally. Television is always there as a prescription for boredom. Sanders mentions the work of child psychotherapist Adam Phillips (1993), who writes that being bored is necessary to a child's normal, healthy development. Babies need to be able to resolve their aloneness (even in the presence of their mothers), and develop the inner resources and patience to wait for something. Sanders believes that children do not learn to rely on their own inner voices but rather on the patterned responses that television (and by extension, computers) provide for them. Furthermore, as television and computers take children away from contact with their real friends and families, the discussions that would normally occur are thwarted, thereby eliminating the all important opportunity to try out ideas, work through problems, and disclose to others (and themselves) their feelings and thoughts. Such discussion also allows children to understand that other people may have differing opinions, an important step on the way to tolerance.

**Cognitive Development**

In recent years, constructivism has become almost synonymous with cognitive development. Computer enthusiasts have been quick to latch on to this theory of learning, making claims that computer software and/or the Internet form a perfect alliance with constructivist principles. In some cases, computers are even touted as being able to hasten cognitive development (e.g., Papert, 1980). Burstein and Kline (1996, in Healy) make an important distinction between being functionally literate and digitally literate. Knowing how to use a computer or perform searches on the Internet does not necessarily include the ability to think. They point out that the most important skill for students to develop is symbolic analysis, which is the ability to understand multiple symbol systems: languages, mathematics, and the arts.

It is widely accepted that cognitive development occurs along a continuum and that there are no hard and fast rules for the acquisition of developmental milestones. For instance, Healy tells us that between the ages of five and seven, the brain becomes able to reason more abstractly and acquire symbol systems such as words, mathematics, or computer applications. This is only the beginning, however, and we should not expect that children at this age will experience full maturation of higher level association areas. When they are learning to read, for example, often they are so busy focusing on decoding the words themselves that they miss the overall meaning of the passage. It is quite likely that using a computer could have this same effect.

To illustrate this, Healy mentions a teacher who decided to use a computer program to assist a student with a developmental lag in reading. The teacher had to revert to conventional reading methods because this student became so lost in clicking buttons and highlighting text that she was not actually reading. Healy spoke with a teacher of nine to eleven year olds from Colorado who believes that children will only benefit from computers as long as a teacher is nearby mentoring them to ensure that they utilize their formal reasoning and analytic skills, lest they fall prey to pure entertainment value.

A proven way for children to develop listening skills, story sequence, and comprehension is through having stories read to them.
At the same time that they are acquiring auditory skills, they are afforded the pleasure of creating their own images in their minds. Electronic books, as pointed out by Armstrong and Casement, may work in opposition to the acquisition of these skills. There is a distinct possibility that the text will "virtually" disappear in favour of the animation and "bells and whistles" that typify electronic books. This can create a television effect, imprinting an expectation on the minds of children that reading must entertain in the same way as television entertains. They cite Derrick de Kerckhove, director of the McLuhan Centre for Media Studies at the University of Toronto: before children experience movable text they must first experience fixed text, because it allows time for concentration, absorption, and reflection. Another vital aspect of reading or hearing stories that is lost with electronic books is the opportunity to stop and ask questions. Armstrong and Casement also discovered through discussions with many teachers that children become so focused on the visuals that they pay little attention to the text.

Many people believe that hypermedia, by its very nature, could not possibly fragment thought processes, but it requires a well disciplined mind to cope with all the distractions that accompany hypermedia (for example, pictures, movies, or links to unrelated topics). Because it requires less mental effort to look at pictures (a predominantly right-hemispheric activity) than to read (integration of both hemispheres), metacognition will become ever more important to our children.

Healy admits that her work does not disprove claims that computers can accelerate or enhance cognitive development, but by the same token, she could not find conclusive evidence to prove that there is a recognizable improvement in cognition because of computers. She came in contact with a professor from the Columbia University Teachers College who was involved in a project to restructure inner city schools. One of these schools, located in Harlem, was a computer mini-school-within-a-school. Funding for computers came from a multimillion dollar government grant, and indications of success were everywhere: students were motivated, focused, and technologically literate. One of the lead technology teachers attributes this success not to the technology itself but rather to good teaching and smaller class sizes. (Healy speculates that these gains would occur even in the absence of computers.) The professor leading the program, Dr. Robbie McClintock, believes that computers are more important as a cultural influence than as a learning tool and that we need to be careful not to meld intellectual pursuits with entertainment. He states: "We're beginning to pit knowledge institutions such as schools and libraries against broadcast and entertainment institutions..." (p. 103).

Armstrong and Casement highlight research by Schmitt (1996), who discovered no superior achievement amongst computer users as compared with the rest of the student population. He did find, however, that achievement amongst poorer school districts did rise. He speculates that this could be due to factors beyond the technology itself, i.e., that the effect is more psychological than academic: students may get a morale boost from having new equipment and a corollary sense of greater control over their lives.
Armstrong and Casement, citing research from a school in Minnesota that became part of a two-year study to determine the benefits of computers, report that the fourth to sixth graders who participated had slightly lower scores in math, reading, and language arts than students who received a more "traditional" approach (i.e., print resources). They mention another study that found similar results from another Midwest school.

The Information Age has apparently brought with it a tendency to equate intelligence with mathematical and scientific ability. Although unrelated, computer expertise has been elevated to the same unjustified status. One of the results of this kind of thinking has been a reduction in arts programs. A good example of this is cited in Armstrong and Casement in their discussion about funding cutbacks in Ontario. The outcry against this reduction has been relatively minimal because parents have been conditioned into believing that their children will somehow fail to succeed or miss out on something vital (i.e., computers). New curriculums (developed by people somewhat distanced from the front lines of the classroom) seem to closely follow this line of thinking: children who have good logical-mathematical abilities achieve academic success, even though this is an unreliable indicator of an individual's potential. Armstrong and Casement quote Douglas Sloan, professor of history and education at Teachers College, Columbia University: "...cognition involves a rationality much deeper and more capacious than technical reason..." (p. 169). They provide further support for this through the work of Gardner (1993) and his theory about multiple intelligences. He speaks of the limited scope associated with the information-processing model, which pays very little (if any) attention to the importance of the interplay amongst all senses and focuses primarily on the intellect as if it were a separate entity.

**Software Challenges**

Appropriate software remains one of the greatest obstacles to effective computer usage in schools, even amongst technology's greatest advocates. The issue of appropriate software also arises within the works of several other researchers (Roberts and Samuels, 1993; Nicol and Butler, 1996; Kang and Dennis, 1995; Wiebe and Martin, 1994; Snider, 1992). In each of these cases, appropriate software is cited as a major barrier to the effective use of computer technology. As pointed out by Forcheri and Molfino (1995), perhaps this would be a problem of lesser magnitude if software were designed by education specialists rather than computer scientists (who typically have very little, if any, knowledge of child development). Educational software designer Tom Snyder is of the opinion that "The most interactive experience you ever had with your computer is less interactive than the most meaningless experience you ever had with your cat." (In Healy, p. 39). His primary objection lies in the lack of educational expertise behind most software programs.

Even software that has received a good deal of positive attention has met with some negative feedback. Oppenheimer (1997) tells of a study concerning a popular reading program (Reader Rabbit). It was found that students using this program experienced a fifty per cent drop in creativity. After seven months with Reader Rabbit, students were no longer able to answer open ended questions and they demonstrated a reduced ability to brainstorm with fluency and originality.
Although there are many studies that claim superior results from CAI (computer assisted instruction), there are also many that are ambiguous: computer technology either fails to measure up to traditional teaching and learning (Schumacker, Young, and Bembry, 1995; Roberts and Samuels, 1993) or produces no substantial changes in achievement (Sieglinde, 1993; Fletcher and Gravatt, 1995; Langone, Willis, Maione, Clees, and Koorland, 1994; Rice, 1994).

Hativa (1991) studied the cognitive, sociological, and affective effects of integrated learning systems. At the time of the study, no ILS could offer a good curriculum, a good management system, or easy integration with the school curriculum. One of the problems lies in the way ILS are administered. Students work alone with little or no opportunity for interaction with others. What is known about learning is that children are more likely to retain and understand new information if they have had an opportunity to discuss or explain it to others (teachers and peers included). From Healy, we learn that in order for ILS to make any difference at all, students must spend at least thirty minutes a day working on the computer. If ILS were used in four subject areas, students could conceivably spend almost their entire day in front of a computer.

Armstrong and Casement mention that hundreds of studies have shown that LOGO, once touted as holding great promise for cognitive development, does not help children think in a logical or sequential way; nor does it assist them in developing problem solving skills. They present an excerpt of the commands that children had to follow when using LOGO to produce a square on the screen: "FORWARD 100, RIGHT 90, FORWARD 100, RIGHT 90, FORWARD 100, RIGHT 90, FORWARD 100" (p. 48) (a sketch of the equivalent in a modern descendant of LOGO follows at the end of this passage). A study of grade two students in Ontario revealed that working with LOGO required constant teacher supervision because the children were simply not developmentally ready to understand or undertake this kind of work independently. They quickly became confused, frustrated, and bored if left on their own to wend their way through the numerous commands. Rather than assisting in the development of cognitive skills, LOGO seems to lead to the understanding of a programming language.

Because software presents only images, not real experiences, children will not perceive the world in the same way. Their development is contingent on real experiences, perhaps supplemented by computerized images, not the other way around.

Healy (1991) speaks of the importance of children developing inner speech, that "voice" inside that helps us understand, analyze, and evaluate the world around us. In order for that to happen, children need exposure to intellectual activities that allow them time to mull things over. She believes that lack of such opportunities leads to difficulties in problem solving, abstract reasoning, and writing coherently. Computer technology may run counter to this necessity. This is reflected in Armstrong and Casement, who quote from an Ontario Ministry of Education report (1991) regarding literacy: "Without a mastery of speech, we would lack the internal voice that automatically accompanies us as we read and that we instinctively use to clarify meaning and interpret nuances of tone. Similarly, writing involves an internal dialogue that helps us to sort out our ideas as we set them down on the page." (p. 85).
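For readers unfamiliar with the commands quoted above, the following is a minimal, purely illustrative sketch of the same square expressed in Python's standard turtle module, a direct descendant of LOGO's turtle graphics; it is not taken from Armstrong and Casement. The loop is the conventional shorthand for the four repeated FORWARD/RIGHT pairs that the children had to type out by hand.

```python
# Illustrative only: the square from the excerpt above, expressed in
# Python's standard turtle module (a descendant of LOGO's turtle graphics).
import turtle

t = turtle.Turtle()
for _ in range(4):   # four sides; the children typed each pair by hand
    t.forward(100)   # LOGO: FORWARD 100
    t.right(90)      # LOGO: RIGHT 90

turtle.done()        # keep the drawing window open
```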
**The Case for Word Processors**

In discussing the benefits of a word processor to the writing process, Armstrong and Casement pose the question, "Does the use of word processing help young students in the development of their writing skills and strategies, as many educators seem to believe?" (p. 105). What they found in their exploration of this question was that, overall, word processing itself could not solely account for an improvement in writing (other than presentation) and that individual preferences must be taken into account (i.e., some students find computers helpful while others find them frustrating because they become bogged down in keyboarding mastery and software commands, thus detracting from their actual writing). A grade five student from a school in the northeastern United States reported that the only thing he liked about the computer was the games. His personal preference was to perform written assignments by hand even though his teacher insisted that he use a computer (p. 109).

These difficulties are not confined to young students. A study involving grade eight students revealed an average typing speed of eight words per minute. It was found in another study that grade ten students tended to produce more writing by hand than by word processor. Armstrong and Casement also include comments by a grade seven teacher from Calgary who now requests that his students write their first draft with pencil and paper. His observation was that students became so distracted by formatting features, headlines, and fonts that they produced very little in the way of actual writing and the quality of their work suffered immensely. Furthermore, he has noticed that because their finished product looks professional, they tend to think that the content is superior.

Commenting on the tedium of recopying written work by hand, Armstrong and Casement present the viewpoint of a researcher who observed grade seven through nine students. He assures us that the process of handwriting draft to final copy is not a waste of time but rather a time for students to focus on each word they have written, forcing them to reflect even more carefully on their work. Another advantage of paper and pencil is that writers can review several pages at the same time, and even the entire project, during the drafting and editing phases.

**Teacher Resistance**

There is a good deal of pressure for teachers to find ways to incorporate computer technology into their classrooms. However, unless the software is relevant, there will inevitably be some resistance. In a study conducted to determine the role of computers in secondary social studies classrooms, Ross (1991) discovered that this form of technology was rarely used. Drawing from the work of Ehman and Glenn (1987), he cites several reasons for the lack of computer usage: computers are not widely available for teacher use, there is a lack of high quality software, research is ambivalent concerning computer assisted learning, professional development is lacking, and there is no software integrated with the curriculum. Wild's (1996) premise that there is a general reluctance on the part of teachers to utilize computer technology gives his work a somewhat different approach. In articulating questions such as, "What difference has computer technology made to the quality of everyday classrooms? What are the benefits to teaching and to learning...?" (p.
135), he exposes reasons for technology refusal. In summary: promises have been made but not necessarily kept over the past twenty years.

**Social Implications**

Along with financial and health costs, excessive computer use can also have serious social costs. There are many people who promote the computer as a highly effective way to socialize with others. If one has relatives on the other side of the world, email does prove to be a much more cost effective, expedient method of communicating. It is also a great tool for sending jokes to friends. However, unquestioning acceptance of computers as a primary means to connect with friends (or to meet new friends) comes with a few serious problems.

**Computer Addiction**

Even people who are quite "normal" can be susceptible to abnormal relationships with machines. Computer addiction is now recognized as a bona fide psychological affliction. Healy, in discussion with University of Pittsburgh instructor Kimberly Young, tells us that computers offer the same kinds of escape from reality as drugs or alcohol, and that, by stimulating deep pleasure centres, they can alter normal brain function in the same way addictive substances do.

Many people have a favourite television show that they go out of their way to adjust their lives to accommodate, and the characters may become "real" as people follow their television antics (soap operas), but the relationship (if you can call it that) is one-sided. We may know the characters but they do not know us. At some level, we all recognize this. On-line relationships, on the other hand, differ quite significantly. Communication is two-way, which is not bad in itself, but there is a deep sense of unreality about it. Hiding behind a screen permits facades of a different sort: "Assuming new identities, individuals may begin to believe they are loved and cared for in their new 'selves'." (p. 197-198). As pointed out by Healy, the APA Monitor (Sept. 1995) reminds us that human relationships are vital to us as human beings, and people without close personal relationships are at serious risk for personal and social problems.

Miall (1995) observes that most communication on the Internet is "mindless babble..." (p. 10). The Internet, however, is still increasing its membership, meaning that more and more people are indeed going one on one with their computers, while purporting to be participating in a social activity. Stoll's (1995) response to Internet relationships is that we lose a vital connection to our own communities and to the world around us. He describes computers as luring us into the "...warm comfort of their false reality." (p. 136).

**Social Distancing**

One of Healy's school visits brought her to a school that had launched a seniors program. It was called the "Elders-Kids Connection" and involved children teaching computer skills to local seniors. The benefits of such relationships are immense, both for the children and for the seniors. Seniors, typically cloistered away from society in nursing homes, seniors' residences, or their own homes, are able to have meaningful contact with young people; they feel worthwhile; and children receive one on one attention (and patience) from a group of people who possess invaluable life experience. In one case, students discover that their guest is faster than they are at solving mental math problems.
Students who are typically withdrawn or unmanageable are transformed, while seniors who rarely smile in their retirement homes are suddenly alive and happy. Healy pointed out to the district technology coordinator that computers would not necessarily have to be the catalyst for such relationships. The response she received was, "...but the elders love it...and the kids are so proud of being teachers. Now, if they get the home up on email, we're going to start writing each other." (p. 170).

A university professor tells Healy with great pride of his computer whiz kid. At age six, he has memorized the names of rocks and is working on geologic periods. Having used the computer since the age of two, he is more interested in using it than playing with toys or other children (he plays with his older brother "some"). His parents state that they are trying to limit his computer time to three hours a day, but they don't want to discourage him from learning. In following this little boy to school the next day, Healy learns from his teacher that he is having severe social problems, he displays no interest in reading, and he copes with emotional stress as would a much younger child. He was observed by a school psychologist, who placed his emotional development at about age three. The psychologist reports that he doesn't fit the profile of a child with social-emotional learning disability (SELD) but agrees that too much computer time may be interfering with his normal development. Healy's discussions with two pediatric neurologists further confirm that computers are making children more vulnerable to social isolation and even autistic-like tendencies. They are seeing more and more children whose language development, social skills, and imagination are severely delayed and perhaps permanently damaged.

According to Daniel Goleman (in Armstrong and Casement), we cannot treat emotional intelligence as something less than academic achievement because of the long term impact of a healthy self concept. Early experiences, either positive or negative, play themselves out throughout the rest of our lives, and extraneous conditions are as important to development as genetic encoding. The development of this form of intelligence comes not from computer programs, television, or video games. It comes from direct human contact. Based on Goleman's findings, Healy writes that between the mid 1970s and late 1980s, children's emotional and social well-being diminished. Goleman has furthermore found that IQ contributes about twenty per cent to personal success. The other eighty per cent is due to social-emotional intelligence. Over the same period, attention deficit disorder, low motivation, and poor work habits have risen. These characteristics are based in the emotional centre of the brain and are dependent on contact with other people.

One of the most frequently diagnosed disorders amongst children and adolescents is conduct disorder (violence against other people). Sanders (1994) speaks of young murderers who appear to be completely lacking in feeling. Because such violence has been constantly on the rise over the past twenty years, one cannot help but wonder what role early experiences (both personal and social) have played in this phenomenon. It is known that body, mind, and emotions are integrally linked. When children are playing computer games or watching television programs, they tend to become emotionally involved, but this involvement is of a different sort from that experienced in real situations.
When the human brain is in a state of emotional arousal, certain neurochemicals and hormones are produced that prepare our physical bodies for self protection (the fight or flight response). In real life, we can respond to these frightening experiences through a physical response. When the source is a violent television program, video game, or computer game, these hormones and chemicals build up as toxins. Repeated experiences, warns Healy, can create long term negative effects, including blood pressure problems and a brain with a need for "extreme" (i.e., high excitement) experiences. This has some rather frightening ramifications in light of children's current electronic habits.

**Programmed Therapy**

Taking us even further away from healthy human relationships, computers can now replace counselors and become our personal confidantes. As reported in the *Calgary Herald* (April 6, 1997), a team of psychologists from the University of Glasgow have developed a three step program that they claim can help people diagnose their own mental health problems and subsequently come up with their own treatment. Given the nature of mental illnesses and the complexities in dealing with the many convolutions in the thought processes of the human mind, a computer program may seem a preposterous tool because of the obvious necessity of a trained, objective outsider to consider the more subtle messages that we tend to convey unawares (body language, innuendo, voice intonation, etc.). This impersonal aspect seems not to bother Jim White, leader of the team of researchers. He accepts and capitalizes on this as part of the evolving human condition: people are increasingly turning to self help. We must admit that Jim White is absolutely correct or we would not be seeing all the self help books that have appeared on bookstore shelves over the past thirty years. Reading books, it can be argued, is just as impersonal as using a computer program, but it does not imply the same sort of relationship as something that seemingly responds to one's feelings and needs. The computer may take this one step further and give the impression of the stereotypical caring robot that is popularized in science fiction movies. A machine that assumes human characteristics may not be the most healthy way of dealing with psychological problems.

If people are reluctant to discuss their problems with real, live counselors, another suggestion might be to turn to pets as therapists. At least a dog is a living thing, and will respond to human conversation with apparent interest or perhaps the occasional quizzical look (even though a canine glance implying, "Are you nuts?" may serve to worsen one's fragile psychological state).

Judging from his appearance as a guest columnist for the *Calgary Herald*, Sheldon Walker's career as a family therapist has evidently not yet been supplanted by computer software (or even pets). He discusses the changing roles of family members and reminds us of our humanness and our need to connect with one another. Conscious awareness of this fact is extremely important in light of our busy society (which is reflected in our busy personal lives). Quality time spent with family is more important now than ever before. Along with all of the outside activities and distractions, we have the "inside" distractions: electronic media. He points out that computers are primarily a solitary activity. "There is one keyboard, one mouse, and often only one chair..." (p. A18).
While he admits that he is personally in favour of technology, he reminds us that electronics may replace precious family time together. He emphasizes that parents should convey the message to their children that the primary purpose of computers is to provide educational support (January 11, 1996).

**Cultural Implications**

When Birkerts (1994) wrote a review of our changing culture, he discussed the impact of moving away from literature and increasingly towards information processing technologies. Included in his discussion was a commentary about the effect on the self: in relying more and more upon television, computers, and electronic communications, we may be sacrificing our aptitudes for reading and meditative introspection. This viewpoint broadens the concept of social distance to include distance from the inner self.

We are sending the message to children that technology holds the key to a variety of doors, all of which are related in one way or another (obscurely in some cases and quite obviously in others) to success in the adult world. Healy's summation of this is as follows: "We seem to care more about how fast our children can learn than how deeply they can feel. We are increasingly dependent on abstract expert systems rather than on other human beings, and it is tempting to abdicate to technology the job of society's elders to initiate the next generation." (p. 199). Not yet having developed the quality of discernment, children will embrace with alacrity whatever we, as parents and teachers, promote to them as worthwhile. To a great degree, children rely on the judgment of significant adults to learn what is or is not worthwhile. Armstrong and Casement's summation goes one step further: "A generation of children have become the unwitting participants in what can only be described as a huge social experiment." (p. 2). Sanders' (1994) commentary on this topic is, "When a teacher asks a child to sit in front of a computer in grade school, that teacher has invoked the authority of a battery of screens -- TV, movie, and video. Unwittingly, the teacher has plugged the child solidly into the anti-literate world of media." (p. 129).

A great contradiction has begun to emerge. Employers generally want employees who are motivated, creative, cooperative, and communicative. The more emphasis that is placed on pure technological skills, however, the more we detract from the "human" qualities that are so necessary in an organization. Healy came across a recent Swiss survey that rated desirable employee characteristics. Academic achievement in school was near the bottom of the list, and computer expertise was not even mentioned. What was valued most were the "humanistic" aspects. Healy repeats Thoreau's poignant statement that if we aren't careful, we could all become "tools of our tools." (Healy, p. 30).

**Preparation for the "Real World"**

Healy describes the work of neuropsychologist Sid Segalowitz (1997), who regards computer applications as worthwhile for students but at the same time cautions against overuse. Frontal lobe maturation peaks during adolescence, and there is a risk of overdeveloping cognitive function at the expense of social behaviour. A clear, steady focus must remain on enhancing emotional development and healthy human relationships. At the same time, Healy reminds us that adolescents are in the stages of cognitive development that allow for more in depth discussions.
She suggests that the inclusion of broader issues into class (for example, how different media affect thought and societal development; the impact of technology on politics and vice versa; etc.) would make students feel more directly involved in the world around them and less bored, passive, and impatient. Because they are capable of reflecting more deeply on their own thinking, they need to develop the ability to ask good questions; now more than ever, young people need to become philosophers as well as scientists and technicians.

If we are to accept that schooling should be in some way preparing young people for adult life, then what we should see in schools is the reverse of what often exists. Reinforcing learning that requires correct answers can hardly be adequate training for the society in which we live. If we are to survive as a species on this endangered planet, what will be necessary is originality and creativity. When this is placed within the context of arts programs, we can readily see the importance of music, drama, and art. After an elementary school production of "The Wizard of Oz", Armstrong and Casement learned from the principal that the cast included a student with a serious learning disability, a painfully shy girl, and a boy with behavioural problems. The principal's comment was, "The real magic of the arts is that they give kids the internal discipline they need to manage their lives." (p. 177).

**Health Concerns**

Health concerns associated with computer use have long been recognized within the business world, but this awareness has not yet filtered down into classrooms in a significant way. Amongst frequent computer users, the most common complaints are back problems; injury to muscles, ligaments, joints, nerves, and tendons (also known as carpal tunnel syndrome or repetitive strain injury); and eyestrain. Other concerns that have arisen as a result of exposure to computers include computer-related radiation, toxic chemical emissions, and seizures. Although not directly linked to computer use, the rise in obesity across North America has been connected to our tendency to replace physical exercise with electronic gadgets (which also include video games and television).

**Repetitive Strain Injury and Postural Complaints**

It may be easy to dismiss some of these concerns as unique to the adult world, where daily exposure for many hours at a time has led to some of these ailments, but according to Occupational Health and Safety Consultant Richard Pilkington (in Armstrong and Casement), younger and younger children (most of whom are boys, some as young as seven years old) are appearing at his practice for chiropractic treatment. The fact that they are children may lead some people to believe that these problems will be easy to "fix"; however, it is quite the opposite. Once habits are formed, they are very difficult to undo. Pilkington reports that childhood problems are linked directly to poor posture at the computer. Another type of injury -- this one unique to children -- is called "Sega thumb". (Sega is a popular video game system.) Pilkington explains that Sega thumb comes from overexerting the thumb, which is a weak joint to begin with, when using a joystick or playing high speed video games. Along with repetitive strain injury (RSI) in children, headaches are quite common. These can be caused not only by eyestrain, but by sitting at a desk that is too high or in a chair that is too low, which places strain on the neck and shoulders.
He recommends aerobic activity, stretching exercises, and general rest breaks. Anyone using a computer from two to four hours per day is at risk. From Armstrong and Casement, we learn that, ironically, a newspaper reporter who was investigating the effects of RSI had muscle and nerve damage to his hands so severe that he could not actually type the article himself. This also occurred to another journalist (at the age of thirty-one). Both men were given voice activated computers, and both are now suffering from repetitive strain injury to their vocal cords.

**Visual Problems**

In her discussions with eye experts, Healy reports that computer use is already causing visual difficulties in children. The "flicker" created by the continuous refreshing of the phosphor coating on video display terminals is highly stressful to both the visual system and the brain. Coupled with that is the tendency to stare without blinking for long periods of time when using computers. This creates visual irritation, which is only exacerbated by improper lighting (too little, too much, or the wrong kind). The quality of lighting can furthermore affect our physical well being, according to the founder of the Environmental Health and Light Research Institute in Florida.

Armstrong and Casement find these concerns supported by the work of a number of researchers. When light from fluorescent lights or even natural light hits the computer screen, eyestrain can occur as a result of having to open our eyes wider (due to the larger size of a computer screen), thereby exposing a larger portion of the eye's surface. Along with flicker, the typical visual complaints include resolution and colour problems (e.g., lack of sharpness in font type and size, difficulty distinguishing colour and character), jitter (individual characters on computer screens can oscillate, leading to poor legibility and eyestrain), and glare (which can come not only from artificial or natural light, but also from such things as white clothing worn by the person at the keyboard and white paper situated near the computer screen). And if anyone should doubt that VDT-related eyestrain actually exists, there is a VDT eye clinic at the University of California (Berkeley). The chief of this clinic, Dr. James Sheedy, reports that 80 percent of patients are there for eyestrain and 50 percent for blurred vision and headaches (Armstrong and Casement).

Armstrong and Casement also cite the work of Warren Hathaway (1994), which is of particular interest at the school level. In his comparison of students exposed to full spectrum lighting (that which most closely approaches natural sunlight) and those exposed to high energy sodium-vapour lights (as are typically found in schools), he discovered something beyond the old familiar eyestrain: full spectrum lighting was linked to such things as higher scholastic achievement, quicker physical growth, better health and attendance, and even fewer cavities when compared with the unnatural light provided by the more common lighting systems.

Palmer (1993) discusses the negative impact of video display terminals on the human visual system. Along with blurred vision, irritation, and pain, deterioration of retinal function has been linked to prolonged exposure to computers.
Her research has centred on adult subjects, but she points out that computer technologies could have a similar negative impact on children, and we should be especially cautious with this group because their visual systems are still developing. Oppenheimer (1997) writes about a particular school in Napa, California that embraced technology so fully that a computer was placed on every desk. A few months later, students complained of headaches, sore eyes, and wrist pain.

**Radiation and Chemical Emissions**

Perhaps the most disturbing risk (and the one most likely to garner the greatest attention were it widely known) is computer-related radiation, even though "...at one time [it] was believed that these low frequency radiations were incapable of causing harm to human beings...it has now been shown that people are far more sensitive to any radiation than previously believed, and that causal relationships are beginning to emerge." (Mander, 1992, p. 56). From Healy, we learn that children are five to ten times more vulnerable to radiation than adults. Radiation-related illnesses are most likely to become manifest in bones, the central nervous system, and the thyroid gland. It is from the backs and sides of computers that most of the dangerous emissions are produced. It is suggested that children should not be within four feet of another computer. (This may present some rather serious complications in schools where students use computers in lab settings, or even in classrooms that house more than one computer in close proximity.)

The U.S. EPA has identified a number of chemicals given off by computers, and these emissions are particularly high when equipment is new. An entire lab with new computers, or even a few amongst the old, would pose an even greater risk to children. The typically poor ventilation found in schools compounds this problem even further. In interviewing Edward Lowan, an Ontario environmental consultant, Armstrong and Casement learned that traces of three hundred different chemicals are present in the vapours given off by new computers.

Electromagnetic radiation (from the backs and sides of computers) is a concern not only because of possible damage to tissue; it has also been found that EMF (electromagnetic fields) may interrupt the production of melatonin, which apparently acts as a cancer inhibitor by neutralizing free radicals (Reiter, 1994, in Armstrong and Casement). This is not even to mention that the production of melatonin is crucial to our sleep patterns. Seasonal Affective Disorder has been linked to the shortened daylight hours of fall and winter in North America, along with chronic depression when sleep patterns are severely disturbed. Toxic emissions may also affect the skin in adverse ways: rashes, dryness, and itching have been reported by computer users. In addition, ear, nose, and throat irritation have been linked to computer usage.

**Seizures**

In 1997, it was reported that a Japanese television cartoon caused seizures in children when red lights flashed for five seconds. Armstrong and Casement imply that, although seizures may be rare, they have evidently occurred with enough frequency to warrant a medical term: video-game-related seizures (called VGRS by neurologists).

**Obesity**

Also related to the deterioration of children's health is the use of computers or video games in place of good old-fashioned outdoor play.
Some rather startling changes have occurred in childhood obesity and general physical fitness over the past thirty years. This trend is closely tied to more television, video games, and junk food. (Junk food for the mind; junk food for the body.) Not only does lack of physical play affect physical development, it also has a negative impact on mental functioning. Physical exercise increases blood flow to the brain, which translates into increased oxygen to cells, which in turn improves concentration and retention. There is evidence that children who have difficulty at a sensory level coincidentally experience difficulty in reading comprehension and mathematical problem solving (Healy). Cholesterol levels may also rise due to lack of activity (Gold, 1998, in Armstrong and Casement). The temptation to downplay the physical side effects of our electronic world becomes quite ridiculous when placed within the above context.

**Financial Costs**

The Calgary Board of Education has one thing in common with most other school boards: funding constraints. Declining enrollments have forced school closures because it is not cost effective to maintain neighbourhood schools with less than a certain number of students; staff reductions have occurred in a variety of areas (from consultants to caretakers) due to budget constraints; basic classroom essentials (i.e., books) are now being purchased (in part) through parent fundraisers; etc. However, despite these difficulties (which are really of rather large proportion), there seems to be plenty of money to inject into computer technology, at least initially. When the government pledged fifty million dollars some years ago, it sounded like a lot of money. This process, however, has been rife with problems. The Calgary Board of Education is not the only school board that was forced to set some priorities in terms of where its available cash was to be spent.

Healy, in her numerous visits to classrooms in the United States, mentions a commonplace occurrence: upon commenting on a well equipped computer lab in a particular school, she discovered that it had at one time been the music room. As one of the "extras", music had been eliminated, and the space quickly became designated as a computer lab (which is a commentary of another sort). Computers in this particular school had also been placed in regular classrooms, but most of them were not being used by students because of a lack of money to train teachers. Healy reports that she heard this same lament dozens of times in many of the schools she visited. Teachers are often expected to spend their own time and money on learning how to use, teach, and integrate computers into existing curriculums. It is usually emphasized that computers are not meant to be an add-on in the classroom. That they are an add-on to teacher "spare time" seems irrelevant.

The justification for such huge expenditures (often at the expense of other programs, equipment, and print resources) is that children must become familiarized with technology because it pervades their daily existence, and their success in life may be threatened if they are not exposed to it. In cases where governments, computer companies, or corporations donate equipment or funds to schools, the initial costs may be avoided, but the pace and cost of this rapidly changing technology soon puts it beyond the financial reach of most school boards. A prime example of this is given by Healy in her mention of the Open Charter School in Los Angeles.
This school was chosen by Apple Computers to run a five-year project for research and development into computer applications for schools. Money was made available for virtually everything: intensive teacher training, extra staff, 190 computers (one computer for every two students), ongoing technical support, and upgrades throughout the life of the project. At the end of the project, Apple pulled out, and the school was left to finance any upgrades and maintenance costs. Needless to say, the school couldn't afford to continue to use the technology, because its regular school budget (including any parent fundraising projects) simply could not sustain the high costs of remaining current with this form of technology.

In Alberta, funds were made available for basic wiring (a somewhat complicated process in older schools that had asbestos insulation), but this does not take into account the many costly upgrades that have been required as new innovations appear (e.g., multimedia and updated "educational" programs). Of particular expense is the installation and licensing of multi-use software. A single program can run upwards of $300, an expense that very quickly adds up when one takes into account the needs and ability levels of different age groups. This problem is certainly not unique to the Calgary Board of Education, or even to boards throughout Alberta. In addition, there are a number of other costs associated with computer installation and upkeep, including technical support (i.e., onsite technology teachers who are readily available), theft, insurance, and teacher education. In order to keep up with technology costs, money must be taken from the budget for textbooks or library materials. The Calgary Board of Education is a case in point: library budgets have been cut while technology budgets have increased. These cuts have affected not only print resources but also the number of teacher-librarians in the system.

Estimates of the real costs of computers in schools are mind boggling, to say the least. These costs range from $375,000 per state school in California (for a low end system) to $27 million for eleven schools in one school district (obviously a very high end system). Neither of these estimates takes into consideration teacher training (in Armstrong and Casement). Others (Jensen, 1993; Roszak, 1994; Miall, 1995; Stoll, 1995; Burbules and Callister, 1996; to name a few) also remind us of the great initial and ongoing expense of maintaining computer technologies. In particular, Stoll drives home a fundamental point when discussing computers in the classroom. We are faced with many difficulties within our education system, including budget constraints, and we are often in a scramble to obtain essential classroom materials (not to mention teacher salaries...). Wiring classrooms is an additional cost that technology brings to schools, and the wiring, too, runs the risk of obsolescence. (This does not even take into account the heavy financial burden involved in purchasing software and connecting to the Internet.) Healy suggests -- quite rightly -- that money should be set aside for brain research in order that we may optimize all the teaching tools we have at our disposal.

By and large, teachers are quite receptive to introducing computer technology into their programs, but most have not been adequately trained. Ross (1991) found that the primary concern teachers had about using computer technology was lack of knowledge or training.
This remained a major stumbling block over the years (e.g., Callister and Dunne, 1992; Van Dusen, Lani, and Worthen, 1995; Eraut, 1995; Mahmood, Mo, and Hirt, 1995; Pence, 1995-1996; Nicol and Butler, 1996). And so it remains today. In most cases, teacher training (professional development) within the realm of computers falls largely into the hands of teachers themselves. For some, it is worth the extra time and money; for others, it is not.

**The Information Age in a Global Context**

As the pressure is on for businesses to compete in the global marketplace, fears of lagging behind gradually filter down to schools and the education that children receive. With few questions asked, it has apparently been accepted that knowledge is one of our greatest buffers against such dreaded results. In other words, knowledge has become one of our greatest resources. Frequent international comparisons of academic achievement may appear to be innocuous, but they carry with them another message: the race is on, not only to compete effectively in the international marketplace, but also to achieve the highest grade point averages in the world.

**In-School Concerns**

That the drive is on to be number one is reflected in the way our Calgary public school report cards are structured. Initially, on the scale of one to five, one was low and five was high. As an administrator divulged to me, due to a quiet but effective backlash from parents, the order was reversed because of the competitive connotation associated with the number "one". As school boards, curriculum developers, and educators respond to demands for greater expertise (particularly within mathematics and science), we may be participating in the creation of a super race (where have we heard that before?), while at the same time contributing to the erosion of some vital human qualities. Admittedly, this is occurring independent of computer technology; however, the appearance of the Internet in schools has brought with it the impression that ready access to the latest information is immediately available and desirable.

Armstrong and Casement quote an elementary school principal as saying, "Knowledge is doubling every fifteen months, and we want our students to be exposed to the most up-to-date information. Classrooms are information-poor; the computer makes them information-rich." (p. 122). His choice of words (information-rich/information-poor) seems to further underscore the monetary value (which, in this society, translates into relative success or failure) that is linked to computer technology. The question of whether it is valid to assume that the computer holds the promise of wealth in any form does not even arise. Armstrong and Casement point out that it is not quite so straightforward. Not only do children need to possess a certain amount of Internet savvy (i.e., knowledge about how to manipulate their way through the plethora of "information" available on the wild, wild web), but they need to possess a certain level of maturity and perhaps sophistication when it comes to possible diversions (i.e., the self discipline to bypass all the temptations along the way to become sidetracked by games, advertising, or even educational content removed from their original search). This can be summarized in one word: confusion (particularly for younger children).
Before books are completely discounted as excellent sources of information, we need to remember that even if the text is too difficult to comprehend, children can always use picture cues, or easily pick up another book from the library stacks to catch a quick glimpse of what is inside and whether the information is relevant or not. Computers are not always user-friendly. That the aforementioned principal's commentary is a gross oversimplification becomes even more blatant when you actually place a child in front of a computer with the expectation that an independent worker will suddenly emerge. Rather, as Armstrong and Casement note, in place of an independent learner, a child in need of a constant tutor materializes. This is not only true of young learners. Researching any topic at any age requires a great deal of time and patience.

At a time when we need school librarians more than ever, their time is either being cut back or eliminated completely. We have five hundred fewer teacher librarians now in Alberta than we did in 1983 (Armstrong and Casement, p. 131). In some cases, librarians become school technology teachers whether they want to or not. Similarly, in some schools, the arts (music, physical education, art, etc.) are being displaced by technology.

**Internet Concerns**

Armstrong and Casement, with the help of a librarian, undertook a search on the Long March of the Chinese Communists in 1934-35. One hour later they had found nothing relevant on the Internet. They found that they were not alone. In 1996, a British journalist wrote an article about the effectiveness of the Internet. His topic was the Corn Laws of 1846. When he entered his title, he discovered that the first "hit" was called "Breast implant firm halts compensation claim" (in reference to lawsuits against Dow Corning about breast implants). After a number of other dead ends, he gave up and returned the next day, more successfully. Armstrong and Casement point out two distinct problems. One is that search topics must be very specific; the other is that different search engines may organize databases in completely different ways. When you place children in this situation, it is quite clear that they require a good deal of training and constant supervision. This, of course, assumes that the teacher has had some very specific training and has spent a good deal of time getting to know different search engines and bookmarking any relevant sites. In one case, a Toronto teacher and her grade five class sat staring at a Nazi bulletin board. On another search, this one for chickadees, one of her students found herself connected to a sex chat room.

Another difficulty in trusting the information highway as the road to enlightenment lies in the lies that lie ahead. Information posted on the Internet is not monitored or refereed (not that even experts in any particular field can always be trusted to be unbiased), so information presented as fact may indeed be pure fabrication. It is difficult enough for most adults to sort out fact from fiction (particularly when we are subliminally bombarded by flashy advertising), but for children it is even harder. These same thoughts are echoed in the work of Miall (1995). He makes us aware that not only should we be wary of what we read on the Internet, but of what we will have to go through to find it. "Information searches on the information highway are a joke. They are slow, use incomprehensible search techniques, and are ludicrously incomplete. Most communication is mindless babble..." (p. 10).
Along with a great deal of time and energy, roaming around on the information highway also requires spending a great deal of money. He makes us furthermore aware that these costs are often hidden as overhead in universities and government departments (meaning that everyone, whether they have direct personal access or not, winds up paying for it).

Robertson (1998), echoing the words of Neil Postman (1992), emphasizes our unquestioning acceptance of technology to the point of worship. She includes in her work an excerpt from a 1997 Council of Ministers of Education conference: "Information technologies help build students' self esteem, empowering and enabling them, as well as building their confidence and feelings of success...[and furthermore that students]...will assume more responsibility for their learning, using inquiry, collaborative, technological and problem solving skills, all of which are required in the global marketplace." (p. 92) This belief that technology will solve most of our problems and give us high status in the global marketplace implies that unemployment is the fault of individuals themselves -- if they cannot embrace the new technologies, they cannot survive. Robertson discusses an article in which the father of a toddler is looking far into his child's future success as he proudly states that computers "...really prepare them." (p. 99).

Promises that we can reach out to others globally (via the Internet) with the purpose of developing a better understanding of other cultures may well be nothing more than another false reality created by technological promises. The isolated and very contrived circumstances under which many of these communications occur are unlikely to produce the kinds of changes that we really require on this planet -- i.e., those of an ecological nature rather than a mechanistic one.

**Final Comment**

Regardless of what our orientation is toward computer use in schools (i.e., positive or negative), this technology is part of the architecture of the classroom, and it is being incorporated into the mainstream curriculum. What is required now is a look at the impact of computer use in the classroom through the eyes of teachers engaged in the process. This is addressed in the next chapter within the empirical portion of this study.

The relatively recent intervention (some would say intrusion) of the Internet in schools has opened numerous other options and possibilities for the use of computers in education. Although this may still be in its beginning stages, a growing number of users have discovered many practical, if unconventional, applications of the Internet in everyday classroom learning situations. These discoveries come not only from teachers but from students as well, which serves to provide students with a greater sense of empowerment and some control (albeit limited) over their own programs. Along with the potential to enhance, expand, or extend any kind of research topic, the Internet presents other possibilities for developing deeper understandings of global connections. Such usage broadens the potential of the Internet as a tool for heightening awareness of culture, society, differences, similarities, social justice issues, democracy, critical inquiry, etc. -- in other words, a transformative dimension. The possibility of adding a transformative dimension to classrooms through the use of Global Learning Networks was explored in a Calgary public school situated in an upper-middle class area.
Conversations also included discussions about current computer use.

**Background**

Because Global Learning Networks are beyond the "regular" curriculum, there was not a great deal of prior knowledge concerning such projects, so a presentation at a staff meeting, as well as a detailed written description, was provided to those who expressed interest in participating in this study. Of the approximately thirty teaching staff employed within this particular school, eleven volunteered to participate. Three of these eleven people held administrative positions, either part or full time. Participants varied in their knowledge of computer technology from novice to expert (based on a self rating scale), with relatively equal representation of staff members from Division I (grades one to three), Division II (grades four to six), and specialist teachers (music, physical education, technology, and resource). It is significant to note that several others (five or six) expressed interest in the project but, with their teaching loads and the variety of other school commitments, felt that they could not become involved in one more activity. Several others stated that they did not come forward because they felt too inexperienced with computers and did not feel that their contribution would be valuable. Even though they were assured that everyone's input, at whatever level of global and technological understanding, was worthwhile, they did not feel comfortable participating in the study. Throughout this chapter, participants' views are presented in quotations in small print.

At the time of this research, this school was equipped with over sixty "state of the art" Macintosh computers, a number of colour printers, a scanner, an LCD panel, two digital cameras, and a variety of other technological aids. The computers were distributed equally amongst teachers (most of whom work in a team situation with one other partner, which allows for sharing four computers between two classrooms), and the remainder (at the time of this study) were located in a computer lab. Computers were networked throughout the school via one server (an intranet system), enabling students and teachers to access personal folders from anywhere in the building. Classroom teachers had all been using computers in essentially the same ways -- drawing, graphing, word processing, spreadsheets, HyperStudio, the Internet, etc. Support was available from the technology expert on staff, along with several other key individuals who had developed their computer literacy to a higher degree than others. There is a range of skill levels amongst teachers and (at least in theory) a spirit of open communication and sharing.

The variance in age and ability amongst students has been beneficial to both the individuals who assume a leadership role and those who receive the support and guidance of their peers. On occasion, students have been able to assist teachers in resolving some of the minor technological glitches (which occur with some frequency), but this happens rarely and only under the supervision of a teacher. This was seen as enhancing self esteem in children: "I'm not as current as I should be [due to the time it takes]...I don't have the knowledge. But that can become a positive because I can call on other kids. I want them to be able to go with their strengths." It should be noted that several computer-competent students have derived some enjoyment from sabotaging the school intranet system.
Some teachers have optimistically predicted that as more and more students master higher level computer skills and develop greater responsibility when using computers, they will have the ability to sort out their own [minor] computer problems. Statistics surrounding the number of students who have home computers also arose several times through the course of our discussions. In one classroom (with approximately thirty students), there was only one individual who did not have a home computer. Others who offered this information reported no more than three or four students who had not been exposed to computers in their homes.

The interviews were based on a set of questions (see Appendix I); however, in most cases the interview format evolved into conversations tailored more to suit the knowledge and experience of the individuals involved. This allowed for freer expression and flow of ideas and opinions, without sacrificing the internal integrity of the study. Invariably, the questions were covered in one form or another: responses either spontaneously emerged during part of another discussion, or were given as direct answers. As expected, several themes emerged from the topics discussed. Of significance to this study are the following themes or topics: Professional Development, Power and Politics, Child Development, Values, and a final commentary on the likelihood of inserting Global Learning Networks into this particular school. Other patterns can be identified as well, but for the sake of brevity, these are integrated into the aforementioned major areas. These themes are prefaced with a section about attitudes (both positive and negative) towards computers in general ("Technology Learning Outcomes: Computers as Mandate").

**Technology Learning Outcomes: Computers as Mandate**

The administrative team in this particular school and the surrounding community are in full support of computer technology. Theirs was a collaborative plan developed virtually from the first few days that the school opened, and it has since been followed through as a priority. "Where our [money] came from is [the principal took it] from capital budget and said, 'I believe in technology'. She had control of things at start up. Some was matched by parents. It was not a pilot school but rather believed in technology...this is the wave of the future. When the school numbers increased (i.e., students), the decision was made to get a technology expert...before that, it was articulated by the community..."

By and large, most teachers who were interviewed regard computers in the classroom as very positive. In some cases, future employability was presented as an issue. Because we are living in the so-called Information Age, the belief is typically held that it is the responsibility of the education system to keep abreast of current trends in order that graduating children will be on a level playing field. Even though it is impossible to predict what skills will be needed by the time elementary children graduate grade twelve, it was felt by *many* (not all respondents held this belief) that it is our duty to provide the same tools in our schools as are being used within our society. One person in particular expressed deep concerns about our general lack of scientific expertise in Canada. This comment was followed by an observation that we have moved so much towards the humanities that we have no balance within our education system. The associated concern here involves lagging behind in the global marketplace.
(The Information and Communication Technology document hopes to address such concerns through its emphasis on the forces that shape, monitor, and protect the school: governance, leadership, policies and procedures, regulations, legislation, and community standards.)

Several participants mentioned the benefits to children with special needs. Effective computer programs are also very helpful to the teachers who must design individual programs for students with mild to moderate learning difficulties. Interactive CD-ROMs have also been very helpful to students who do not speak English as their first language. If schools are equipped with high end computers, students not only listen to and learn to read English, they can also compose sentences, record their own voices, play them back, and monitor their own pronunciation, grammar, etc. Several respondents believe that for students struggling with fine motor development (which affects pencil grip, legibility, etc.), computers can provide an opportunity to produce legible work at a faster rate (once keyboarding skills are firmly in place). They are thus less frustrated in daily activities than when they are required to use pencil, paper, and eraser. "One student...[who] struggled with fine motor [coordination]...would never have used paper and pencil, but the computer was very exciting for him...he had an interest in learning. His fine motor was developing at the same time, but [it was] not as frustrating."

It is evident that most of the teachers on this staff are quite convinced that computer technology in classrooms is highly beneficial to students and teachers. However, it is widely acknowledged that we have some distance to travel and mysteries to unravel before full competence is achieved. Through the course of our conversations, several points of contention were articulated. The most common complaint involves the mechanical breakdowns that have become commonplace. The frustration associated with this is compounded by the fact that the computers are networked throughout the school: when the system "crashes", none of the computers work. While a teacher was specifically hired to take on the role of technology expert, there is an inherent downside to this position: as a teacher in the school, it is not always possible to drop everything at any given moment to provide immediate computer assistance, whether the problems be major or minor. Also frustrating is the time that it takes to wade through the abundance of material on the Internet in order to perform searches or complete research projects. Establishing computer bookmarks (which allow students to go directly to preselected sites) makes for easier access, but this becomes a time issue also, as teachers are required to do the legwork (which often requires a great deal of time and patience), adding to an already heavy workload.

Another weakness identified was the varying levels of computer literacy amongst the staff. A major thrust of the school technology committee (which is chaired by the technology teacher and composed of teachers who meet during lunch on a weekly basis) is to learn computer techniques and then share new information with other staff members. This has enabled a small number of staff members to develop more advanced skills than others, but at issue -- again -- is time.
These same people are functioning with full teaching loads in a very busy school with a number of other ongoing committees (e.g., diversity, professional development, assessment, division meetings, staff meetings, team meetings, partner meetings, parent meetings, etc.), making virtually everyone on this staff feel stretched to the limit.

When asked about the philosophical objections that have arisen to the mass introduction of computers into schools, everyone acknowledged that these may have some validity, but not to the extent that they would personally disconnect their classroom computers or carry out any further investigations. One individual, however, did have some solid opinions about computer technology in elementary school: "...It makes no sense that so much money has been spent on technology in elementary school. Why was [subject school] designated the flagship techno school? Technology alienates people from the real world...computers are a fast food mentality..." Because these objections cover a relatively broad span (from health concerns to financial burdens), it was somewhat difficult to deal with specific issues. It was acknowledged by all respondents that they had not been exposed to, or had not sought out (likely due to time constraints and pressures to complete the curriculum), a great deal of information that would detail the issues in this area.

One individual expressed concern about the way that computers are used in the home. Strict parental guidance at home was believed to be a reasonable solution to many of the problems associated with such abuses of computer technology as visiting inappropriate web sites. Attached to this statement was the suggestion that parents need to undergo a sort of training program in order that their children regard computers not as toys or electronic games, but rather as tools to serve educational purposes. The difficulty here lies in who decides and defines "educational". Software developers could claim that vital problem solving skills are being promoted in programs that have children wend their way through a variety of mazes while killing off the "bad guys". While this may be a form of problem solving, who can say whether it is desirable, given the emphasis on annihilating one's perceived foes? Another individual felt that children should never be visiting inappropriate web sites at school, as it is the duty of the teacher to continually monitor students and to take the time to bookmark relevant sites before the children even sit down at the computers.

Every respondent, at some point throughout the interview, stated that computers are tools. One individual was particularly adamant that we must be careful not to become too dependent on machines to solve all of our problems, and that, furthermore, having to use books for research purposes still holds a great deal of merit in aiding the development of patience and perseverance. Research tends to be a slow process, and children -- particularly in the early grades -- will be done a disservice if an attitude of "quick fix" in school is promoted or permitted. The belief here is that certain skills (including spelling and penmanship) are necessary prior to reliance on spell checks or keyboards. To this respondent, computers cannot eliminate these kinds of problems, but rather exacerbate them. Several others held the same opinion. Computer addiction was also raised.
Although schools logistically could never provide the opportunity for such extensive use, it is an issue that needs to be addressed as more and more computers appear in homes. This perhaps fits with the aforementioned parent education that may be necessary. Whose responsibility it would be to take on this type of education program becomes a moot point. Administrators and teachers are cautioned to be ever watchful of blatant values education.

Another downside to computer technology is the high cost of maintenance and upgrading required to remain current. The fifty million dollars that was allocated by the provincial government (although it is a vast sum of money) will never be enough to keep up with such rapidly changing technology. "Everything's getting bigger and bigger, we load on these huge programs, then we have to add on more memory, then everything gets bigger again..." If this is left to individual schools or school boards, the problem of inequity becomes even more pronounced. Wealthier school boards and school districts may be able to keep abreast of changing technologies, but others simply will not have the financial resources to maintain such a costly undertaking. In the case of individual schools, the relative wealth can be based to some extent on the success or failure of parental input (which includes not only fundraising but other resources such as donating time and equipment). In a school such as the one involved in this study, this may not be as big a concern as for others, for several reasons. As stated by one individual who participated in the planning of the technology program, it was built into the plan that hardware and software very quickly become obsolete and that certain allowances must be made if technology is to continue to be a priority in this community. Another reason is that parents are very involved in the school, both physically (in the classrooms and on committees) and financially (as evidenced in the various school fundraisers that occur).

Regarding the role of government in funding issues, one person fully believes that it was handled as well as it could be given the circumstances surrounding technology, funding, and society. Highly in favour of computers in schools, no matter what the cost, this person commented: "[As for] money issues...Parkdale Centre was renovated. Money is being wasted really badly. People are so brainwashed, they blame Ralph [Klein]...there are so many superintendents...now [CBE] directors still get between $120,000 to $150,000 a year....Money is more misused in this system than any other system. Sharing CBE money with the rest of the province is not an issue for me. The other things that go on are disgusting...they redid [renovated] downtown [central office] then hid the fact...they're not cutting top jobs, they're cutting secretaries." This suggestion that the blame for lack of financial resources needs to be shifted away from the Klein government and placed squarely on the shoulders of the Calgary Board of Education is based on the belief that there is plenty of money within our own board but that it is not being allocated wisely, particularly since offices are being remodeled and administrative conferences are still heavily subsidized.

Four of the eleven respondents were quite firm in their belief that the benefits far outweigh the shortcomings (some would perhaps deny that the term "shortcomings" even applies), and the remainder were less certain, becoming fairly quiet and noncommittal when this topic arose.
No one, of course, has all of the answers, and as teachers, we can only try to make the best of what is now a mandated part of the curriculum. There was widespread acknowledgement that we really cannot predict what skills will be specifically needed by the time the students in this school graduate from grade twelve, but at the same time, there was general consensus that computers, as tools, are an important part of schooling. The preceding highlights some of the benefits and some of the drawbacks according to the perceptions of the participants in this study. Everyone is resigned to the fact that computers are in the classroom to stay, so theirs is a search to find the most appropriate ways to utilize them. This does not mean, however, that there is a general feeling of preparedness, consideration of their perceptions, or sense of control. The following section addresses these issues in depth.

**Professional Development**

Although the school in this study is very fortunate to have a technology teacher on staff who has been available to provide ongoing support as well as learning opportunities outside school hours, there are still a number of concerns (in part philosophical and in part practical) amongst teachers in this area. From a philosophical standpoint, a number of teachers strongly object to taking even more of their personal time to learn and explore the potentialities of computer technology when there are already so many demands to learn new curriculums, attend meetings outside school time, become involved in various committees, etc. At the same time, there had not been any raises, nor had the five per cent of pay that was cut several years ago been reinstated. From a practical standpoint, concerns arose from staff members being at different levels of ability, which makes some technology presentations too simple and others too advanced. Also, there are no free workshops available through the Board that would address these issues, and visiting other classrooms and schools requires substitute teachers, for whom there is no special funding.

In answer to some of these concerns, all members of the administrative team, along with a few other teachers, pointed out that the directive from the Calgary Board of Education is that all professional development is expected to be carried out during personal time, with no allowances for substitutes or (in most cases) any monetary compensation for course fees. This is being promoted as one of the realities of the teaching profession. To quote one of the administrators who participated in this study: "We don't have the workshops or support available from the system, and if they're going to do it, they have to do it on their own time. There is not the money from the [Calgary Board of Education] or the government...[and they] can't do a whole lot. It does fall on the shoulders of the individual teachers...[but it has] some positives. [Professional development] costs money...[and when that] becomes an issue, people feel that if they want it bad enough, they are going to do it..." Another member of the administrative team held the same perspective: "The clear message from [Chief Superintendent] Donna [Michaels] is that anything we do in the future will be on our own time..."

The other side to this coin was quite succinctly summed up by one of the teachers on staff: "We are swamped right from August to June...Every time we turn around someone wants to add something.
[It comes to a] point you're going to have to take something away or the school day will become ten hours. They seem to want all of this to be part of the school, but there's been no thought put into resources or training staff. What I want is subs [substitute teachers] provided...after teaching a full day, [we] run off to the far end of the city for a technology meeting to find out what [we] want or need to know isn't what [we'll] get. It wasn't until one on one [with my computer] that I got something out of it, but I don't think that's viable. I used the Internet over the summer. Not every teacher has a computer at home, and with the lousy pay we get, who can afford to buy one?"

In the past, specialists were available, either to host after-school workshops or to provide advice and support over the telephone. Workshops still required driving across the city outside of school time, but there was perhaps less resentment because of the relatively easy access to specific professional development topics, and it usually didn't cost teachers any money. Elimination of these positions leaves teachers in a quandary: they need answers, but there is no one to ask.

In terms of the difficulties associated with learning about computers and incorporating them into curricular areas, some suggestions were put forth to streamline or improve the present situation. One was to have greater involvement and support from the administration team, providing more leadership in daily computer usage, and direct contact and assistance with students. The suggestion of providing substitute teachers or administrative support came from more than half the respondents. The suggestion to deploy the administration team in classrooms, in order that teachers could have relief time, came from a number of staff members. (This suggestion did not arise during conversations with members of the administrative team.) Not one teacher was well disposed to the thought of having to pay for ongoing workshops as well as attend these courses on their own time.

Another suggestion (mentioned in one form or another by several staff members) was that we are trying to do too much within the area of professional development. At the present time, we have three school-wide priorities: technology, assessment, and diversity (i.e., diversity in learning abilities as opposed to cultural diversity). Because each of these topics is so broad (and, it must be noted, important), it is difficult to master any one in particular. "We need more [professional development] time. With all our different committees, we are becoming fragmented. There's not enough time to become a master of anything. We're all becoming a 'Jack of all trades'. One suggestion to make PD [professional development] better is to simply focus PD on technology or make it individualized so people choose their own PD and rely on their [own] professional judgement...if they don't think they need to upgrade in any way, they don't...but just focus on one thing...[like] technology."

This is a very sensitive topic and, due to the position of the administration, one that cannot easily be resolved. As one of the administrators pointed out, "PD needs to be tied in to the needs of the school. [Teachers] can pursue individual PD if they choose to do things for their own growth on the side, that's their choice, but when you're tied to a particular community, these are the needs, the priorities, they need to be tied all together.
When teachers make moves to other schools, what should be foremost in mind is what are the priorities in other schools...Teachers need to look around when they come to a new school..."

Having professional development in school (i.e., during one of the weekly staff meetings) that would allow for sharing of technological applications amongst teachers was also put forth. However, one of the difficulties, as expressed by one of the teachers who has taken the time to become quite familiar with technology, is that some teachers may be expected to assume a leadership role, which would increase their workload, while they may not necessarily receive anything usable in return. At the same time, the expectation to participate in other school committees, etc. is becoming overwhelming: "I'm a little confused. We are getting all this pressure from administration to learn technology, but I'm asked what I'm doing in other areas. I don't have the time. Leadership...okay, I'll donate my time and energy, but are they going to do something to update my skills too? The technology committee could train others for a day, but they could use that day to learn more themselves."

One of the greatest difficulties for teachers lies in the lack of simple instruction manuals (in print form) that would permit individual learning (assuming that teachers feel they have the time for it) or easy access to simple solutions when problems arise. It may seem that teaching and learning keyboarding is straightforward and simple, but it is not. There are many "tricks" that must be introduced and explained (for example, to perform certain functions on Macintosh computers, one must know the right keys to push; to unlock the system, teachers are given a top secret "hot key" that cannot be written down anywhere lest a student stumble upon it and, with malice aforethought, wickedly wreak havoc with the school's entire computer system).

Professional development is a major area of concern for teachers, but no one seems to want to listen. One of the administrators commented that: "Stress is extremely high in the CBE. EAP [Employee Assistance Program] is very busy...people are talking about suicide kinds of things. Temporary staff are also experiencing greater stress, but everyone wants a job so badly that they just do it [whatever is asked of them]..." Connected with stress is the sense that no one has much control over what happens in their own learning environments. It is true that teachers make the final decisions regarding teaching methods, but with mandated curriculums, they have very little input, if any, regarding content. This is not, however, the only area in which these staff members expressed a sense of powerlessness.

**Power and Politics**

A recurring theme throughout the interviews was the role of individual power (or, perhaps more appropriately stated, powerlessness). This feeling was expressed in varying degrees and forms, either overtly or covertly, by both teachers and administrators. In the words of one administrator: "...I feel very powerless lots of times...caught in a Catch 22. Teachers in this province need to be vocal; parents need to start speaking up...the government cut services to health and welfare too [along with funding cuts to education]...these are all human, it's not big business. It's the human things that affect the quality of life and [it's]...the quality of being that is being undervalued...in the CBE I get discouraged.
I go to system meetings, talk and interact with superintendents...get a picture of the CBE in relation to government and the rest of Alberta...talk about feeling powerless...outside the school I detect a sense of hopelessness. I find it comforting to come back here [to school]...here I can make a difference."

Intricately interwoven into power issues are political issues. For all our talk in recent years of empowerment, it appears that we are a fair distance away from actually achieving such lofty ideals. In one sense, teachers can wield a great deal of power (power over children and, in some cases, colleagues), and have often been held responsible for destroying self esteem. In another sense, there is a great deal of powerlessness associated with being in the teaching profession as well. We must concede that inherent in this profession is a hierarchy: teachers must follow the rules as laid out by the administration, although collaborative decision making is promoted as diminishing the power structure to some degree; schools must work in accordance with school board policies (including funding) as well as Alberta Education (perhaps one of the greatest power brokers, as it is this body that decides what, when, if, where, why, and how curriculums are developed), which in turn has some commitment to the government (and, of course, to the business community).

At the school level, there was a fear or concern expressed amongst many of the teachers that they could only ask the "right" kinds of questions -- i.e., those that would not in any way challenge the status quo. This includes questions regarding technology, professional development, committees, etc. This kind of thinking not only creates a good deal of resentment and behind-closed-doors chatter, it is in direct contradiction to what we purport to be instilling in the minds of the children with whom we come in contact. It is difficult to state with certainty why we tend to be so acquiescent when it comes to such issues, but several staff members speculated that since teaching is a nurturing profession, we are generally reluctant to create potential conflict situations.

"The unfortunate psyche of teachers is not to revolt...we are nurturers...always trying to make someone feel good and you can't make someone feel good if you're being a jerk. But unless you're a jerk, no one listens to you. That's where business get what they want."

Another speculation is that a sort of public humiliation would occur, and yet another is that as soon as we step inside the doors of any particular school in the system, we need to be prepared to embrace the existing philosophy of the school, democratic decision making notwithstanding. As noted by a couple of staff members:

"...when your absence [from social functions] is noted by the administrative staff, why would we feel free to express opinions?"

"[There is a] fear in asking questions. I don't feel the freedom to voice my opinion. I have seen someone voice a politically incorrect opinion get publicly squashed...[so I try to] stay on the straight and narrow. [It's not] just this school, it's lots of other places. Even with Alberta Education telling us, we're becoming puppets...we have lost all that free thinking..."

If we feel silenced within our own small teaching and learning community, it becomes more understandable that we would not feel comfortable to question the "higher powers". This brings us back to professional development.
Many of the participants in the study felt that not enough research or planning went into introducing computers into schools (one objection in particular involves equity issues), and certainly that there was no allowance for proper teacher training. Several people strongly felt that politics (i.e., government influences) played a major role in the fifty million dollar funding allocation, since no election is ever really very far away.

"I think the government tells especially Albertans what they want to hear, especially if you belong to the Chamber of Commerce in this province...If the Chamber of Commerce valued hanging from your toe, we would have a course in hanging from your toe whether it made sense or not, and someone higher up in the Board or Alberta Ed would be able to justify it. The CBE would jump on the bandwagon...Many things in education are ridiculous, and I don't think a lot of thought is put into it...I'm shocked that such educated people don't stop and say, 'Hey, that didn't work there, why is it going to work over here?'...I like technology, but I'm not getting carried away with it..."

Whatever the ultimate reason that we are now struggling with the intricacies and difficulties of the teaching profession, there is a general air of just trying to keep oneself afloat in the teaching pool. Whether we perceive correctly or incorrectly that we have no (or limited) choice or power in deciding what happens in our classrooms, the fact remains that computer technology is here, and a technology plan is now a mandated part of the program of studies. Whether we feel empowered or disempowered, we will be faced with decisions not so much of a philosophical nature, but rather of practical application.

**Child Development and Values Education**

Although child development is embedded within every aspect of teaching and learning, it is included as a separate topic because of the criticism that was expressed concerning Global Learning Networks and their inherent philosophy. One respondent was deeply troubled by the idea of introducing social justice issues (poverty, discrimination, etc.) into daily classroom activities because of the level of intellectual processing that might be required. It was felt that at the elementary level, children are so very much involved in still developing a sense of self that to subject them to deeper issues would only cause guilt and confusion. It was stated that they cannot be blamed for where they are born, that poverty and homelessness would frighten them rather than sensitize them, and that such projects would involve teaching a certain value system.

"We have two world paradigms colliding with each other...my question is, whose politics do we use?...to define social justice according to whom, is my first question...where I'm opposed to [taking a stand regarding social injustices] is I don't believe that children should be the vehicle for that. They're struggling already just trying to figure out about friendships and their own self esteem...childhood needs to be gentle sometimes...the stress that everybody feels is passed on to kids. They grow up wondering, 'Will there be jobs?'...I ask myself, 'How can I help this child know more about themselves?'...then they can ask what they can do for the world. But if they haven't got that inside, even in their thirties or forties they're not going to do anything...Junior high [would be] appropriate [to introduce Global Learning Networks]; grade six possibly..."
Others were equally aware of the potential difficulties that may be encountered when introducing sensitive issues to children, but none agreed that elementary school was by any means too early.

"The world is not sweet and kind all the time...Even little kids can understand [that]. You have to look at the individual cases, but kids can understand that some people do not have food to eat or homes to live in...we have to be careful, but I think we underestimate their knowledge. They know a lot...Awareness is coming more to the forefront. We want kids to stop and think that some children don't have milk to drink, 'So what if I didn't get that toy [for my birthday]?' Isn't that what we want the world to be, more caring? Isn't it our job to impart the facts?"

Many staff members are also parents, and one in particular described a family incident surrounding a certain brand of running shoes that is associated with human rights abuses in another country.

"These issues have already come up in our own family...Every shoe manufacturer has the same thing. They can get very cheap labour in other countries. It's an uphill battle and only good for as long as the pressure stays on. These kids are not too young. I think we coddle our kids. We don't let them be real. Yeah, they're fortunate, but not all families in the world are that way. Kids can help...[perhaps] participate in fund raising for other countries for basal readers..."

The point to be made here is that most children (unless they have led very sheltered lives indeed) have already been exposed to a variety of these issues and are fully aware of more than they are credited with knowing. Television programming carries with it many subtle messages, and even the cartoons children watch may reflect any number of societal biases, forms of discrimination, etc. One of the respondents was very enthusiastic about becoming involved in a Global Learning Project, emphasizing the importance of properly introducing and discussing beforehand the issues to be explored. In answer to the concern that there is no sense of reality (at the elementary level), the general response was that it was very real indeed for children to be connecting with other children and sharing their experiences.

**Global Learning Networks - A Final Commentary**

The entire purpose of the study was to determine whether or not the teaching and administrative staff were amenable to introducing Global Learning Networks into their school year, either as an ongoing activity or as a discrete project. Because there has not been much information (if any) about these from Alberta Education or the Calgary Board of Education, for many staff members (both participants and non participants in this study), this was the first they had ever heard of Global Learning Networks. It is no simple task to condense such a massive area into a workable explanation, and this has undoubtedly merely scratched the surface. All but one of the respondents felt that using the Internet for such a purpose would benefit students at any age. As is the case with anything we undertake in classrooms, emotional and intellectual development must be taken into consideration when selecting projects, along with curriculum compatibility and the all important *time factor*. The first two conditions are relatively simple to control for, but the final condition, that of time, was a huge stumbling block.
It has already been emphasized that those of us within the teaching profession are feeling overwhelmed and overloaded, and the background research required to carry off anything else may be too much to hope for within this school. Even if one teacher were to carry out the legwork and provide an outline for others to follow, there is a great risk (as articulated by one individual) that it would become too prescriptive, and that pre packaged learning would not fit within the philosophy of the school. Although it may be discouraging for the time being, one must bear in mind that the infusion of technology into curriculums is in its beginning stages, and that as classroom teachers become increasingly competent in learning and delivering yet another new area, there may yet be much hope for the inclusion of Global Learning Networks either as special projects or as part of the existing science, social studies, language arts (etc.) curriculum. That Global Learning Networks are workable in Alberta classrooms is noted in the following chapter. A summary of the many successful and worthwhile school projects presented by Cummins and Sayers (1995) is followed by several stories of school projects in Alberta that illustrate how computers can be used to advance transformative pedagogy in an integrated sense.

CHAPTER FOUR

GLOBAL LEARNING NETWORKS: THE SUCCESS STORIES

From the vast number of successful online research activities, Cummins and Sayers (1995) present eight examples of Global Learning Network experiences derived from two central agencies: I*EARN (the International Education and Resource Network) and Orillas. Both of these networks emphasize intercultural learning. I*EARN's focus is using telecommunications to raise and confront issues such as prejudice and intolerance worldwide. Its aim is to enhance students' development of critical thinking and empowerment while generating new knowledge in a collaborative setting. The basic objectives of Orillas are very similar, providing a multilingual mechanism for focusing on activities that promote social change, enhance understanding of other cultures, value community traditions (for example, folklore and oral history), and advocate for antiracist education (among others). Both organizations strive to include schools within and beyond North America to ensure cultural diversity. What follows is a brief overview of the case studies that Cummins and Sayers present in their work, demonstrating the ways in which global learning can succeed through the use of computer technology.

**Scenario One: Bosnian Refugee Camp 1993**

This project began when Narcis Vives, an educator from Barcelona, heard of a pacifist organization's plan to visit a Bosnian refugee camp (which had been set up by the Croatian military) in Savudria, Croatia. Vives offered to donate a computer and modem to the camp's makeshift school. Once the electronic mail system was underway, Sanel Cekik, one of the 500 children in the camp, transmitted a message through the I*EARN network to schools all over the world describing the horrors that he and his family were subjected to during the war. Translations into other languages were made possible by posting a general request throughout a variety of Global Learning Networks. A student from New York was able to translate Sanel's message into English, and from there it was translated into many other languages around the world. Responses to Sanel's message came from children all over the world, expressing their care, concern, and shock.
Further communication came to a halt, however, when the Croatian Army director intervened and quite literally pulled the plug on the computer. Not about to be defeated, Vives posted an electronic mail message telling of the unfortunate circumstances, but it came with an alternative plan: students were asked to send drawings, letters, videotapes, etc., to Barcelona, where they could subsequently be taken to the camp. The team of relief workers then organized an "International Day of Solidarity," which included a videophone conference call between the children at Veli Joze and students from two cities in New York and eight schools in Barcelona. To achieve this, a video camera was placed in a telephone receiver, enabling photographs to be transmitted along with simple written text during phone conversations. A somewhat unexpected offshoot occurred as a result of the "Day of Solidarity": a clown who had been invited to entertain the children organized "Clowns Without Borders", a volunteer troupe of performers who raise money and perform for children in Bosnian refugee camps. This concept was picked up by entertainers in other parts of the world, and benefit concerts have been held in Pakistan, Chiapas, Brazil, and countries in North Africa. As a final point, Cummins and Sayers state: "This communication, while uncomfortable, has generated a resolve among many students to understand the roots of discrimination and to confront its manifestations in their own societies." (p. 30) An outcome such as this underscores the value of participating in such activities.

**Scenario Two: Second Language Acquisition - Maine and Quebec City**

This initiative by the Orillas global learning network involved an interschool cultural exchange in 1992 between two upper elementary classrooms: one in Maine, and one in Quebec City. Both groups were new to Global Learning Networks. The students in Maine came from a francophone background and were interested in reclaiming the heritage of their parents (many of whom still spoke French at home). The native language of the students in Quebec City was French. They began their communication by exchanging cultural packages (containing such things as individual and class photographs, samples of soil and flowers, etc.) but without knowing the exact identities or location of their counterparts. The opportunity to practice a second language was beneficial to both groups: the English speaking students in Maine communicated as much as they were able in French, and the French speaking students in Quebec City used their English. By the end of the year, they had published a bilingual magazine, which the students themselves wrote, critiqued, revised, and translated. In this case, geography worked in their collective favour: living only three hours apart allowed for a unique culminating project -- they met face to face. The students from Maine traveled to Quebec City and were perhaps somewhat surprised to learn that their French speaking counterparts were deaf, essentially making French their second language and sign language their first. The potential for learning another language through computer technology is self evident. An added benefit was that the natural delays in communication allowed students time to reflect on and revise their writing before responding.
A project such as this not only dovetails very nicely into curriculum (i.e., it is not one of the dreaded "add-ons"), it also authenticates second language acquisition in several ways: students communicated with students, making it possible to fully comprehend the frustrations that accompany learning a foreign language; and this was a collaborative effort, involving not only the teacher as "expert", but the entire student body as teachers in their own right.

**Scenario Three: Confronting Prejudice -- New York and San Francisco**

This project was made possible through Orillas in 1993 and was based on the work of Gordon Allport, a well known social psychologist who studied the effects of cooperative learning activities on reducing prejudice. The aim of the two inner city schools, Sheridan in San Francisco and P.S. 19 in Brooklyn, was to deepen understanding of cultural differences and attempt to erode prejudice. Interethnic tensions between African American and Latino students at Sheridan Elementary (a newly desegregated inner city school) were exacerbated by the physical constraints imposed upon them by a playground filled with portable classrooms, as well as by having very little in school contact, since the Latino students were in bilingual classes and the African Americans were in "regular" or English development classes. Along with restricting their freedom of movement, this severely limited their chances of meeting on a social level even at recess. The global learning partnership these third and fourth grade Latino students formed with their Brooklyn counterparts was most appropriate. The third grade students in New York were primarily of African Caribbean descent, thus forming a unique kind of bridge for all the participants. The partnership began with a video exchange involving indigenous celebrations from Sheridan, and traditional games from P.S. 19. These video exchanges proved to be very successful, with each group learning much from the other. Initially, the partnership included only the two classes; however, a conversation in the staff room at Sheridan generated interest in a grade two teacher (herself of African American descent, as were most of her students), ultimately resulting in their participation in the video. "Miracles" obviously did not happen, but this was a beginning, especially since an intra school association arose (and continued in the following years), providing an important (perhaps necessary) opportunity to develop two way understanding within the school. This association not only brought students face to face, it united them further through sharing family stories. The end of this project became the beginning of something else.

**Scenario Four: San Diego, Denver, and Puerto Rico: Parental Element**

Cummins and Sayers remind us that the flexibility of an organization allows for moving beyond its own boundaries -- there are no rules that prevent parents from becoming active participants along with their children. At the beginning of the 1989-1990 school year, families from Sherman School in San Diego had been invited to join an after school computer course (originally designed as a literacy course). Their ethnic backgrounds were varied, and included Latino, African American, Cambodian, and European American. The purpose of the course was to provide instruction in computer use (word processing and telecommunications), then to form partnerships with others through an Orillas project using electronic mail.
The final goal was to publish a local newsletter that would be distributed in the community. As a kick-off, families in San Diego created a book composed of cultural contributions from the various ethnic groups and sent copies to their partners in Colorado and Puerto Rico. As their proficiency in basic computer use increased, they began communicating with their long distance partners via electronic mail. Spanish speaking participants were elevated to a new status, becoming teachers as well as learners. Children from backgrounds other than Spanish (for example, Keovong, whose family fled from Cambodia) became interpreters for their parents, recording their stories, then working with others to translate their messages into English or Spanish. The continuing support and enthusiasm of the Sherman School group helped to bring about several publications that were distributed within the community. These included: information concerning parent teacher conferences (purpose, procedures, etc.), an international collection of articles that focused on technology and self esteem, and a community newsletter featuring people from their own community. In addition to learning about computers, the families involved "...recognized the many resources and talents of their local and distant colleagues...[and] realized how much they could accomplish if they pooled their efforts in new ways...[and furthermore]...gained greater confidence." (p. 50).

**Scenario Five: Explorations in Folklore: Integrating Proverbs**

In an Orillas-initiated project, classes were invited to collect and analyze proverbs. It was set up as a contest, including such requests as: best drawing illustrating one of the proverbs; best original fable (writing an original story based on a proverb); greatest number of animal proverbs by a single class; and best original essay concerning a proverb that projects a biased or outdated viewpoint (for example, one which places women solidly in the home). Several schools in the United States and Puerto Rico expressed interest and participated in this project. Each selected their area of interest, used their own unique approach, then shared their findings through the electronic mail system. It was a valuable learning experience not only because it became a relevant and educationally sound language learning unit, but also because it provided students with the opportunity to share cultural and linguistic knowledge. At the same time, they were engaged in critically analyzing content while recognizing and valuing their own and each other's oral histories. Cummins and Sayers make one final point about using Global Learning Networks to explore folklore: "...they also can bring cross cultural awareness and language skills to students who otherwise would never have access to people from distant lands and other world views." (p. 57). Understanding that we are all connected through certain universal themes can help to deemphasize and demystify the psychological barriers created by that which is foreign to us.

**Scenario Six: The Holocaust: Confronting Prejudice and Intolerance**

The Holocaust/Genocide Project is an ongoing on-line opportunity for students all over the world to use computer technology to access information about the Holocaust. It started in 1992 through I*EARN in Israel and has involved schools across the globe: Argentina, Australia, the former Soviet Union, Poland, and the United States. Information is accessed through the Internet, providing databases and computer archives from around the world.
In addition to this, an on-line forum exists wherein members can share ideas, partake in discussions, give and receive advice about research topics and methods, etc. Although most of this occurs on-line, in one case involving an Australian teacher, a teacher from California, a student from Seattle, five students and their teacher from New York, and thirty-five Israeli students, information printed on a computer screen was brought to life. This group traveled to Poland, where they visited death camps and Holocaust memorials, then went on to Israel where they stayed with host families. This, of course, deepened their understandings of prejudice and intolerance. In addition to on-line access, the Holocaust/Genocide Project produces an annual publication called "An End To Intolerance", which publishes commentaries from students and teachers as they complete their respective projects, expressing their feelings, findings, and hopes. Furthermore, new Holocaust teaching units are developed by students and teachers based on research conducted over the Internet. These lesson plans are then made available through Global Learning Networks. In the words of Reema Sanghvi, a grade eleven student, "While working on this telecom project, I have learned many things...children who are educated to respect other cultures, races, and religions generally grow into tolerant adults who raise tolerant children." (p. 61). This poignant comment by a young person further impresses upon us that these kinds of learning experiences are not isolated but rather have a real life, long term effect.

**Scenario Seven: Safe Drinking Water: Nicaragua**

This project was prompted by a visit to Nicaragua taken by I*EARN's Boston coordinator, Dell Salza, in 1991. She went on this trip as a participant in Habitat for Humanity (an international organization whose major purpose is to provide housing for people living in "Third World" conditions), and as she witnessed the lengthy daily trips that women and children had to make just for their water supply (which, in many cases, was contaminated; many people were suffering from cholera), she was moved to transmit a call for help through I*EARN. She had learned that a contribution of $100 to $250 could provide the materials necessary to build a sanitary well. This became known as the "Rope Pump Project" and involved very simple materials: concrete, wood, a single wheel, and a length of rope with metal cups knotted at regular intervals. The response for contributions was widespread, coming from the United States and as far away as China and Spain. The impact was equally profound for both contributors and recipients. In one case, a thirteen year old Nicaraguan girl, Leyla del Carmen Campos Luna, sent a letter of gratitude for no longer having to walk four kilometres each day for water. Kristi Kraus, a fifth grade student from Oregon who helped to provide this particular well, expressed her own excitement at having been part of this project: "The jog-a-thon was a big success...[and] raised $2,143.00...that's 21 pumps! ...This is the best thing our school has ever done!" (p. 64). At the time of their writing, Cummins and Sayers reported that $10,000 had been raised for the Nicaraguan "Rope Pump Project." The educational benefits of learning how to construct a rope pump, participating in a fund raising project, and communicating cross culturally are immediately evident. The learning goes beyond a skills-based approach to education.
Also included in this experience was the opportunity to discuss "...controversial issues such as comparing the disparity of access to health services in industrialized and developing countries..." (p. 66) -- in other words, the opportunity to apply higher level analysis, synthesis, and critical thinking skills to a real life situation.

**Scenario Eight: "The Contemporary": Long Island, New York**

Beginning as a high school newsletter, "The Contemporary" soon became a popular magazine composed of student-written and edited articles. From there, it was picked up by I*EARN and has become an international telecommunications forum for students to voice their opinions and learn about issues of global importance. In January of 1994, the Middle Eastern crisis was posted as the topic for discussion. This was, of course, a highly controversial issue, but the intent was to provide a vehicle through which common understandings could be reached as a beginning step toward future peace. Personal testimonials came from both Palestinian and Israeli students, but rather than achieve better understanding or tolerance of one another, tempers flared, and fears grew within I*EARN that continuation of these kinds of discussions would serve to alienate schools around the world, thereby threatening the future success of Global Learning Networks. The debate was given a second chance in the May, 1994 issue, where further discussion included reactions from students in other parts of the world. The end result has been to continue with *The Contemporary*, while at the same time issuing disclaimers about student publications in order to preserve the credibility I*EARN has established. While the students who participated in discussions surrounding the Middle East crisis discovered that there are no quick fixes to such complex issues, they undoubtedly developed a broader understanding of how politics can be tightly interwoven with intolerance and hatred. In this case, they may not have felt entirely successful in achieving their original goal, but at least they had the opportunity to express their beliefs and opinions. This was a valuable lesson, both for the discourse it afforded and for the student editors who conceived of the topic.

GLOBAL LEARNING: THE ALBERTA CONNECTION

The case studies presented by Cummins and Sayers are inspiring, to say the least. They not only present computer technology as a highly desirable avenue through which to explore global issues, but as a very necessary one. In Alberta, we have the technology in place, but are we utilizing it within a global context beyond doing computer searches for data or for "fun"? "In Focus," Alberta Education's newsletter, has published several accounts of the way in which the Internet or World Wide Web have been used within classrooms. The March, 1996 issue chronicles several such instances of teachers and students connecting globally. Sherwood Heights Junior High School in Sherwood Park participated in a project with Rothberg High School in Tel Aviv soon after the assassination of Yitzhak Rabin, Prime Minister of Israel. Students from Tel Aviv used electronic mail to share their own perspectives on his assassination. This was subsequently posted on Sherwood Heights School's home page. Gibbons School in the Sturgeon School Division reports several ongoing projects that provide opportunities to link with their global neighbours.
Among these is their "Taming the Tube Project 1996," which involves investigating the television-watching habits of ten to twelve year olds. Along with learning about the mechanics of computer use (i.e., data collection, organization of material, etc.), they are required to analyze issues such as how television influences the attitudes and lifestyles of their peers. While this is not closely aligned with critical inquiry in terms of social justice, it is a beginning into awareness of certain Canadian pastimes and their relative value. The next step (i.e., considering such matters as equity, energy consumption, marketing tactics, etc.) may not be that far away. Vital Grandin School in St. Albert was involved in a project with approximately fifteen thousand students from all over the world wherein children share their favourite books and/or authors with one another. There is great potential for understanding cultural diversity and identity within this context. There are undoubtedly many other stories of Alberta classrooms connecting globally in a socially meaningful way, and if teachers and students become more accustomed to the technology and its application to broader issues, ideas will be shared, voices will be heard, and a new era in education can begin. This brings to a close the research basis for this thesis. The final chapter contains a synthesis of the findings as well as conclusions that can be drawn from this undertaking.

CHAPTER FIVE

DISCUSSION

What began as one specific study into the feasibility of introducing Global Learning Networks into an elementary setting evolved into a much broader inquiry into the role of technology in education at large. This entailed both practical and philosophical implications of computers in the classroom. This discussion will begin with a focus on the specific issues regarding computer use in the elementary classroom. This will be followed by a review of the specifics involved in introducing Global Learning Networks into a Calgary classroom. Pedagogy within the Calgary Board of Education will then be explored, followed by a discussion of the broader societal/cultural influences that permeate our thinking, at an almost unconscious level, with regard to the role of technology in society. Closely connected to the latter, but dealt with separately, is the involvement of business and politics in education. This, in turn, will be followed with a few final remarks.

**Preamble**

In order for teachers and educators to make informed, intelligent decisions about how and when to use computers in schools, we must be able to distinguish sound from faulty research. Before we accept as truth the stated benefits or shortcomings of computer technology or its related software, we must first bear in mind that research findings represent someone else's interpretation. As readers, we must pay close attention to what was highlighted and what was ignored within the final discussion and conclusions. An example of dubious interpretation lies in the research of Evans-Andris (1995). In her effort to determine what should be the role of the computer coordinator in an elementary school, she discovered that integration of computing activities into subject areas was low. Amongst her recommendations to rectify this problem was the suggestion that the computer coordinator should not assume classroom responsibilities but rather be on hand to provide advice, resources, etc. for teachers.
This was a worthwhile suggestion, but what was neglected in her final analysis was what the teachers were saying: morale was low and the workload was high. It is not that her interpretation was necessarily inaccurate, but rather that her final discussion was not complete. Simple solutions are not always workable: foisting more work on overloaded teachers is not likely to improve morale, reduce stress, or motivate them to incorporate computer technology into daily classroom life. There are two other overriding thoughts to entertain while reading the research. The first consideration is who is initiating the research: if a technology design company with a clearly vested interest in promoting its software is involved, then we must seriously question whether the results are truly unbiased and whether the whole picture is fully brought to light. The second matter involves evaluating the degree to which the reported benefits outweigh those of other methods of delivering curriculum. We must never forget that computer technology is rarely a substitute for real experience.

**Research Discussion**

In this section I will examine some of the various issues surrounding the use of computers in the classroom. This will include a discussion of the perceived benefits and perceived problems associated with computer use. These issues involve the use of computers in general, but they are also implicit in the use of Global Learning Networks. In order not to become entangled in a lengthy discursive debate regarding computers in general, I will confine this section to these key areas: learner attitude (which includes motivation, cooperation, and self esteem), cognition, and health concerns. Other relevant topics such as financial costs, social implications, and the Information Age will be integrated into the final two sections.

**Attitude Towards Learning**

Although it is not measurable in any sort of statistical sense, an important determinant of school success is motivation. Despite the lack of quantifiable evidence to prove or disprove that the computer is effective as a motivating tool, this "behind the scenes" factor is included as a separate section because motivation has a powerful influence over teachers' decisions regarding teaching methods. Software developers, fully cognizant of this fact, may use effective advertising techniques rather than sound pedagogical justification to market their products. As a classroom teacher, one of the assertions to which I personally take exception centres around computers as highly motivating tools. Many studies, whether they are specifically investigating motivation or not, include statements about students being excited, highly motivated, or exceptionally well behaved when they are placed in front of a computer (e.g., Capper, 1988; Clark, 1992; Hay, 1997). This is not to say that computers are *not* an effective way to motivate children, but it is not the only way, and it may have little to do with the machine itself (for example, teacher excitement, content, and the opportunity to work with peers may heavily influence attitude). There are many other approaches to new learnings that are exciting also, and we must not be swept away by the false belief that computers will suddenly transform attitudes towards learning. Another factor that may influence motivation is the set of cultural norms surrounding computers: high tech solutions are often seen as glamorous by both teachers and students.
Many of the studies relating to motivation also include a word about cooperation (e.g., Nastasi and Clements, 1993; Sivin-Kachala, 1995; Peck and Hughes, 1997). Again, it is not that computers do *not* enhance cooperation, but the claim that computers in and of themselves are responsible for improved cooperation is difficult to prove. It is crucial to know whether control groups are included in such studies, what standards are used, and how outcomes are derived. Increased learner self esteem is included as a positive reason to employ computers in the classroom (e.g., Sivin-Kachala, 1995); however, this is another learner effect that may -- and does -- occur independently of computers. If students are successful when using computers, they certainly will have feelings of achievement. On the other hand, I have also witnessed (and felt) great frustration and feelings of complete and utter failure with computer use. Whether students are using pencil, paper, and books or whether they are using high tech equipment to complete their work, they will feel successful to the degree that they have control over their own learning. To use motivation, cooperation, or learner self esteem as a prominent reason to turn on computers or to expect that they will be magical, motivating, mind-enhancing machines is ridiculous. This is where studies claiming that computers promote better learning may be somewhat misleading. Computers *may* help, but this cannot be used as any kind of guarantee. In the final analysis, what must be taken into consideration is the overall effectiveness of each and every method we use to promote good teaching and learning.

**Cognition**

Turning to research that specifically targets computers as they relate to the cognitive domain, we find staunch enthusiasts and devout doubters. In this section I will highlight two types of software: skills-based and interactive. Discussion of the Internet will be included in later sections. There are claims that certain kinds of software can actually hasten cognitive development (e.g., Papert, 1980). Assumptions such as these could well be correct; however, we should question *why* it is important to hurry children into maturity and *what* the possible consequences could be. Before we get too carried away, we need to seriously think about what constitutes quality learning and quality of life. It is almost becoming a given that children are involved in so many different activities that free, imaginative play is being crowded out and childhood stress is on the rise (Elkind, 1995). What is important in terms of classroom use is our ability to discern what is appropriate software and what is worthless. The technology learning outcomes set out by Alberta Education are generally skills oriented (for example, students are expected to be able to demonstrate basic understanding of operating skills; know how to organize and manipulate data; be able to access information; learn how to integrate various applications; etc.). Whether we are using straight skills programs, CD ROMs, or the Internet, we always need to be questioning the merit of technology above other forms of learning. It is generally accepted, even amongst those whose eyes [virtually] light up at the mention of computers, that software design in the early days of computer technology was of poor overall quality, but this does not mean that all of today's software is of high quality, either.
**Skills Based Software**

Turning first to skills development: new and improved skills oriented software, although under threat of being supplanted by newer interactive technology (e.g., multimedia and CD ROMs), is still up and running. These programs may still have a place in schools or homes, but they need to be of superior quality or interest is soon lost to the tedium and repetition associated with practicing a particular skill. Skills software usually targets two main subject areas: mathematics and reading. Computer programs focusing on math skills are perhaps the more common of the two. Memorizing math facts via an electronic medium may be interesting for some, but it can also get very dull very quickly. If children need support in memorizing basic facts, the cost of software should be weighed against the cost of simple, effective math oriented games that have withstood the test of time (e.g., Snakes and Ladders, Bingo, or card games). One of the disadvantages of computers is that they usually limit the number of participants to one or two players. More than one person at a computer is usually awkward because, in order to see properly, one needs to be looking directly at the screen. Traditional games, on the other hand, can usually be played in groups of at least two people, which can be more fun and at the same time allows greater opportunity for dialogue, explanation, and discussion, which are known to be important factors in solidifying understandings. Reading programs, too, need to fall under close scrutiny. As Oppenheimer (1997) discovered, even the "better" programs may prove to be so focused on skills that creativity and innovative thinking are sacrificed. Using a computer program to boost skills may be a novelty at first, but if students are struggling within any particular subject, there are often more cost effective, interesting ways to learn that could involve human interaction. Usually, if children are having academic difficulties, what they are likely to need most is the opportunity to ask questions or receive clarification. Software cannot always provide this kind of interaction.

**Interactive Software**

Despite promises, even the new interactive programs cannot be fully relied on to promote cognitive development. As brought to light by Healy (1998), there is a general lack of hard evidence that computers can enhance or accelerate learning. Armstrong and Casement (1998), in their extensive review of computers in schools, had similar reservations, and even found research that showed the opposite: students using computer programs were found to have lower scores than those taught by more "traditional" methods. Some computer programs may be replete with bells, whistles, and animations, but if there is a basic misunderstanding of a concept, dialogue with a human being is more likely to sort out the problem. Also, as witnessed by Oppenheimer (1997) during his visit to a special needs classroom, too many special effects can become distracting to the point of being almost completely counterproductive. It is this type of sensory overstimulation that also issues from television and electronic games, and it is this sensory overstimulation that many people believe contributes to short attention spans (e.g., Sanders, 1994; Healy, 1998; Armstrong and Casement, 1998). Using such software with children who are having difficulty concentrating could worsen, rather than improve, their chances of achieving success in school.
**Health Concerns**

Perhaps the most important concern connected with computer use, and the one paid the least attention (although not completely ignored), is the potentially negative impact on children's health. Whether it is the responsibility of the government, Alberta Education, provincial school boards, individual schools, administrators, or teachers to disseminate more complete information about potential health dangers is a moot point. Alberta Education, in its "Information and Communication Technology" document, does include computer safety as one of the foundational knowledge components under the heading: "Students will practice the concepts of ergonomics and safety when using technology". The specifics for each Division are listed below.

Division I (Grades One to Three):
1. demonstrate proper posture when using a computer
2. demonstrate safe behaviours when using technology

Division II (Grades Four to Six):
1. demonstrate the application of ergonomics to promote personal health and well-being
2. identify and apply safety procedures required for the technology being used

Division III (Grades Seven to Nine):
1. identify risks to health and safety that result from improper use of technology
2. identify and apply safety procedures required for the technology being used

Division IV (Grades Ten to Twelve):
1. assess new physical environments with respect to ergonomics
2. identify safety regulations specific to the technology being used

What is not outlined in this document is the exact meaning of each of the above descriptors. It is interesting that it is only when students reach Junior High School (Division III) that any reference is made to the possibility that health risks even exist beyond postural concerns. Ergonomics is, of course, very important, but a much more potentially dangerous effect is radiation. If we are to believe researchers such as Palmer (1993) -- and there is no reason not to -- it is the younger children who may be in greatest danger because they are still developing physically and mentally. Perhaps the best documented problems, and the most likely to affect large numbers of people, are those associated with vision. Not only do we need to be aware of flicker, jitter, glare, and resolution problems; Healy (1998) also brings to light (no pun intended) that visual irritation can be worsened by improper lighting (too little, too much, or the wrong kind). There is little we can do in classrooms about the *kind* of lighting, but we can ensure that computers are in the best location to minimize glare. As more information on this topic becomes available, it may become an issue that school boards will be forced to address, but until such information becomes more easily accessible to the general population (i.e., through newspaper articles, television news reports, etc.), it is unlikely that it will be addressed in any way other than what now exists in the Information and Communication Technology document.

**Summary Comments**

The issue of computers in schools is indeed a complex one. Irrespective of whether research can prove that they are necessary tools for schools or that they are a complete waste of time and money, they are in the classrooms to stay. Not only must we utilize these machines judiciously in concert with mandated curriculums, we must exercise caution in *how much* time students spend using them, regardless of whether they are used for motivation, cognitive development, learning basic skills, or Global Learning Networks.
Although Global Learning Networks may appear to be a distinct subject area within computer application in the classroom (as opposed to the much larger issues of the role and significance of computers in classrooms), the two are not mutually exclusive; rather, the one encompasses the other and each sheds light on the other. One involves computer application in the classroom and the issues surrounding its use; the other is a way of using the technology to promote transformative pedagogy. Prior to discussing the outcome of the empirical study as it relates to Global Learning Networks, it may be helpful to again highlight the pedagogical approach to cultural diversity that is inherent within our Calgary school system. The Calgary Board of Education states in its Quality Learning Document, regarding significant learning outcomes: "[students will be]...aware of, appreciate, and accept cultural and personal differences." (p. 15) This approach, as stated earlier in this thesis, is progressive in nature. It would seem to recognize the validity of different world views, but there is no clear provision for critically examining cultural beliefs, biases, or values; in particular, those of our own society are not open to question. This is not to say that they do not promote acknowledgment of differences. Within the description of teacher understandings, it is stated that: "Teachers value diversity within a responsive environment by:...recognizing the validity of different world views and life experiences..." (p. 14) This is connected to their suggestions of ways that students value diversity: "... [by] sharing their beliefs and experiences with one another...respecting others' rights to different beliefs and values...valuing different forms of expression...valuing one another..." (p. 14) This is valuing without evaluating.

Part of the purpose of this study was to determine the likelihood of Global Learning Networks becoming part of teaching and learning within a particular elementary school. The simple response is that it is not at all likely at this time. Underlying this simple response are somewhat more complicated factors based on two types of obstacles: one has to do with practical concerns, and the other with pedagogical beliefs. The major obstacles in terms of practicality are time, professional development, and available resources. One of the biggest difficulties teachers face is delivering the curriculum in a meaningful, effective, timely manner. As it is, we are often racing against the calendar to ensure that the skills, knowledge, and attitudes of one grade have been successfully met before sending children on to the next grade. Global Learning Networks may fit within an existing curriculum, but time becomes an even bigger issue than it is under normal circumstances because of the background knowledge and skills that the teacher must possess prior to introducing this to students. Reading the success stories about Global Learning Networks makes the process sound relatively straightforward: certainly rewarding and definitely worthwhile. Not only are children using computer technology, they are making meaningful connections with other children and they are developing tolerance and deeper understanding of issues that affect other cultures. This makes for a nice fit within existing curriculums and within the Quality Learning Document. However, not only must one access such networks through the Internet, one must have the patience and knowledge to get there in the first place.
The actual mechanics involved in the search, implementation, and follow through are much more involved than merely introducing an exciting new topic to students. Each of the respondents in the interviews mentioned the issue of time in the sense that in school, there is never enough of it. This was acknowledged as an obstacle by an administrator also:

"If they [Global Learning Networks] have been investigated, certainly I would support them as far as a way to enhance learning...but I couldn't see a classroom teacher taking that on because of the time...there's so much going on in their busy lives. Unless you threw everything else aside, but we don't have those freedoms here...curriculum mandates, provincial testing...we can't throw away curriculum and do our own thing...I think it would be very difficult for teachers to find time in this area..."

Even if one individual undertook the legwork and presented a package of information to staff members, there would be other problems. These arose in discussion with the above administrator. Not only is there an objection to anything presented in a complete format; the point is also made that it must be bought into at an individual level:

"...I'm not into package learning at all. I have always been resistant to any set program, that's why I struggle with Reading Recovery. It's so prescriptive and I have never been one to support prescriptive learning. I don't feel that a package of any kind fits. It has to be right for the individuals you are working with."

Time is at issue on several levels: we have to complete the curriculum, and if we venture outside the curriculum, we are not likely to get support from the Calgary Board of Education or Alberta Education in the form of information, resources, or professional development. Teachers need time to prepare and gain skills. Connected to, and perhaps overlapping with, time concerns is the issue of resources. Computers are a given; the Internet is a given. These two resources are absolutely necessary to participate in a Global Learning Network. Without training, or money set aside for training, however, it is left to teachers themselves to pay any expenses incurred. Most teachers do not feel that they have the financial resources, the energy, or the time to add more to their workloads. As it is already, we are expected to pay for courses to assist us in achieving the Technology Learner Outcomes from the mandated Information and Communication Technology document.

The second major impediment to Global Learning Networks involves philosophically-based issues. When one of the respondents in the study mentioned world paradigms colliding, what was brought into focus was pedagogy. Progressive pedagogy is what is promoted by Alberta Education and what is acceptable to parents, teachers, and administrators. Transformative pedagogy, on the other hand, involves questioning the status quo, which is often equated with values education. And anything to do with values education is treated with extreme caution.

ISSUES AND IMPLICATIONS

This section will address some of the specific issues connected to computers as they fall within two broad domains: pedagogical considerations (with a focus on values-free versus values-laden education) and philosophical matters (with a focus on world view). From Alberta Education directives, the superficial conclusion to be drawn is that computers are present in classrooms for two main purposes.
The first purpose involves a practical approach (with no explicit values attached): schools exist, in part, to prepare students for entry into society. In the words of the "Information and Communication Technology" document: "...young people need to acquire specific knowledge, skills, and attitudes in order to become self-reliant, responsible, caring and contributing members of society." (p. 2). The second purpose, an extension of the above, is articulated in the "Framework for Technology Integration in Education" pamphlet and reflects philosophical underpinnings (or world view) based on being competitive in the global marketplace (also with no explicit values attached): "Our success in the global economy depends, in part, on the effective integration of technology in education." (p. 1). Although both of the above aspects seem to be one and the same, upon closer inspection it becomes clear that there is a point of departure and that one (pedagogy) derives from the other (world view). It also becomes clear that promises of a values-free education in Alberta may actually be fraught with contradictions.

**Pedagogical Considerations and Contradictions**

"Our work environment has been transformed with computers, fax machines, networks, and an ever-increasing emphasis on information as a commodity and as a resource." (Cordell, 1993, in Elliott, p. 45). The ever increasing emphasis on the acquisition of knowledge in school appears as simply a matter of fact: our education system needs to keep up with our changing society, and computer technology is present to assist students in accessing the latest information. This has seeped into the psyche and conversation of teachers and administrators, and it appears to be a foregone conclusion. Even those who may question the validity of this technology, or are undecided about the benefits, believe that we must have computers in our classrooms:

"I can see the business aspect [from a negative point of view]...but we're doing the children a disservice if we're not using it [computer technology] in school..."

"Whether it's [computer technology] the right way to go -- I don't know that; I don't think anyone knows that. We have to do the things that are appropriate at the time and place. We would be foolish to ignore technology. We have to make sure that we provide children with the best opportunities we can. To ignore technology, I would question whether we are doing an injustice. They deserve the right to move forward."

This reflects a pedagogy that is progressive in nature and free of explicit values-teaching. And, as mentioned previously, we are very cautious when it comes to topics that could be construed as values-laden. This comes indirectly from the "powers that be" and directly from school administrators. During the interview process with an administrator, I asked if the school should be responsible for distributing information surrounding health concerns and computers to parents. The reply:

"That's teaching values...That's the parent's responsibility. We can only do so much at school. We can alert parents, but we can't make them read an article or change their values. We can't teach values. If we start sending home slanted articles in one direction or another, that's propaganda."

This is a prime example of explicit versus implicit value systems. We are not allowed to teach values, but by omission, we are in fact catering to a certain value system. Schools in this way exist to teach tasks.
They do not exist to teach questions about the role of technology in society. We can ask questions about society, perhaps, but it is stated as our society, not the global society. This is part of accepting the status quo, which has an inherent (implicit) value system embedded within it, and that value system translates into a belief in a capitalistic, individualistic, technologically mediated view of society. In other words, the current path we are on is the right one. Admittedly, there are many teachers who are fully aware of global concerns, but this awareness is not derived from the "Information and Communication Technology" document.

When we place this within the context of Global Learning Networks, it gradually becomes apparent that the reason there is not a more direct mandate to utilize these networks is that this type of inquiry would be working at cross purposes with implicit pedagogical approaches to teaching. It is not that we are directed not to participate in these kinds of learning activities (and obviously many teachers have ventured into this domain), but demands to complete the curriculum have virtually foreclosed opportunities for most busy teachers to participate in anything beyond the curriculum requirements. If the Calgary Board of Education or Alberta Education really believed that this should be a priority, funding for professional development would suddenly appear, workshops would be available, and discussions would occur at staff meetings. (This returns us to the comment from a teacher who stated that if Alberta Education valued hanging from our toes, we would have a course in hanging from our toes.)

We may conclude, then, that progressive pedagogy does indeed carry its own inherent (implicit) set of values, and that computers in schools may be something more than just tools. This awareness was present in at least one of the interviewees within the empirical study: "Everything in a classroom is value-based; we don't live in vacuums. The computer itself is a value because someone constructed it — not just for others, but for themselves." We can choose to believe that teaching facts carries no values or hidden agenda, but in doing so we may not be facing the "facts" ourselves. Pedagogy does not stand on its own. A vital element, and one which I believe informs any sort of pedagogy, relates to world view.

**Philosophical Matters and Manoeuvrings**

Along with preparation for the workplace, computers in education are justified by reference to the need for the skills necessary to be competitive in the global marketplace. Information technology is viewed as providing the way to achieve that success, and computers are seen as a viable medium. Computers in the classroom also have a symbolic meaning. They are the embodiment of a mythology to which we unconsciously subscribe. Smith (1999) refers to this as: "...the vision of the good life...premised upon industrialized, technologically advanced civilization..." (p. 8). In essence, this is an extrapolation of the current trajectory. This vision of the future is based upon what Smith (1999) refers to as the science/technological/rationalist paradigm (or world view), which in turn is premised upon anthropocentrism. This anthropocentric world view places humans at the centre of being, with other animals and the world itself as props for the human drama. It has been further elaborated into a centring on the individual, a disengaged model of the human subject (Taylor, 1991).
The world view alluded to above is one that had its beginnings thousands of years ago with the ancient Greeks, who developed the concept of the atom, which provided them with a distinct division between spirit and matter. This eventually gave rise to modern physics, and the separation was further developed by thinkers such as René Descartes (mind-body dualism). Current western thought is dominated by this subject/object split, and arising from that is a mechanistic, fragmented world view. While this type of thinking may serve to promote our progress on an individual — or perhaps societal — plane, it may subtly yet actively be bringing about a halt to our progress as a species.

It may be useful to explore in somewhat more depth the concept of world view. If we were to apply "either/or" thinking to general views of our world, we could perhaps analyze the extreme ends of the continuum (and, of course, we would readily see world views in direct opposition to one another). At one end, descriptive words might include "rational, linear, scientific, left brain, reductionistic, atomistic" -- in other words, that so typical of Western thought. At the opposite end we would apply words such as "holistic, right brain, balance" -- also known as Eastern thought.

Mander (1991) explores this duality. He presents a "Table of Inherent Differences" (pp. 215-219), in which he undertakes a comparison between "technological peoples" and "native peoples". Topics explored include: economics (e.g., private and corporate ownership versus no private ownership of land, water, minerals, etc.; competition versus cooperation); politics and power (e.g., concept of state versus identity as nation; centralization of power versus decentralization of power); sociocultural arrangements and demographics (e.g., conquest of nature versus harmony with nature; humans viewed as superior life form versus humans as an equal part of the web of life; Earth viewed as "dead" versus entire world viewed as alive: plants, animals, people, rocks); architecture (e.g., construction designed to outlast the individual human life versus construction using materials biodegradable within one lifetime; space designed for separation and privacy versus space designed for communal activity); and religion and philosophy (e.g., linear concept of time and deemphasis of the past versus integration of past and present; time measured by machines versus observance of nature; saving/acquiring versus sharing/giving; dead regarded as gone versus dead regarded as present; separation of spirituality from the rest of life versus spirituality integrated with all aspects of daily life).

Many Canadians (and, we hope, Albertans) are very conscious of alternative approaches (such as those outlined above), but overall, we are still part of a strong capitalist system. Taylor (1991), in his exploration of Canadian society, identifies three features, or "malaises", of modern society that we should be aware of as Canadians (and which seem to apply directly to our prevailing attitudes).
These include: individualism (i.e., a focus on the self almost to the point of narcissism, along with a preoccupation with the attainment of material possessions); the primacy of instrumental reason (i.e., the use of economics as a baseline for all activities, along with the dominance of technological solutions even though this may not be an appropriate approach to all of our difficulties); and our general withdrawal from participation in the political arena (i.e., general denial, powerlessness, and a false sense of security in our highly organized political structures). All of these factors ultimately contribute to a fragmented society.

Graham (1993) also explores the issue of our current technological preoccupation, explaining that this mode of thought is and has historically been very deeply embedded and unquestioned within our collective psyches: "It is a value system most of us in the Western world have paid allegiance to...as a result of history, education, and the collective habits and patterns of the technological society in which we currently live...shared not only by individuals but by whole nations and even cuts across ideological barriers." (p. 17). When anything is so deeply entrenched, there is little hope for quick transformation. Graham refers to this as "technological optimism", carrying with it the notion that we happily place machines above our own power as human beings, which furthermore implies an "...unstinting faith in progress." (p. 19).

Technology has served to enhance the disengaged aspect of modern life. Much of our interaction with others is mediated by technology. Children interacting with a computer in isolation are not experiencing the face-to-face communication that is necessary in becoming fully human. Words typed into a computer do not fully convey the meaning implicit in statements the way body language and intonation do.

The scientific-technological world view has given rise to the dominance of instrumental reasoning. Taylor (1991) provides the following definition of instrumental reasoning: "...the kind of rationality we draw on when we calculate the most economical application of means to a given end. Maximum efficiency, the best cost output ratio, is its measure of success." (p. 5). Instrumental reasoning is very much allied with the production model: input (information) -- processing -- output (knowledge). The factory model of education is predicated upon the production model of knowledge. The growth of economism (the belief that economics is the sole arbiter of values in our society) is based on the high regard for instrumental reasoning (Caldwell, 1990). Economics is a science built on finding the most efficient means of production and distribution. The interest of business in education is a fostering of instrumental reasoning at the expense of a more liberal view of education where other values are permitted and discussed. The current argument about values in education focuses on the scope of instrumental reasoning in the curriculum. By not teaching values, the role of instrumental reasoning expands into areas where it does not necessarily lead to wise decisions. Jonathan Swift's "A Modest Proposal" illustrates the absurdity of expanding instrumental reasoning too far. Students interacting with computer software, with its amazing computational abilities, may feel an enhanced sense of power in their ability to control this powerful machine and participate in the real world of adults without fully developing their emotional intelligence.
Global Learning Networks and the explicit transformative pedagogy which they encompass stem from a different world view. The science of ecology is part of the same world view. Humans are no longer at the pinnacle of creation; rather, they are nodes in an interconnected web of being that exists through mutual interaction. We exist both within human communities and within biological communities. Global Learning Networks seek to build links between human communities.

The model of knowledge most appropriate to an ecological world view is based on a growth model, as opposed to the production model of knowledge (Franklin, 1990). "Within a growth model, all that human intervention can do is to discover the best conditions for growth and then try to meet them. In any given environment, the growing thing develops at its own rate." (Franklin, 1990; p. 27). To a certain extent, the growth model has been encompassed by progressive pedagogy, where the outcome is lifelong learners. Unfortunately, the optimum conditions for growth are not met without the transformative aspect of education, where the fundamental values of society are open to question. Instrumental reasoning should not be underestimated (we are still going to need engineers, scientists, and computer analysts); rather, what is needed is the concurrent development of an emotional/moral intelligence.

The "Information and Communication Technology" document does have a clause within the general learner outcomes that states: "Students will demonstrate a moral and ethical approach to the use of technology..." (p. 7). However, this "moral and ethical approach" refers to specific outcomes such as: "Students will demonstrate courtesy...work collaboratively to share resources...recognize and acknowledge the ownership of electronic material...respect the privacy and products of others...comply with copyright legislation," etc. (p. 10). There is, in Division IV (i.e., high school), one carefully worded reference made to the possibility that technology may not be perfect: "Students will demonstrate an understanding of how changes in technology can benefit or harm society..." (p. 10). Even this, however, only requires understanding, not critically evaluating or questioning the broader issues of technology.

Global Learning Networks recognize that we are not merely disengaged thinkers but rather that our identities are developed through dialogue and that we are embodied and interconnected with the physical world. The values implicit in transformative pedagogy are perhaps best stated by Donald Gutstein (1999) when paraphrasing Neil Postman (1996): "In this scenario the purpose of education would be to inculcate in children a value of lifelong caring for the environment, not necessarily lifelong learning for industry." (p. 229). Instrumental reasoning is thus enframed by our value systems, which arise from our embodied, emotional nature and the dialogical construction of our identities (Taylor, 1991). Computers in the classroom, within a progressive pedagogical framework, can be seen as furthering the atomist/instrumental stance by isolating students and further embedding them in the technological society governed by instrumental reasoning. If, however, computers are used within a transformative pedagogy, they can be seen as opening a window to a wider view of the human community as encompassed within an ecological community.
This brings us one step closer to exposing some of the contradictions within society and within education, but the influence of two important and powerful elements is yet to be revealed.

**Business and Political Interests in Education**

The way in which politics and business are connected with our technological focus in Alberta schools may not at first be obvious, but if the time is taken to probe somewhat further into the relationship that exists below the surface amongst business, government, and schools, we discover not only that the mass introduction of computers into our school system does indeed reflect a value system, but that this is part of a larger world view. Using Mander's (1991) "Table of Inherent Differences", we see that we fit very nicely at the linear, technological end of the continuum. Although it is claimed that public education should be values-free, the computer issue, along with certain other "modern" events, may suggest otherwise. The current directives of our provincial government, in conjunction with Alberta Education and the business community, do indeed seem to reflect a heavily laden value system.

To explain this further: our current government obviously promotes a free market, free enterprise system based primarily (some would say solely) on economics. Caldwell (1990) explores in depth the issues that drive our policies and, along with economics, includes the relationship among science, the environmental movement, and politics. He speaks of economism (i.e., placing disproportionate emphasis on economic values while undervaluing all else), scientism (i.e., the oversimplified belief that science will solve all human problems), and technologism (also known as the "technological fix"). He furthermore exposes these applications as resulting from "...linear track thinking that pushes...[certain considerations] too far, to the exclusion of other equally significant factors." (p. 30). These beliefs seem to be embedded within our own Canadian society, and particularly within Alberta. The mentality of those Albertans who possess decision-making power within our education system is summarized nicely by the "Framework for Technology Integration in Education" document. In fairness, however, it must be acknowledged that the Alberta Education collective claims to have requested input from all stakeholders (parents, teachers, employers, etc.). On the other hand, we should perhaps maintain a healthy scepticism. Kachur (1999, in Harrison and Kachur) points out that: "The rhetorical strategy for value judgment defines...standards with reference to a particular definition of what 'Albertans' value. It is based on mass public-opinion polling or selective consensus-building forums with 'stakeholders'; such events and knowledge are thus used to justify particular values. This strategy is politically expedient because the massive number of consultations and polling possibilities create a situation where politicians can pick and choose values as they would the flavour of the month." (p. 69).

Business/school partnerships are a growing phenomenon. There are several general reasons that we have witnessed the entrance of the business community into the sphere of education. The first has to do with job skills and the second has to do with funding. In recent years, we have been informed by various spokespersons from the business community that our education system is not properly preparing students for the workplace.
This has created a good deal of confusion and fear amongst many people, which has in turn created somewhat of a furor over curriculum content and delivery of education. As previously mentioned, a major player in the reform arena has become the Conference Board of Canada, whose interests lie in future prosperity. Theirs is an interest based on economism, and they are able to weave their interests with those of schools: "...the skills listed in this profile [i.e., their "Employability Skills Profile"] are already explicit or implicit in general educational goal statements of the provinces and territories." (p. 5). This draws attention back to the "Quality Learning Document", the "Information and Communication Technology" document, and Kachur's above statement. Both of these documents state that they have received input from a variety of laypeople, and while it may seem as if their opinions stem from reasoned personal judgments, these laypeople may well be reflecting only what society has already influenced them into believing; thus any "input" from stakeholders is merely a recitation of what has already become entrenched.

A second general reason that schools have had greater involvement with business organizations has to do with money. Governmental funding to education has dramatically decreased at both the federal and provincial levels, leaving quite a noticeable gap in classrooms across Canada. Fundraising activities at the school level, once used largely for "extras," are now oriented towards providing more of the "necessities." The success of these fundraisers varies from school to school (obviously stemming from socioeconomic realities in any particular district), but increasing pressure is being placed on communities to fulfill the role of providing basic resources. In some cases, business partners will provide schools with certain tangibles, but financial support in the form of cash is rare. This desire to promote the performance of Canadian organizations appears to represent the feelings of those having a vested interest in productivity rather than a significant interest in participating in the development of a society with a genuine interest in personal, social, and intellectual development. While parents and community members may be inclined to believe that their efforts are contributing to the financial well-being of schools, we must ask at what point this altruism will end. Similar to community efforts to provide for the poor in the form of food banks, shelters, etc., are we not merely absolving the government of its responsibilities to equitably distribute funds to promote the overall wellness of our society?

Barlow and Robertson (1994), too, are suspicious of business interests in schools, stating three specific goals that corporations have in schools: "...to secure the ideological allegiance of young people to a free market world view on issues of the environment, corporate rights and the role of government...to gain market access to the hearts and minds of young consumers and to lucrative contracts in the education industry...[and] to transform schools into training centres producing a workforce suited to the needs of transnational corporations." (p. 79). Some may believe that this is an overstatement (or oversimplification, depending on one's perspective); however, we must recognize that a threat to the development of critical thinking may exist and take measures to guard against the possibility of shaping young minds to become faithful, obedient consumers and employees.
Barlow and Robertson continue their criticism of business involvement in schools: "...business provides speakers and materials...implicitly or explicitly representing free enterprise theory as some sort of natural law of economics. This is desirable from a corporate point of view...[but] it undermines the school's ability to help students to learn to think critically about economic issues and smacks of the kind of indoctrination we...criticize in totalitarian states." (p. 80). While the business community may object to this claim, it does raise doubts when we stop to consider that typically, when a partnership forms between individual schools and businesses, there is a great flurry of activity within the school to acknowledge their support whilst at the same time promoting the virtues of the particular business. Not only is this message delivered directly to students, it is also communicated (either directly or indirectly) to parents. If businesses were truly interested only in providing support to schools and students, one would think that they might participate anonymously. I would dare to guess that this is usually not the case.

In some situations, business partnerships are formed without any financial attachment; rather, the businesses act as consultants concerning the skills they deem necessary for students to develop in order to succeed in the marketplace. Several principles are deeply embedded within this mentality, including the importance of competition (i.e., we must develop those skills faster and better) as well as the importance of schools as business training grounds. If it is ultimately decided that this is indeed the purpose of schooling, we have no argument against any of this; however, this has not been explicitly articulated yet.

Thus, with all of this information at hand, it seems that our technological, business focus is more than a drive for access to another means of learning; rather, it promotes underlying principles that are in direct opposition to the formation of a truly global community spanning cultures, countries, and continents. Obviously the "value" of such global thinking is met with great scepticism (if indeed it is thought about at all by the general populace), and it lacks strong support from politicians and the media (which, we cannot deny, exert a powerful influence over our actions and thoughts). Therefore, any move towards a more holistic approach to life and education at a structural level is likely to occur later rather than sooner.

Closely tied to business interests is politics. Many people undoubtedly believe that politics should not and does not play any role within our education system, but there is some evidence to suggest otherwise. Barlow and Robertson (1994), Mander (1995, 1996), and Robertson (1998) -- among others -- would support the allegation that politics not only plays a role but that it drives the curriculum. For example, curriculum content may appear to be unbiased, but when one stops to ask who decides what should be taught and how it should be done, we come to realize that many sources of information have been written from one perspective (such as the "discovery" of America). Barlow and Robertson (1994) remind us that teaching and curriculum directly or indirectly influence thinking about "...privilege and power through the topics it evades as well as those it addresses." (p. 79).
Within the context of the classroom, we see that we can indeed be shaping children's thinking through the allocation of funds -- i.e., by elevating the educational status of computers (which leads some people to believe that we are promoting the thought that we can resolve global problems through a "technological fix"), we may be sending certain other messages, particularly that school is a place to train for success in the business world. And all the while we may be exposing children to certain health risks -- even though the evidence (i.e., the number of serious illnesses such as cancer) may be regarded as inconclusive at this stage.

In addition to the questionable merit of much of the software, Roszak (1994) presents another interesting caution regarding computers in the classroom: "Introducing students to the computer at an early age, creating the impression that their little exercises in programming and game playing are somehow giving them control over a powerful technology, can be a treacherous deception. It is not teaching them to think in some scientifically sound way; it is persuading them to acquiesce. It is accustoming them to the presence of computers in every walk of life, and thus making them dependent on the machine's supposed necessity and superiority. Under these circumstances, the best approach to computer literacy might be to stress the limitations and abuses of the machine, showing the students how little they need it to develop their autonomous powers of thought." (p. 242).

To reiterate: embedded within our technological focus is a fundamental world view, or paradigm. How or whether this directive links, assists, or benefits other societies on the whole does not seem to be part of our daily deliberations in education. An integral aspect of our quality of interaction — or perhaps feelings of responsibility — towards other countries (hence our planet) depends to a large extent on our own personal and/or societal viewpoint of reality. Attempting to find a definitive definition of reality, however, is almost like seeking the elusive needle in a haystack, for whose reality (or rather, whose perspective of reality) do we utilize when evaluating our roles in world affairs? And whose perspective of reality represents the "best" truth? And to whom do we really owe allegiance in the first place? And at what point are we disempowering ourselves and others in our attempts to apply solutions to these questions? This is, of course, a highly subjective process, and it is very difficult indeed to place the "smaller picture" within the "bigger picture" without acknowledging our own biases.

**Concluding Remarks**

Before this thesis is laid to rest, there are yet a few brief parting comments, quotes, and recommendations to be made. One of the messages that came through repeatedly within the empirical study (and within some of the library research) was that of teacher voice (or lack thereof). One of the biggest obstacles to feelings of success and completion was lack of time. There is widespread recognition that more and more is being required of teachers, not only in delivering curriculum but in dealing with a vast array of childhood social/emotional issues; however, as the workload increases, teachers continue to respond, duty-bound, reaching to almost superhuman limits. This is part of an acceptance of the "order" of things: we are all doing "more with less".
In a number of different ways, everyone who participated in the interviews expressed feelings of powerlessness (in varying degrees): there is a general lack of consultation when new directives are issued; the orders come, and they are followed through. Mazurek (1999, in Harrison and Kachur) explains the historical aspect of change within education: "Unfortunately...changes in the past almost invariably happened without input from teachers except in their resistance to reform at the classroom level...It is astounding but true that teachers historically have been and continue to be almost completely, as the phrase goes, out of the decision-making loop." (p. 18). Why do we continue to accept this as a fact that we have to live with? Mazurek believes that this is due to inadequate teacher preparation: "Students in Bachelor of Education programs across Canada are poorly prepared in the skills of social-economic-political analysis. The focus of teacher education programs today is almost exclusively technical." (p. 19). Undoubtedly, political savvy would be of assistance, but what teacher has the time? Mazurek suggests that the Alberta Teachers' Association could be more involved in this area. This has also arisen in casual conversation amongst teachers in staff rooms. It is highly unlikely that funding or support will come from elsewhere; perhaps the ATA is our only alternative.

Another area of great importance is that which surrounds media literacy, not only at the teacher level but at the student level. Cordell (1993, in Elliott) discusses the importance of this with television, but his words also apply to other forms of electronic media (e.g., computers, video games). He suggests that media literacy should begin as soon as children enter school: "Images are created to convey a message...With images there is no true or false. There is only acceptance or rejection on the basis of whether we like or dislike the image...children must be sensitized to how and under what conditions programs are delivered to them...it can only make them more aware; awareness has to be a precondition for informed citizenship." (pp. 49-50). Cummins and Sayers (1995), emphasizing the importance of the development of critical literacy skills, underscore the significant role that education should play: "If our schools abdicate the cultivation of critical literacy, the next generation will be even more subject than ours to manipulation by those who control the media...the more we succumb to media persuasion and omission of divergent perspectives, the more democracy [will merge] into totalitarianism." (p. 172).

To bring the classroom into the global sphere, Harrison and Kachur (1999) remind us that: "...educational change in Alberta cannot be separated from broader changes occurring throughout the Western industrialized world and, indeed, everywhere under the rubric of globalization...more than ever -- the meaning and purpose of education is being reduced to that of servant to the economy, in particular, the dominant corporate elite." (p. 177). This suggests that the kind of pedagogy that will best serve all of humanity can only be transformative. David Suzuki (1997) writes that: "...[it is the] relationships between human and nonhuman beings [that] still form the core of the important things in life...." (p. 210). What has been alluded to throughout this thesis is the importance of working together and seeking changes that will impact positively in the present as well as in the future.
If we wish to inspire our children to ask questions, we must ask questions. If we wish to engender holistic thinking, we must ourselves demonstrate that we consider the planet to be one interconnected whole. The appearance of the global market has translated into a borderless world and the unprecedented growth of capitalism. As teachers and as responsible citizens, we must hold the future of our children foremost in our thoughts and do our level best to foster the kind of freedom and creativity that will be necessary to see humanity through the next century and beyond.

REFERENCES

Alberta Education. (1997). *Learner Outcomes in Information and Communication Technology ECS to Grade 12: A Framework*.
Alifrangis, C. An integrated learning system in an elementary school: implementation, attitudes, and results. ERIC ED325100.
Armstrong, A., and Casement, C. (1998). *The Child and the Machine*. Toronto: Key Porter Books.
Burbules, N.C., and Callister, T.A., Jr. (1996). Knowledge at the crossroads: some alternative futures of hypertext learning environments. *Educational Theory*, 46, 23-50.
Barlow, M., and Robertson, H.J. (1994). *Class Warfare: The Assault on Canada's Schools*. Toronto: Key Porter Books.
Barker, T., and Torgesen, J. (1995). An evaluation of computer-assisted instruction in phonological awareness with below average readers. *Journal of Educational Computing Research*, 16, 89-105.
Bergin, D.A., Ford, M.E., and Hess, R.D. (1993). Patterns of motivation and social behaviour associated with microcomputer use of young children. *Journal of Educational Psychology*, 85, 437-445.
Birkerts, S. (1994). *The Gutenberg Elegies: The Fate of Reading in an Electronic Age*. New York: Fawcett Columbine.
Brown, W., and Vockell, E.L. (1996). The benefits of using a computer work station for information-intensive classes. *NASSP Bulletin*, 80, 97-104.
Brush, T.A. (1997, Fall). The effects of group composition on achievement and time on task for students completing ILS activities in cooperative pairs. *Journal of Research on Computing in Education*, 30, 2-17.
Caldwell, L. (1990). *Between Two Worlds: Science, the Environmental Movement and Policy Choice*. Cambridge: Cambridge University Press.
Calgary Board of Education. (1998). Quality Learning Document.
Calgary Board of Education. (1997). New Ways of Thinking New Ways of Processing New Tools. Draft Instructional Model Technology Plan. Judi Hunter.
Callister, T., and Dunne, F. (1992). The computer as doorstop: technology as disempowerment. *Phi Delta Kappan*, 74, 324-326.
Capper, J. (1988). *State Educational Reforms in Mathematics, Science, and Computers: A Review of the Literature*. Washington, D.C.: Center for Research into Practice.
Chisholm, I. (1995). Equity and diversity in classroom computer use: a case study. *Journal of Computing in Childhood Education*, 6, 59-80.
Choldin, E. (1993). The practice of global education. *Global Education*, January, 28-30.
Clariana, R. (1994). The effects of an integrated learning system on third graders' mathematics and reading achievement. ERIC ED409181.
Clariana, R. (1996). Differential achievement gains for mathematics computation, concepts, and applications with an integrated learning system. *Journal of Computers in Mathematics and Science Teaching*, 15, 203-215.
Clark, D. (1992). Effective use of computers in the social studies: a review of the literature with implications for educators. ERIC ED370828.
Clements, D., and Meredith, J. (1993). Research on LOGO: effects and efficacy. *Journal of Computing in Childhood Education*, 4, 263-290.
Coghill, J., and Wideman, R. (1996). Technology in the common curriculum. *Orbit*, 27, 7-9.
Collis, B., and Stanchev, I. (1993). Exploring the nature of research in computer-related applications in education. *Special Issue, Computers and Education*, 21, 1-2.
Collis, B. (1991). The evaluation of electronic books. *Educational and Training Technology International*, 28, 355-363.
Commission on Global Governance (1995). *Our Global Neighborhood*. Oxford: Oxford University Press.
Cordell, J. (1993). The perils of an information age. In Elliott, P. (Ed.), *Rethinking the Future* (pp. 45-55). Saskatoon: Fifth House Publishers.
Cronin, C.H., Feldman, P., and Prewitt, G. (1992, Winter). Introducing multimedia into the curriculum: a case study. *Education*, 282-285.
Cross, B.E., and Molnar, A. (1995). Global issues in curriculum development. *Peabody Journal of Education*, 69, 131-140.
Cummins, J., and Sayers, D. (1995). *Brave New Schools: Challenging Cultural Illiteracy Through Global Learning Networks*. New York: St. Martin's Press.
Edwards, C. (1995). The Internet high school: a modest proposal. *NASSP Bulletin*, 79, 67-71.
Ehman, L. (1992). Using computer databases in student problem solving: a study of eight social studies teachers' classrooms. *Theory and Research in Social Education*, 20, 179-206.
Elkind, D. (1994). *Ties That Stress: The New Family Imbalance*. Cambridge: Harvard University Press.
Employability Skills Profile (1992). Conference Board of Canada.
Ennis, D. (1993). A transfer of database skills from the classroom to the real world. *Computers in the Schools*, 9, 55-63.
Eraut, M. (1995). Groupwork with computers in British primary schools. *Journal of Educational Computing Research*, 13, 61-87.
Evans-Andris, M. (1995). Barriers to computer integration: microinteraction among computer coordinators and classroom teachers in elementary schools. *Journal of Research on Computing in Education*, 28, 29-45.
Farah, B.D. (1996). Information-literacy: retooling evaluation skills in the electronic information environment. *Journal of Educational Technology Systems*, 24, 127-133.
Fletcher-Flinn, C., and Gravatt, B. (1995). The efficacy of computer assisted instruction (CAI): a meta-analysis. *Journal of Educational Computing Research*, 12, 219-242.
Fletcher-Flinn, C., and Suddendorf, T. (1996). Do computers affect "the mind"? *Journal of Educational Computing Research*, 15, 97-112.
Franklin, U. (1990). *The Real World of Technology*. Concord: Anansi.
Gilman, D. (1991). A Comprehensive Study of the Effects of an Integrated Learning System. A Report Prepared for the Metropolitan School District of Mount Vernon, Indiana. ERIC ED409181.
Graham, A. (1993). The technological either/or: technological optimism or techno-ecological realism? In Elliott, P. (Ed.), *Rethinking the Future* (pp. 16-29). Saskatoon: Fifth House Publishers.
Graham, C. (1995). Layers of learning communities: orchestrating a districtwide technology implementation. The central office internal facilitator's role in implementation of an integrated learning system. Paper presented at the Annual Meeting of the American Educational Research Association (San Francisco, CA, April 18-22).
Gutstein, D. (1999). *E.Con*. Toronto: Stoddart.
Harrison, T., and Kachur, J. (Eds.) (1999). *Contested Classrooms: Education, Globalization, and Democracy in Alberta*. Edmonton: The University of Alberta Press and Parkland Institute.
Hay, L. (1997). Tailor-made instructional materials using computer multimedia technology. *Using Technology in the Classroom*, 61-68.
Healy, J. (1998). *Failure to Connect: How Computers Affect Our Children's Minds – for Better and Worse*. New York: Simon and Schuster.
Herard, D. (1996). *Framework for Technology Integration in Education: A Report of the MLA Implementation Team On Business Involvement and Technology Integration*. Alberta Education publication.
Hiltz, S., Johnson, K., and Turoff, M. (1986). Experiments in group decision making: communication process and outcome in face-to-face versus computerized conferences. *Human Communication Research*, 13, 225-252.
Hornby, P.A., and Anderson, M.D. (1996). Putting the student in the driver's seat: a learner-centred, self-paced, computer-managed, introductory psychology course. *Journal of Educational Technology Systems*, 24, 173-179.
*In Focus* (1996, March). Alberta Education publication.
Jakobsdottir, S., Krey, C., and Sales, G.C. (1994). Computer graphics: preferences by gender in grades 2, 4, and 6. *Journal of Educational Research*, 88, 91-99.
Jensen, R. (Ed.) (1993). *Research Ideas for the Classroom: Early Childhood Mathematics*. ERIC ED404142.
Jonson, H. (1996). *Framework for Technology Integration in Education: A Report of the MLA Implementation Team On Business Involvement and Technology Integration*. Alberta Education publication.
Kang, S.H., and Dennis, J.R. (1995). The effects of computer-enhanced vocabulary lessons on achievement of ESL grade-school children. *Computers in the Schools*, 3, 25-35.
Kinnear, A. Introduction of microcomputers: a case study of patterns of use and children's perceptions. *Journal of Educational Computing Research*, 13, 27-40.
Knight, B., and Knight, C. (1995). Cognitive theory and the use of computers in the primary classroom. *British Journal of Educational Technology*, 26, 141-148.
Kolich, E.M. (1991). Effects of computer assisted vocabulary training on word knowledge. *Journal of Educational Research*, 84, 177-182.
Langone, J., Willis, C., Malone, M., Clees, T., and Koorland, M. (1995). Effects of computer-based word processing versus paper/pencil activities on the paragraph construction of elementary students with learning disabilities. *Journal of Research on Computing in Education*, 27, 171-183.
Lee, Y., and Lehman, J. (1993). Instructional cueing in hypermedia: a study with active and passive learners. *Journal of Educational Multimedia and Hypermedia*, 2, 25-43.
Locke, L., Spirduso, W., and Silverman, S. (1993). *Proposals That Work*. Newbury Park: Sage Publications.
MacInnes, J., and Kissoon-Singh, S. (1996). Integrating computer technology into instruction. *Orbit*, 27, 30-33.
Mahmood, M., Mo, A., and Hirt, S. (1995). Reasons schools are not efficiently using information technology: a case study. *Journal of End-User Computing*, 7, 22-28.
Mander, J. (1991). *In the Absence of the Sacred: The Failure of Technology and the Survival of the Indian Nations*. San Francisco: Sierra Club Books.
Marshall, C., and Rossman, G. (1995). *Designing Qualitative Research*. Thousand Oaks: Sage Publications.
Mathew, K. (1997). A comparison of the influence of interactive CD-ROM storybooks and traditional print storybooks on reading comprehension. *Journal of Research on Computing in Education*, 29, 263-275.
Mayer, R.E., and Sims, V.K. (1994). For whom is a picture worth a thousand words? Extensions of a dual-coding theory of multimedia learning. *Journal of Educational Psychology*, 86, 389-401.
McNeil, B., and Nelson, K. (1991). Meta-analysis of interactive video instruction: a 10 year review of achievement effects. *Journal of Computer-Based Instruction*, 18, 1-6.
Means, B., and Olson, K. (1995). Technology's role within constructivist classrooms. Paper presented at the Annual Meeting of the American Educational Research Association (San Francisco, CA, April 18-22).
Miall, A.D. (1995, December). How do you surf a swamp? *CAUT Bulletin*, p. 10.
Miller, H. (1997). The New York City public schools integrated learning systems project: evaluation and meta-evaluation. *International Journal of Educational Research*, 27, 91-183.
Nastasi, B., and Clements, D. (1993). Motivational and social outcomes of cooperative computer education environments. *Journal of Computing in Childhood Education*, 4, 15-43.
Navassardian, S., Marinov, M., and Pavlova, R. (1995). Investigations on the quality and efficiency of instructive computer-aided training. *British Journal of Educational Technology*, 26, 109-121.
*New Internationalist* (1992). The discovery of poverty. June, 7-9.
Nichols, L. (1996). Pencil and paper versus word processing: a comparative study of creative writing in the elementary school. *Journal of Research on Computing in Education*, 29, 159-166.
Nicol, J.M., and Butler, S. (1996). Promise and fulfillment: the use of computers in B.C. elementary schools. *Education Canada*, 36, 22-28.
Norton, P., and Resta, V. (1986). Investigating the impact of computer instruction on elementary students' reading achievement. *Educational Technology*, 26, 35-41.
Oliver, R., and Oliver, H. (1996). Information access and retrieval with hypermedia information systems. *British Journal of Educational Technology*, 27, 33-44.
Oppenheimer, T. (1997). The computer delusion. *The Atlantic Monthly*, 280, 45-63.
Owston, R., and Wideman, H. (1997). Word processors and children's writing in a high computer-access setting. *Journal of Research on Computing in Education*, 30, 202-220.
Palmer, S. (1993). Does computer use put children's vision at risk? *Journal of Research and Development in Education*, 26, 59-65.
Papert, S. (1980). *Mindstorms: Children, Computers, and Powerful Ideas*. New York: Basic Books.
Peck, J., and Hughes, S. (1997). So much success...from a first-grade database project! *Computers in the Schools*, 13, 109-116.
Pence, H. (1995-1996). A report from the barricades of the multimedia revolution. *Journal of Educational Technology Systems*, 24, 159-164.
Perlmutter, M., Behrend, S.D., Kuo, F., and Muller, A. (1989). Social influences on children's problem solving. *Developmental Psychology*, 25, 744-754.
Post, P. (1987). The effect of field independence/field dependence on computer-assisted instruction achievement. *Journal of Industrial Teacher Education*, 25, 60-67.
Repman, J. (1993). Collaborative, computer-based learning: cognitive and affective outcomes. *Journal of Educational Computing Research*, 9, 149-163.
Resnick, M. (1998). Technologies for lifelong kindergarten. *Educational Technology Research and Development*, 46, 43-55.
Rice, G.E. (1994). Examining constructs in reading comprehension using two presentation modes: paper vs. computer. *Journal of Educational Computing Research*, 11, 153-178.
Richey, E. Urban success stories. *Educational Leadership*, 25, 55-57.
Riddle, E. (1995). *Communication Through Multimedia in an Elementary Classroom*. ERIC ED384346.
Rieber, L.P. (1990). Using computer animated graphics in science instruction with children. *Journal of Educational Psychology*, 82, 135-140.
Roberts, G.I., and Samuels, M.T. (1993). Handwriting remediation: a comparison of computer-based and traditional approaches. *Journal of Educational Research*, 87, 39-46.
Robertson, H.J. (1998). *No More Teachers, No More Books*. Toronto: McClelland & Stewart Inc.
Ross, E.W. (1991). Microcomputer use in secondary social studies classrooms. *Journal of Educational Research*, 85, 39-46.
Roszak, T. (1994). *The Cult of Information: A Neo-Luddite Treatise on High-Tech, Artificial Intelligence, and the True Art of Thinking*. Berkeley: University of California Press.
Rushkoff, D. (1996). *Playing the Future: How Kids' Culture Can Teach Us to Thrive in an Age of Chaos*. New York: HarperCollins.
Ryser, G.R., Beeler, J.E., and McKenzie, C.M. (1995). Effects of a computer-supported intentional learning environment (CSILE) on students' self-concept, self-regulatory behaviour, and critical thinking ability. *Journal of Educational Computing Research*, 13, 375-385.
Sanders, B. (1994). *A is for Ox*. New York: Vintage Books.
Schumacker, R.E., Young, J.I., and Bembry, K.L. (1995). Math attitudes and achievement of Algebra I students: a comparative study of computer-assisted and traditional lecture methods of instruction. *Computers in the Schools*, 11, 27-33.
Seltzer, R. (1995). Picture power. *Internet World*, 6, 84-85.
Shade, D., Nida, R., Lipinski, J., and Watson, J. (1986). Microcomputers and preschoolers working together in a classroom setting. *Computers in the Schools*, 3, 53-61.
Sharon, D. (1995). Teaching with video programs: from closed to open use. *Canadian Journal of Educational Communication*, 24, 185-207.
Shenouda, W., and Wolfe, V. (1996). Integrating computer assisted instruction with the teaching of language. *Journal of Educational Technology Systems*, 24, 189-194.
Shiah, R. (1995). The effects of computer-assisted instruction on the mathematical problem solving of students with learning disabilities. *Exceptionality*, 5, 131-161.
Sieglinde, B. (1993). Is CAI spelling drill more effective than traditional practice? *Journal of the Computer-Using Educators of British Columbia*, 12, 21-26.
Sivin-Kachala, J., and Bialo, E. (1995). *Report on the Effectiveness of Technology in Schools 1990-1994*. Washington, D.C.: Software Publishers Association.
Smith, G. (1994). A map for the global voyage. *Global Education*, June.
Smith, T. (1998). *The Myth of Green Marketing: Tending our Goats at the Edge of Apocalypse*. Toronto: University of Toronto Press.
Snider, R.C. (1992). The machine in the classroom. *Phi Delta Kappan*, 4, 316-324.
Stoll, C. (1995). *Silicon Snake Oil*. New York: Doubleday.
Talbott, S.L. (1995). *The Future Does Not Compute: Transcending the Machines in Our Midst*. Sebastopol: O'Reilly & Associates, Inc.
Taylor, C. (1991). *The Malaise of Modernity*. Concord: Anansi.
Tergan, S. (1997). Conceptual and methodological shortcomings in hypertext/hypermedia design and research. *Journal of Educational Computing Research*, 16, 209-235.
Tierney, R., Kieffer, R., Whalin, K., Desai, L.E., Moss, A., Harris, E., and Hopper, J. (1999). Assessing the impact of hypertext on learners' architecture of literacy learning spaces in different disciplines: follow-up studies. <http://www.readingonline.org/research/impact/index.html#Sci>
Tierney, R., Kieffer, R., Stowell, L., Desai, L., Whalin, K., and Moss, A. (1992). Computer acquisition: a longitudinal study of the influence of high computer access on students' thinking, learning, and interaction. *Apple Classrooms of Tomorrow Report No. 16*. Cupertino, CA: Apple Computer.
Van Dusen, L., Lani, M., and Worthen, B. (1995). Can integrated instructional technology transform the classroom? *Educational Leadership*, 53, 28-33.
Vygotsky, L. (1978). *Mind in Society*. Cambridge: Harvard University Press.
Walker, S. (1996, January 11). *Calgary Herald*, p. A18.
Walsh, T.E. (1994). A literature review. *Journal of Research on Computing in Education*, 26, 322-333.
Wang, Y., and Garigliano, R. (1993). Empirical studies and intelligent language tutoring. *Seventh International PEG Conference* (ERIC ABS. 83-09597).
Whalley, P. (1995). Imagining with multimedia. *British Journal of Educational Technology*, 26, 190-204.
White, C. (1987). Developing information-processing skills through structured activities with a computerized file-management program. *Journal of Educational Computing Research*, 3, 355-375.
White, J. (1997, April 6). *Calgary Herald*, p. A9.
Wiburg, K. (1995, February). Integrated learning systems: what does the research say? *The Computing Teacher*, 7-10.
Wiebe, J.H., and Martin, N.J. (1994). The impact of a computer-based adventure game on achievement and attitudes in geography. *Journal of Computing in Childhood Education*, 5, 61-71.
Wiersma, W. (1995). *Research Methods in Education*. Boston: Allyn and Bacon.
Wild, M. (1996). Technology refusal: rationalizing the failure of student and beginning teachers to use computers. *British Journal of Educational Technology*, 27, 134-143.
Wills, S. (1994). Beyond browsing: making interactive multimedia interactive. In *Rethinking the Role of Education in the Technological Age*, EdTech94 Conference, Singapore, 58-68.
Wilson, T.F. (1995). High tech high: cruising on the internet. *NASSP Bulletin*, 79, 84-89.
Yager, R., Blunck, S., and Nelson, E. (1993). The use of computers to enhance science instruction in pre-school and K-3 classrooms. *Journal of Computing in Childhood Education*, 4, 125-136.
Yang, Y. (1991-1992). The effects of media on motivation and content recall: comparison of computer and print based instruction. *Journal of Educational Technology Systems*, 20, 95-105.
Zachariah, M. (1992). Linking multicultural and development education to promote respect for persons and cultures: a Canadian perspective. In Lynch, J., Modgil, C., and Modgil, S. (Eds.), *Cultural Diversity and the Schools, Volume Four: Human Rights, Education and Global Responsibilities*. London: The Falmer Press.

APPENDIX I INTERVIEW FORMAT

1. In what ways have you and your students been using computers in the classroom?
2. What do you see as the positive and negative impacts of having computers in school? How are they helpful or detrimental in teaching the curriculum?
3. Please discuss the kinds of professional development activities that you have been involved in focusing on computer use. In what ways could this be improved?
4. A growing number of people are cautioning against the mass introduction and use of computers in schools; for example, that:
- computers detract from intuitive development, social interactions and cooperative learning situations
- computers carry with them a hidden political message (i.e., "quick fix" mentality; competition vs. cooperation in the global marketplace)
- objectionable material is immediately available to children through the Internet and World Wide Web
Do you agree? If so, elaborate. If not, why not?
5. What do you feel are the most pressing issues surrounding school reform in relation to the extensive use of computers and the Internet?
6. Describe your understanding of Global Learning Networks and their inherent pedagogy.
7. What are your thoughts concerning if, or when (i.e., at what grade level), social justice issues (e.g., human rights violations, child labour) should be raised in classrooms?
8. In light of global concerns (including such issues as environmental protection, human rights, equity, violence, democracy), how do you see Global Learning Networks as holding potential to enhance understanding of cultural diversity, social justice, etc.?
9. Have you considered using the Internet/Web to investigate global issues with your students? Now? Next year? Ever? Reasons?
10. Would you personally use Global Learning Networks in your classroom? How much extra time do you anticipate this would take (accumulating background information, preparing students, etc.)? Could you realistically handle such a project given your present workload?
11. If a package of information concerning Global Learning Networks were available to you, would you be likely to undertake a Global Learning project in your classroom?
12. How would you describe the school culture within this school? Supportive of, or against, extensive use of technology?

APPENDIX II NOTICE OF CONSENT FORMS

Consent Form

This confirms the consent of ____________________ to participate in the research project titled, "Investigating Computers in the Classroom: Focusing on Transformative Pedagogy Through Global Learning Networks", conducted by Beverly Mathison under the supervision of Dr. Mathew Zachariah, in the Department of Education, University of Calgary. The purpose of the study is exploratory: to discover whether the teaching staff and administration within one Calgary school are receptive to introducing global issues (through employing a transformative pedagogy) into present program/curriculum plans. I have been informed, to an appropriate level of understanding, about the purpose and methodology of this research project, the nature of my involvement, and any possible risks to which I may be exposed by virtue of my participation.
I agree to participate in this project by doing the following:
- responding to interview questions
- volunteering approximately one hour of my time

I understand and agree that:
- My participation is voluntary and I have the right to withdraw from this research at any time without penalty
- The researcher has a corresponding right to terminate my participation in this research at any time
- Participation or non-participation will have no effect on my position within my agency
- All data will be kept in a secure place inaccessible to others
- I will be given the opportunity to listen to the audio tapes, and before any public presentation, I will be given the opportunity to correct, change, or add what I think is important
- Disposition of data will be carried out in the following manner:
  - audio tapes will be erased upon project completion
- Confidentiality will be assured in the following manner:
  - all information derived from participants will be kept in a secure place inaccessible to others
  - no names will be used in the written summary
- Anonymity will be assured in the following manner:
  - participant responses will be presented in aggregate form (i.e., individual utterances will not be quoted with participants' names)
- Data will be:
  - coded in such a way that I will not be identified
  - numbers will replace names/school sites, etc.
- Data will be presented in the following form:
  - personnel referred to by position only
- The risks involved in participating in this study include:
  - no greater risks than those ordinarily incurred in daily life, classroom life, etc.
  - being identified through comments
- Steps taken to reduce risks (such as psychological/emotional stress) include:
  - encouragement to withdraw from the study

I understand that it may be desirable, for comparative purposes, to repeat this research on another site or to use the findings from this research for comparison with related existing research. I understand that any subsequent use of the findings from this research will conform to the above parameters. I understand that the results of this research will be used for publication, presentation to scientific groups, etc. Any concerns associated with this research should be directed to Barbea Flath, Principal, Hawkwood School or Dr. Janelle Holmes, Supervisor, Accountability Services, EMAIL: Jholmes, Fax 777-8860, telephone 294-6325. I do not object to this additional use of the research data, and give Beverly Mathison permission to present findings at conferences or publish results on the basis of this work while protecting my anonymity. A duplicate copy of the signed consent form is being provided for my records.

I have read the consent form and I understand the nature of my involvement. I agree to participate within the above stated parameters.

Name ________________________
Signature of participant ________________________
Date ________________________

APPENDIX III INFORMATION AND COMMUNICATION TECHNOLOGY DOCUMENT

These 2Learn.ca resource pages can assist teachers in referencing the Alberta Learning database of Information and Communication Technology, Kindergarten to Grade 12: An Interim Program of Studies Outcomes while using or building 2Learn.ca resources. The Outcomes are organized here in a printable format, by Category and by Division. The Information and Communication Technology Program of Studies is intended to be integrated within all subject areas and provide learners with the necessary knowledge, skills, and attitudes to use technology effectively, efficiently, and ethically.
Activities, projects, and problems that replicate real-life situations through the use of technology tools provide rich and authentic learning opportunities for all students while meeting curricular goals.

**Outcomes by Category**
- Foundational Operations, Knowledge, and Concepts
- Processes for Productivity
- Communicating, Inquiring, Decision Making, and Problem Solving

**Outcomes by Division**
- Division 1
- Division 2
- Division 3
- Division 4

Original Source: Alberta Learning website http://ednet.edc.gov.ab.ca/techoutcomes/

F1. Students will demonstrate an understanding of the nature of technology.

Division 1
1.1 identify techniques and tools for communicating, storing, retrieving and selecting information
1.2 apply terminology appropriate to the technologies being used at this division level
1.3 demonstrate an understanding that the user manages and controls the outcomes of technology

Division 2
2.1 apply terminology appropriate to the technologies being used at this division level
2.2 identify and apply techniques and tools for communicating, storing, retrieving and selecting information
2.3 explain the advantages and limitations of using computers to store, organize, retrieve and select information
2.4 recognize the potential for human error when using technology

Division 3
3.1 demonstrate an understanding that information can be transmitted through a variety of media
3.2 explain the concept of software and hardware compatibility
3.3 apply terminology appropriate to the technology being used at this division level
3.4 demonstrate an understanding that digital technology follows a logical order of operations
3.5 explain the difference between digital and analog data on communication systems
3.6 explain how the need for global communication will affect technology around the world
3.7 demonstrate the ability to troubleshoot technical problems
3.8 demonstrate an understanding that technology is a process or technique applied to solve problems of human activity

Division 4
4.1 assess the strengths and weaknesses of computer simulations in relation to real-world problems
4.2 solve scientific and mathematical problems by selecting appropriate technology to perform experiments and calculations
4.3 apply terminology appropriate to technology in all forms of communication
4.4 demonstrate an understanding of the general concepts of computer programming and the algorithms that enable technological devices to perform operations and solve problems

F2. Students will understand the role of technology as it applies to self, work and society.
Division 1
1.1 identify technologies used in everyday life
1.2 describe particular technologies being used for specific purposes

Division 2
2.1 identify how technological developments influence his or her life
2.2 identify the role technology plays in a variety of careers
2.3 examine the environmental issues related to the use of technology
2.4 assess the personal significance of having limitless access to information provided by communication networks such as the Internet
2.5 describe, using examples, how communication and information networks such as the telephone and the Internet create a global community

Division 3
3.1 describe the impact of communication technologies on past, present and future workplaces, lifestyles and the environment
3.2 identify potential technology-related career paths
3.3 identify the cultural impact of global communication
3.4 evaluate the driving forces behind various technological inventions
3.5 make inferences regarding future trends in the development and impact of communication technologies
3.6 explain ways in which technology can assist in the monitoring of local and global environmental conditions
3.7 analyze and assess the impact on society of having limitless access to information
3.8 identify the manner in which telecommunications technology affects time and distance

Division 4
4.1 use technology outside formal classroom settings
4.2 analyze how technological innovations and creativity affect the economy
4.3 demonstrate an understanding of new and emerging communication systems
4.4 evaluate possible potential for emerging technologies
4.5 demonstrate conservation measures when using technology
4.6 demonstrate the consumer knowledge necessary to make purchases such as a computer, modem, VCR and video camera
4.7 use current, reliable information sources from around the world
4.8 analyze and assess the impact of technology on the global community

F3. Students will demonstrate a moral and ethical approach to the use of technology.
Division 1
1.1 demonstrate courtesy and follow classroom procedures when making appropriate use of computer technologies
1.2 work collaboratively to share limited resources
1.3 demonstrate appropriate care of technology equipment
1.4 recognize and acknowledge the ownership of electronic material
1.5 use appropriate communication etiquette

Division 2
2.1 comply with the acceptable use policy of the school and district for Internet and networked services, including software licensing agreements
2.2 work collaboratively to share limited resources
2.3 use appropriate communication language and etiquette
2.4 document sources obtained electronically such as Web site addresses
2.5 respect the privacy and products of others
2.6 use electronic networks in an ethical manner
2.7 comply with copyright legislation

Division 3
3.1 use time and resources on the network wisely
3.2 explain the issues involved in balancing the right to access information with the right to personal privacy
3.3 understand the need for copyright legislation
3.4 cite sources when using copyright and/or public domain material
3.5 download and transmit only materials that comply with the established network use policies and practices
3.6 model and assume personal responsibility for ethical behaviour and attitudes and acceptable use of information technologies and sources in local and global contexts

Division 4
4.1 demonstrate an understanding of how changes in technology can benefit or harm society
4.2 record relevant data for acknowledging sources of information and cite sources correctly
4.3 respect ownership and integrity of information

F4. Students will become discerning consumers of mass media and electronic information.

Division 1
1.1 compare similar types of information from two different electronic sources

Division 2
2.1 recognize that graphics, video and sound enhance communication
2.2 describe how the use of various texts and graphics can alter perception
2.3 discuss how technology can be used to create special effects and/or to manipulate intent through the use of images and sound

Division 3
3.1 identify aspects of style in a presentation
3.2 understand the nature of various media and how they are consciously used to influence an audience
3.3 identify specific techniques used by the media to elicit particular responses from an audience
3.4 recognize that the ability of technology to manipulate images and sound can alter the meaning of a communication

Division 4
4.1 discriminate between style and content in a presentation
4.2 evaluate the influence and results of digital manipulation on our perceptions
4.3 identify and analyze a variety of factors that affect the authenticity of information derived from mass media and electronic communication

F5. Students will practice the concepts of ergonomics and safety when using technology.

Division 1
1.1 demonstrate proper posture when using a computer
1.2 demonstrate safe behaviours when using technology

Division 2
2.1 demonstrate the application of ergonomics to promote personal health and well-being
2.2 identify and apply safety procedures required for the technology being used

Division 3
3.1 identify risks to health and safety that result from improper use of technology
3.2 identify and apply safety procedures required for the technology being used

Division 4
4.1 assess new physical environments with respect to ergonomics
4.2 identify safety regulations specific to the technology being used
F6. Students will demonstrate a basic understanding of the operating skills required in a variety of technologies.

Division 1
1.1 perform basic computer operations (which may vary by environment), including powering up, inserting disks, moving the cursor, clicking on an icon, using pull-down menus, executing programs, saving files, retrieving files, printing, ejecting disks and powering down
1.2 use keyboarding techniques for the home row, enter, space bar, tab, backspace, delete and insertion-point arrow keys
1.3 operate basic audio and video equipment, including inserting, playing, recording and ejecting media

Division 2
2.1 power up and power down various technologies and peripherals correctly
2.2 use and organize files and directories
2.3 use peripherals, including printers and scanners
2.4 use appropriate keyboarding techniques for the alphabetic and punctuation keys

Division 3
3.1 connect and use audio, video and digital equipment
3.2 perform routine data maintenance and management of personal files
3.3 demonstrate proficiency in uploading and downloading text, image, audio and video files
3.4 demonstrate the ability to electronically control devices
3.5 describe the steps involved in loading software
3.6 identify and apply safety procedures, including anti-virus scans and virus checks, to maintain data integrity

Division 4
4.1 continue to demonstrate the learner outcomes addressed within the previous divisions. Students interested in pursuing advanced study in areas such as electronics, programming, CADD, robotics and other industrial applications of technology will find opportunities in CTS modules.

P. Processes for Productivity

P1. Students will compose, revise and edit text.

Division 1
1.1 create original text, using word processing software, to communicate and demonstrate understanding of forms and techniques
1.2 edit complete sentences, using such features of a word processor as cut, copy and paste

Division 2
2.1 create and revise original text to communicate and demonstrate understanding of forms and techniques
2.2 edit and format text to clarify and enhance meaning, using such word-processing features as the thesaurus, find/change, text alignment, font size and style
2.3 convert digital text files by opening and saving them as different file types

Division 3
3.1 design a document, using style sheets and with attention to page layout, that incorporates advanced word-processing techniques, including: headers, footers, margins, columns, table of contents, bibliography and index
3.2 use advanced menu features within a word processor to accomplish a task; for example, insert a table, graph or text from another document
3.3 revise text documents based on feedback from others
3.4 use appropriate communication technology to elicit feedback from others

Division 4
4.1 continue to demonstrate the learner outcomes achieved in prior grades and course subjects.

P2. Students will organize and manipulate data.
Division 1
1.1 read information from a prepared database

Division 2
2.1 enter and manipulate data by using such tools as a spreadsheet or database for a specific purpose
2.2 display data electronically through graphs and charts

Division 3
3.1 design, create and modify a database for a specific purpose
3.2 design, create and modify a spreadsheet for a specific purpose, using functions such as: SUM, PRODUCT, QUOTIENT, and AVERAGE
3.3 use a variety of technological graphing tools to draw graphs for data involving one or two variables
3.4 use a scientific calculator or a computer to solve problems involving rational numbers

Division 4
4.1 manipulate and present data through the selection of appropriate tools, such as scientific instrumentation, calculators, databases and/or spreadsheets
4.2 use programming tools such as macros, scripts and applets to modify or control a technological device

P3. Students will communicate through multimedia.

Division 1
1.1 access images, such as clip art, to support communication
1.2 create visual images by using such tools as paint and draw programs for particular audiences and purposes
1.3 access sound clips or recorded voice to support communication

Division 2
2.1 create a multimedia presentation, incorporating features such as visual images (clip art, video clips), sounds (live recordings, sound clips) and animated images, appropriate to a variety of audiences and purposes
2.2 access available databases for images to support communication

Division 3
3.1 create multimedia presentations that take into account audiences of diverse size, age, gender, ethnicity and geographic location
3.2 create multimedia presentations that incorporate meaningful graphics, audio, video and text gathered from remote sources

Division 4
4.1 select and use, independently, multimedia capabilities for presentations in various subject areas
4.2 support communication with appropriate images, sounds and music
4.3 apply general principles of graphic layout and design to a document in process

P4. Students will integrate various applications.

Division 1
1.1 integrate text and graphics to form a meaningful message
1.2 balance text and graphics for visual effect

Division 2
2.1 integrate a spreadsheet, or graphs generated by a spreadsheet, into a text document
2.2 vary font style and size, and placement of text and graphics, in order to create a certain visual effect

Division 3
3.1 integrate information from a database into a text document
3.2 integrate database reports into a text document
3.3 emphasize information, using placement and colour

Division 4
4.1 integrate a variety of visual and audio information into a document to create a message targeted for a specific audience
4.2 apply principles of graphic design to enhance meaning and audience appeal
4.3 use integrated software effectively and efficiently to reproduce work that incorporates data, graphics and text

P5. Students will navigate and create hyperlinked resources.
Division 1
1.1 navigate within a document, compact disc or other software program that contains links
1.2 access hyperlinked sites on an intranet or the Internet

Division 2
2.1 create and navigate a multiple-link document
2.2 navigate through a document that contains links to locate, copy and then paste data in a new file
2.3 navigate the Internet with appropriate software

Division 3
3.1 create a multiple-link web page
3.2 demonstrate proficient use of various information retrieval technologies

Division 4
4.1 create multiple-link documents appropriate to the content of a particular topic
4.2 post multiple-link pages on the World Wide Web or on a local or wide area network

P6. Students will use communication technology to interact with others.

Division 1
1.1 compose a message that can be sent through communication technology
1.2 communicate electronically with people outside the classroom

Division 2
2.1 select and use the technology appropriate to a given communication situation

Division 3
3.1 communicate with a targeted audience, within a controlled environment, by using communication technologies such as newsgroups and web browsers
3.2 demonstrate proficiency in accessing local area network, wide area network and Internet services, including uploading and downloading text, image, audio and video files

Division 4
4.1 select and use the appropriate technologies to communicate effectively with a targeted audience

C1. Students will access and use information from a variety of technologies.

Division 1
1.1 access and retrieve appropriate information from electronic sources for a specific inquiry
1.2 process information from more than one source to retell what has been discovered

Division 2
2.1 access and retrieve appropriate information from the Internet by using a specific search path or given uniform resource locators (URLs)
2.2 organize information gathered from the Internet or an electronic source by selecting and recording the data in logical files or categories

Division 3
3.1 plan and conduct a search, using a wide variety of electronic sources
3.2 refine searches to limit sources to a manageable number
3.3 access and operate multimedia applications and technologies from stand-alone and online sources
3.4 access and retrieve information through the electronic network
3.5 analyze and synthesize information to create a product

Division 4
4.1 plan and perform complex searches using more than one electronic source
4.2 select information from appropriate sources, including primary and secondary sources
4.3 evaluate and explain the advantages and disadvantages of various search strategies

C2. Students will seek alternative viewpoints using information technologies.

Division 1
(none currently)

Division 2
2.1 seek responses to inquiries from various authorities through electronic media

Division 3
3.1 access diverse viewpoints on particular topics by using appropriate technologies
3.2 assemble and organize different viewpoints in order to assess their validity
3.3 use information technology to find facts that support or refute diverse viewpoints

Division 4
4.1 consult a wide variety of sources that reflect varied viewpoints on particular topics
4.2 evaluate the validity of gathered viewpoints against other sources

C3. Students will critically assess information accessed through the use of a variety of technologies.
Division 1
1.1 compare and contrast information from similar types of electronic sources

Division 2
2.1 identify and distinguish points of view expressed in electronic sources on a particular topic
2.2 recognize that information serves different purposes and that data from electronic sources may need to be verified to determine accuracy or relevance for the purpose used

Division 3
3.1 evaluate the authority and reliability of electronic sources
3.2 evaluate the relevance of electronically accessed information to a particular topic

Division 4
4.1 assess the authority, reliability and validity of electronically accessed information
4.2 demonstrate discriminatory selection of electronically accessed information that is relevant to a particular topic

C4. Students will use organizational processes and tools to manage inquiry.

Division 1
1.1 follow a plan to complete an inquiry
1.2 formulate new questions as research progresses
1.3 organize information from more than one source

Division 2
2.1 design and follow a plan, including a schedule, to be used during an inquiry process, and make revisions to the plan as necessary
2.2 organize information, using such tools as a database, spreadsheet or electronic webbing
2.3 reflect on and describe the processes involved in completing a project

Division 3
3.1 create a plan for an inquiry that includes consideration of time management
3.2 develop a process to manage volumes of information that can be available through electronic sources
3.3 demonstrate the advanced search skills necessary to limit the number of hits desired for online and offline databases; for example, the use of "and" or "or" between search topics and the choice of appropriate search engines for the topic

Division 4
4.1 use calendars, time management or project management software to assist in conducting an inquiry

C5. Students will use technology to aid collaboration during inquiry.

Division 1
1.1 share information collected from electronic sources to add to a group task

Division 2
2.1 retrieve data from available storage devices such as a shared folder to which a group has contributed
2.2 record group brainstorming, planning and sharing of ideas by using technology
2.3 extend the scope of a project beyond classroom collaboration by using communication technologies, such as the telephone and e-mail

Division 3
3.1 access, retrieve and share information from electronic sources such as common files
3.2 use networks to brainstorm, plan and share ideas with group members

Division 4
4.1 use telecommunications to pose critical questions to experts
4.2 participate in a variety of electronic group formats

C6. Students will use technology to investigate and/or solve problems.
Division 1
1.1 identify a problem within a defined context
1.2 use technology to organize and display data in a problem-solving context
1.3 use technology to support and present conclusions

Division 2
2.1 select and use technology to assist in problem solving
2.2 use data gathered from a variety of electronic sources to address identified problems
2.3 use graphic organizers, such as mind mapping/webbing, flow charting and outlining, to present connections among ideas and information in a problem-solving environment
2.4 solve problems using numerical operations and tools such as calculators and spreadsheets
2.5 solve problems requiring the sorting, organizing, classifying and extending of data using tools such as calculators, spreadsheets, databases or hypertext technology
2.6 solve issue-related problems using communication tools such as a word processor or e-mail to involve others in the process
2.7 generate alternative solutions to problems by using technology to facilitate the process

Division 3
3.1 articulate clearly a plan of action to use technology to solve a problem
3.2 identify the appropriate materials and tools to use in order to accomplish a plan of action
3.3 evaluate choices and the progress in problem solving, then redefine the plan of action as appropriate
3.4 pose and test solutions to problems by using computer applications such as computer-assisted design or simulation/modelling software
3.5 create a simulation or a model by using technology that permits the making of inferences

Division 4
4.1 investigate and solve problems of prediction, calculation and inference
4.2 investigate and solve problems of organization and manipulation of information
4.3 manipulate data by using charting and graphing technologies in order to test inferences and probabilities
4.4 generate new understandings of problematic situations by using some form of technology to facilitate the process
4.5 use programming tools such as macros, scripts and applets to modify or control a technological device
4.6 evaluate the appropriateness of the technology used to investigate or solve a problem

C7. Students will use electronic research techniques to construct personal knowledge and meaning.

Division 1
1.1 develop questions that reflect a personal information need
1.2 summarize data by picking key words from gathered information and by using jottings, point form or retelling
1.3 draw conclusions from organized information
1.4 make predictions based on organized information

Division 2
2.1 use a variety of technologies to organize and synthesize researched information
2.2 use selected presentation tools to demonstrate connections among various pieces of information

Division 3
3.1 identify patterns in organized information
3.2 make connections among related, organized data, and assemble various pieces into a unified message

Division 4
4.1 use appropriate strategies to locate information to meet personal needs
4.2 analyze and synthesize information to determine patterns and links among ideas
4.3 use appropriate presentation software to demonstrate personal understandings

APPENDIX IV
QUALITY LEARNING DOCUMENT

Quality Learning
Work in Progress
CALGARY BOARD OF EDUCATION

Purpose

The purpose of this document is to provide a System Statement on Quality Learning for use and guidance within the Calgary Board of Education. This paper provides an outline of our best knowledge about quality learning at this time.
Although the primary focus is the teacher and the student in the learning environment, we believe everyone in our Collaborative Community of Learners is both a teacher and a learner. Our work is meant to generate meaningful dialogue in varied settings within each school, our system and the community. By developing shared understandings of quality learning, we are better able to create learning conditions and to identify indicators of growth. These critical elements lead to significant learning outcomes. All students are entitled to a quality education. To achieve this, it is critical that we foster a shared understanding of what quality learning is and how it is actualized within the Calgary Board of Education.

Acknowledgment

Over a number of years, groups such as the Elementary Program Review, the Chief Superintendent's Commission on Literacy, the first Quality Learning inquiry group and the Elementary and Secondary Panels initiated a framework for examining quality learning. We wish to acknowledge all of the educators who have contributed so richly to the thinking that has shaped this document.

System Task Force for Quality Learning

Rosemary Allan, Teacher, Falconridge Elementary School
Kim Anderson, Curriculum Specialist, Collaborative Learning Community 2
Karen Bird, Assistant Principal, Rideau Park School
Kathleen Jones, Curriculum Leader, Lester B. Pearson High School
Dr. Marie Keenan, Principal, James Short Memorial School
Jim Langley, Assistant Principal, Bishop Pinkerton Junior High School
Jacqueline Lessard, Principal, Bob Edwards Junior High School
Karen McDaniel, Teacher, Fairview Junior High School
Margi Molyneux, Resource Teacher, Crescent Heights High School
Dr. Cheryl Oishi, Assistant Principal, William Roper Hull School
Lori Pamplin, Assistant Principal, Earl Grey Elementary School
John Rollins, Principal, Central Memorial High School

Design and Layout: Media Production, Communication and Learning Technologies, Calgary Board of Education

Statement of Purpose

The Calgary Board of Education, as a public education system, ensures that quality learning is accessible to all students. The system supports, nurtures and connects the work of teachers, parents and families. The Board's governance exhibits wisdom, courage, foresight and shared leadership so that time, talent and resources are used in the best possible way. The Board promotes staff, parent and community commitment to an efficient and effective learning organization. The Board fosters a climate which is visionary, reflective, collaborative and responsive to change. The Board acts as an advocate for every student to have an equal opportunity to become a competent, productive and self-directed citizen. The Board acts as an advocate for every school to have the resources to assist all its students to be the best that they can be. The Board shares information about its work and the system's performance with Calgarians. The knowledge, skills and attitudes of its students, and their commitment to lifelong learning are the primary measures of the Calgary Board of Education's effectiveness.

Board of Trustees
January 1998

Students are the heart of the learning community. The model illustrates the human interaction in our learning community that supports, promotes and advances quality learning. The circles around the students represent adult support of student learning. The respectful relationships among these constituents enable a steady, central and constant focus on student entitlement and achievement.
Dr. D. Michaels
Excerpt from: School System Opening Address, September 10, 1997

©1998 Calgary Board of Education

I believe that we have only just begun the process of discovering and inventing the new organizational forms that will inhabit the twenty-first century. To be responsible inventors and discoverers, though, we need the courage to let go of the old world, to relinquish most of what we have cherished, to abandon our interpretations about what does and doesn't work. As Einstein is often quoted as saying: No problem can be solved from the same consciousness that created it. We must learn to see the world anew.
- Margaret Wheatley, 1992

Margaret Wheatley's comment draws our attention to a growing societal need to accommodate rapid change. The Calgary Board of Education recognizes that old ways of doing business, old ways of approaching learning and teaching, will no longer work in a world that requires fundamentally different ways of thinking. The Calgary Board of Education and Alberta Education are united in their commitment to quality learning. There are common understandings, conditions, indicators and outcomes that provide a framework for learning. They are the primary organizers for this quality learning document.

Quality learning is at the heart of the educational process. Acknowledging that teaching and learning occur across a broad spectrum of contexts, it is a compelling fact that certain classroom and school conditions create the best possible opportunities for learning and achievement. Teaching practices designed to engage learners and foster independent thinking will prepare students for an increasingly competitive and complex world that requires different kinds of competencies and attitudes. The classroom is changing. The best knowledge about learning and teaching must guide teaching practice.

The fundamental commitment of the Calgary Board of Education is to serve the learning needs of the individual. This goal is best achieved in an organization that takes on the attributes and qualities of a learning community, an interconnected system of people and services where collaboration and collegiality are paramount. Ideal quality learning experiences will prepare students to be responsible, contributing members of society. Each learner is unique. Collectively, learners present a rich diversity of experience. Our students learn in different ways and at different rates. Each of them comes to school with prior knowledge that is the foundation for all future learning. It is the school's obligation to create an organizational framework that enhances student-teacher relationships and is grounded in expertise, knowledge and experience about learning and teaching. It will be the quality of our relationships and the quality of our individual and collective spirit that will serve us best as we look for ways to think and learn together and to serve the learning needs of our students.

Increasingly, our school populations are becoming more diverse, and such diversity is reflected in a broad spectrum of individual student differences: knowledge, skill, attitude, language, culture, race and religion. The classroom and each school should model a sense of community that values diversity. Teaching practices should be informed by knowledge and understanding that support an inclusive view of education. Ultimately, quality learning is more than preparation for life. It is life. The years that students spend in our schools will give them a template for lifelong learning, a foundation for their future.
Even as we maintain traditional values of scholarship and citizenship in our schools, we will move forward with new ideas, improved organizational structures, enlightened notions of teaching, learning and leadership. Never before has there been a greater need for a thorough, ongoing examination of practice and professionalism. The Calgary Board of Education possesses a wealth of knowledge, expertise and commitment. Its long history of demonstrated excellence and service to public education will serve learning and teaching well as we approach the 21st century.

Quality learning is an interactive, holistic process between learners and their environment. Thoughtful learners are eager to take risks and are able to reflect critically on their progress. They realize the importance of setting goals individually and in partnership with others. "Optimism generated by a world view of interdependence and dynamic growth is reflected in the vision of students working together in common purpose, cooperating for mutual learning, and challenging one another to higher creativity" (Langford & Cleary, 1995). The integration of technology supports this interactive process. Technology is a powerful resource that enhances and supports quality learning. As a learning tool, technology facilitates communication and collaboration. Technology can help to bring the resources of the world community to the student by extending learning beyond the walls of the classroom.

In a quality learning environment, learners engage in purposeful work and construct meaning in a social context. Through higher order thinking, learners acquire deep understandings. Ideally, they develop a passion for learning. Learners accept responsibility and ownership for learning and behaviour based on clear expectations and desired outcomes. There is a commitment to broad-based assessment. Assessment is an integral part of the teaching and learning process.

We are all learners. As learners, each of us is unique and brings to the learning environment diverse values, beliefs and experiences. Relationships among learners involve care, respect, trust and openness. Differences are valued "by bringing different perspectives together in the spirit of mutual respect" (Covey, 1989). Building a quality learning environment requires energy and commitment. Educators need to be lifelong learners. As such, teachers are professionally responsible and accountable for creating the conditions for quality learning. While instructional strategies may vary, it is essential that current learning theory guides our practice. As teachers develop their practice and their individual professional development plans, they need to consider provincial outcomes, the Calgary Board of Education statements of vision, mission, purpose and beliefs as well as school and department improvement plans.

The outcomes of quality learning are clearly defined in the Calgary Board of Education Expectations for Student Performance. It is expected that students will acquire these competencies and attributes with the expertise of educators and with the support of families, communities and the school system.

Critical Elements

Understandings, conditions, indicators and outcomes are elements essential to quality learning. Based on current educational research and classroom experience, five key understandings have been identified to guide our thinking about quality learning. Specific conditions and indicators for classroom practice are detailed on pages 10 - 14.
The outcomes of quality learning are global in nature and are outlined on page 15. A diagrammatic summary of the quality learning process is presented on pages 8 and 9.

**Understandings**
*What understandings are critical to quality learning?*
- Learning requires purposeful involvement.
- Interpersonal relationships are essential to the learning process.
- Knowledge is constructed within a climate of inquiry.
- Clear expectations and relevant feedback are needed.
- Diversity is valued within a responsive environment.

**Conditions**
*What is necessary for quality learning?*
Individuals within a collaborative learning community create conditions to foster each of the five understandings.

**Indicators**
*How can quality learning be recognized?*
Indicators are examples of behaviours that show individuals are taking responsibility for their role within a mediated learning environment.

**Outcomes**
*What are the desired results of quality learning?*
Significant learning outcomes extend beyond the scope of specific curriculum expectations. Quality learning experiences potentially enable learners to be:
- Responsible citizens
- Self-directed learners
- Effective communicators
- Collaborative team players
- Critical/Creative thinkers

Learning requires purposeful involvement.

**Conditions**
Teachers foster purposeful involvement by:
- engaging learners emotionally, socially, physically and intellectually
- encouraging learners' autonomy and initiative through choice
- nurturing learners' natural curiosity
- using a variety of instructional strategies
- knowing what is important to learners and the community
- providing access to a variety of appropriate resources
- modelling the excitement of learning
- connecting learning to individuals' lives
- exploring various learning settings
- introducing relevant technology to serve as a learning tool

**Indicators**
Learners demonstrate purposeful involvement by:
- engaging in meaningful work
- seeking challenges
- persisting with challenging tasks
- explaining why they do what they do
- making choices and responsible decisions
- making personal connections to learning
- choosing strategies appropriate to a specific task
- taking initiative to extend learning
- articulating the importance of their work
- taking risks

*Come to the edge,* he said.
*They said: We are afraid.*
*Come to the edge,* he said.
*They came.*
*He pushed them... and they flew...*
—Guillaume Apollinaire

Interpersonal relationships are essential to the learning process.
**Conditions**
Teachers foster interpersonal relationships by:
- providing time and opportunities to build relationships
- focusing on teaching cooperative and collaborative skills
- reinforcing behaviours and attitudes which target care and respect
- modelling positive interrelationships
- enabling learners to engage in conflict resolution
- stressing development of communication skills
- organizing flexible groupings
- respecting all members of the learning community
- responding to the experiences, ideas and issues of others

**Indicators**
Learners develop interpersonal relationships by:
- making meaningful contributions to the learning community
- listening to others with respect
- empathizing with others
- working collaboratively with others
- helping one another
- taking responsibility and initiative in group process and conflict resolution
- speaking with conviction
- communicating with confidence
- valuing the right to voice opinions

*Not chaos-like, together crushed and bruised, But, as the world harmoniously confused: Where order in variety we see, And where, though all things differ, all agree.*
— Alexander Pope

Knowledge is constructed within a climate of inquiry.

**Conditions**
Teachers foster construction of knowledge within a climate of inquiry by:
- bridging prior knowledge with new learning
- valuing exploration and experimentation
- encouraging individual and collective reflection
- providing time for learners to construct relationships and build connections
- facilitating meaningful dialogue
- modelling and teaching a variety of learning strategies
- creating a culture of questioning
- engaging learners in higher order thinking
- connecting mandated curriculum to learners' lives and societal issues
- adapting curriculum to address specific needs
- ensuring access to the learning context
- linking theoretical and practical knowledge
- transferring learning across settings and subject areas
- encouraging risk-taking and flexibility
- introducing various strategies for organizing ideas
- providing open-ended problems, questions, projects
- promoting independence and interdependence

**Indicators**
Learners demonstrate construction of knowledge by:
- engaging in inquiry
- constructing meaning independently and collectively
- explaining their understanding of relationships among ideas
- defining purposes for learning
- evaluating their choices and decisions
- asking questions and solving problems
- participating in group talk to achieve deeper understandings
- using precise language to clarify thinking and express ideas
- using problem solving strategies
- expressing understandings in various modalities
- applying knowledge in new situations
- assisting one another
- using metacognitive strategies

*New frameworks are like climbing a mountain – the larger view encompasses, rather than rejects the earlier more restricted view.*
– Albert Einstein

Clear expectations and relevant feedback are needed.
**Conditions**
Teachers provide clear expectations and relevant feedback by:
- expecting learners to demonstrate their understanding
- encouraging and valuing personal excellence
- developing standards of achievement with learners
- communicating clear, challenging expectations for learning and behaviour
- negotiating classroom activities
- involving learners in setting criteria for learning and behaviour
- assessing learning in a variety of ways
- ensuring that evaluation strategies reflect the intended learning
- articulating curriculum requirements to learners and parents
- providing ongoing feedback about learning processes and products
- helping learners understand how they learn
- soliciting learner input
- responding to varied learning styles and multiple intelligences

**Indicators**
Learners respond to clear expectations and relevant feedback by:
- taking responsibility and ownership for learning and behaviour
- striving for quality products and personal excellence
- valuing their own progress
- engaging in self-assessment and reflection
- setting realistic goals
- making continuous progress
- demonstrating confidence in themselves and their own abilities
- developing self-awareness
- representing understandings in varied and authentic ways
- inviting feedback and responding to information
- developing metacognitive awareness

*The mind is not a vessel to be filled but a fire to be kindled.*
—Plutarch

Diversity is valued within a responsive environment.

**Conditions**
Teachers value diversity within a responsive environment by:
- including all members within the learning community
- connecting home, community and school experience
- sharing responsibility for all students
- committing to the examination of beliefs, attitudes, policies and organizations
- planning for varied developmental stages and learning rates
- gaining knowledge about different learners
- encouraging alternate and diverse learner responses
- shifting instructional strategies based on learner responses
- recognizing the validity of different world views and life experiences
- addressing equity of resources and opportunities
- responding to increasingly complex and diverse learner needs

**Indicators**
Learners value diversity by:
- respecting others' rights to different beliefs and values
- communicating in an open, honest, respectful manner
- engaging and connecting with the ideas of others
- utilizing diverse viewpoints in a learning context
- accepting one another
- appreciating different forms of expression
- sharing beliefs and experiences with one another
- developing flexibility in approaches to learning

*Some say knowledge is power but that is not true. Character is power.*
— Sathya Sai Baba

Significant Learner Outcomes

"Education is responsible for ensuring that all students have the opportunity to acquire the knowledge, skills, and attitudes needed to be self-reliant, responsible, caring, and contributing members of society" (Alberta Education Mandate, 1995). The Calgary Board of Education has expanded on Alberta Education's mandate in the document, *Expectations for Student Performance*. The competencies and attributes listed below are taken from this CBE document. They have been reorganized into five broad headings.
**Responsible Citizens**
- understand and value their own culture and cultures of others
- understand historical and global perspectives
- identify moral and ethical implications for decision making
- identify with the context of community
- seek information about all sides of social, political, economic, and environmental issues
- take ownership for their own actions and choices
- articulate and live by personal values and beliefs which demonstrate respect for themselves and others
- exhibit caring, honesty, integrity, justice and personal ethics

**Self-Directed Learners**
- understand their academic, physical, emotional, social and creative strengths to enhance their personal development
- adapt and exercise flexibility while maintaining personal values and principles
- are self-confident and have positive self-esteem
- make their own decisions, free of peer pressure
- persist in accomplishing meaningful work
- embark on lifelong learning to cope with change and enhance physical and personal wellness

**Critical and Creative Thinkers**
- think for themselves (creatively, analytically, critically, reflectively and aesthetically)
- solve problems and make decisions
- access, analyze and synthesize information
- perceive and make connections
- exhibit appreciation and understanding of fine and practical arts

**Collaborative Team Players**
- interact positively with others
- collaborate, cooperate, build consensus, debate, discuss and assert
- are confident in their ability to make a difference
- initiate and sustain strong positive relationships
- appreciate and accept cultural and personal differences

**Effective Communicators**
- communicate effectively in oral, written and aural English and other languages
- are competent in numeracy and in scientific, computer, visual and media literacy

Guiding References

Books

Brooks, J.G., & Brooks, M.G. (1993). *In Search of Understanding: The Case for Constructivist Classrooms*. Virginia: ASCD.
Covey, S. (1989). *The Seven Habits of Highly Effective People: Restoring the Character Ethic*. New York: Simon and Schuster.
Danielson, C. (1996). *Enhancing Professional Practice: A Framework for Teaching*. Virginia: ASCD.
Darling-Hammond, L. (1997). *The Right to Learn*. New York: Simon and Schuster.
Langford, D.P., & Cleary, B.A. (1995). *Orchestrating Learning with Quality*. Milwaukee: ASQC Quality Press.
Larrabee, M.J. (Ed.). (1992). *An Ethic of Care*. Georgetown: Routledge, Chapman, Hall, Inc.
Wheatley, M. (1996). *A Simpler Way*. San Francisco: Berrett-Koehler.

Calgary Board of Education Documents

Calgary Board of Education. (1994). *Postmodernism and Constructivism: The Changing Educational Landscape*. Reader Service.
Calgary Board of Education. (1996). *Examining Curriculum*. Reader Service, 1.
Calgary Board of Education. (1996). *Quality Learning* (draft). Calgary, AB: Dr. S. Ditchburn (chair), Elementary Panel.
Calgary Board of Education. (1996). *Calgary Board of Education: A Learning Community* (draft). Calgary, AB: P. Dowswell, T. Lewis, & I. Rollins.
Calgary Board of Education. (1997). *Final Report and Recommendations for Action*. Calgary, AB: The Facilitator and Steering Committee on Diversity and Equity.
Calgary Board of Education. (1997). *Diversity: A Synthesis*. Calgary, AB: S. Paget.
Calgary Board of Education. (1997). *Three Year Education Plan 1997-2000* (draft). Calgary, AB.
Calgary Board of Education. (1997). *Accountability Services: A Framework* (draft). Calgary, AB: Dr. J. Holmes.
Calgary Board of Education. (1997). *Summer Reading*. Reader Service, 2(5).
Michaels, D. (1997, April). *Characteristics of Quality Learning in Classrooms*. Professional Development Day Address, Crescent Heights High School, Calgary, AB.

Paper

Murray, J., Harris, D., Ikin, R., Pettit, J., & Warren, S. (1994). *Quality Teaching, Quality Learning*. http://www.dse.rsw.edu.au/F2.0/material/qtql.htm. New South Wales: Department of School Education.
ABSENT SUPERFICIAL ABDOMINAL REFLEXES IN CHILDREN WITH SCOLIOSIS
AN EARLY INDICATOR OF SYRINGOMYELIA

HAMID G. ZADEH, SAMIR A. SAKKA, MICHAEL P. POWELL, MIN H. MEHTA
From the Royal National Orthopaedic Hospital Trust, Stanmore, England

We describe 12 children with idiopathic scoliosis who had a persistent absent superficial abdominal reflex (SAR) on routine neurological examination. MRI showed syringomyelia to be present in ten. The average age at detection of the scoliosis was 4.3 years and at diagnosis of syringomyelia 6.6 years. In all ten children the SAR was consistently absent on the same side as the convexity of the curve. In two it was the only abnormal neurological sign. An absent SAR in patients with scoliosis is an indication for investigation for underlying syringomyelia. In the children with syringomyelia, six had thoracic and four thoracolumbar curves. The clinical features differed in the two groups. Patients with thoracic curves were generally asymptomatic. Their neurological signs were subtle and none had any motor signs. By contrast, patients with thoracolumbar curves had symptoms and neurological signs. Abnormal gait was present in all four patients with thoracolumbar curves. In three this was due to considerable motor weakness. In eight children syringomyelia was associated with a Chiari-I malformation. In seven the syrinx was treated surgically by decompression of the foramen magnum.

J Bone Joint Surg [Br] 1995;77-B:762-7.
Received 3 January 1995; Accepted 2 March 1995

The term 'syringomyelia' derives from the Greek words for 'tube' and 'marrow' and was first described by Ollivier in 1827. The presenting features are diverse (Williams 1979). In children or adolescents, scoliosis is found in over 50% of cases (Tashiro et al 1987; Gurr, Taylor and Stobo 1988; Burwell et al 1992; Williams 1992). Arai et al (1993), in a comprehensive study, reported that 4.0% of patients with scoliosis with curves larger than 20° had syringomyelia. New imaging techniques and improved clinical awareness have identified more patients with idiopathic scoliosis who have syringomyelia (Nohria and Oakes 1990). It is progressive and early diagnosis and treatment are therefore paramount (Williams 1992). Our aim was to indicate the clinical features of importance in the early detection of syringomyelia with special reference to the superficial abdominal reflex (SAR) and to report our experience of the use of corrective plaster jackets and decompression of the foramen magnum in the management of these patients.

PATIENTS AND METHODS

Among patients with scoliosis referred to our unit are children with infantile or juvenile idiopathic scoliosis. Routine clinical assessment includes a detailed neurological examination and recording of the tendon reflexes, the plantar response and the SAR. It has been our policy since 1985 to suspect syringomyelia when an absent SAR was detected, even if there were no other neurological signs. Between 1986 and 1993 we observed 12 such children with an abnormal SAR. MRI showed syringomyelia to be present in ten. In the other two no intracranial or intraspinal lesions were demonstrated. There were six girls and four boys. Their average age at the detection of the scoliosis was 4.3 years (1 year 8 months to 6 years 7 months) and at presentation to our unit for the first time 5.8 years (2 years 5 months to 7 years 3 months). The average age at diagnosis of syringomyelia was 6.6 years (4 years 11 months to 11 years 9 months).
The average follow-up was 4.6 years (1 year 3 months to 8 years 8 months).

RESULTS

Six children had thoracic and four thoracolumbar curves. The clinical features differed in these two groups. Those with thoracic curves and syringomyelia were asymptomatic and had few abnormal physical signs. None had motor signs, but four had abnormal sensation over the trunk. By contrast, all four patients with thoracolumbar curves had symptoms and obvious neurological signs. An abnormal gait was present in all four and in three this was due to considerable motor weakness. Abnormal sensation was noted in three patients. In both groups abnormal reflexes were present (Table I). In all ten children the SAR was absent on the same side as the convexity of the curve. That on the side opposite to the convexity of the curve was more variable and was present in three patients but only partially in two. In one patient partial recovery was noted after surgical decompression of the syrinx. The average Cobb angle at first presentation was 34° and at the latest follow-up examination 38°.

Orthopaedic treatment of the scoliosis consisted of the application of serial corrective plaster jackets or removable braces. General anaesthesia was necessary to apply plaster jackets in younger children. The brace was used during periods of slow growth (as judged by growth charts), when the scoliosis had been nearly corrected or during warm summer months. MRI in eight patients showed the syrinx to be associated with a Chiari-I malformation (Fig. 1). In one, it was associated with tethering of the conus to S2 and an intraspinal lipoma, and in another it was multiloculated. In nine children the syrinx has been managed surgically. Seven had decompression of the foramen magnum (performed by MPP). Two other children were operated on at different neurosurgical units. One had insertion of a syringoarachnoid shunt and the other lumbar laminectomy and syringoperitoneal shunting. For the child with a multiloculated syrinx there was no satisfactory neurosurgical procedure. The average follow-up of the seven children who had decompression of the foramen magnum was 3.4 years (11 months to 6 years 6 months). All had MRI at 9 to 12 months after the operation to assess the degree of decompression (Fig. 2). This showed that two required further surgery; in one a syringopleural shunt was inserted and in the other a C1 arch division was performed. Further MRI in both patients confirmed satisfactory decompression of the syrinxes. Obvious neurological improvement has been noted in five patients, and in two who had mild sensory deficits no further deterioration has occurred. After combined orthopaedic and neurosurgical treatment the scoliosis improved in three patients, stabilised in three and progressed in four. None of the children has had surgical correction of their scoliosis, but in some this may be necessary in the future.

DISCUSSION

The key to the early diagnosis of syringomyelia in scoliosis is a high index of suspicion and a thorough neurological examination (Huebert and MacKinnon 1969). A number of authors have reported that all affected patients have abnormal neurological signs (Depotter et al 1987; Phillips, Hensinger and Kling 1990; Lena et al 1992; Arai et al 1993). We agree with this observation, although in two of our ten patients the only abnormal neurological finding was an absent SAR.
Lewonowski, King and Nelson (1992) reported that in 26 patients with idiopathic scoliosis under 11 years of age without neurological signs, MRI showed abnormal intraspinal pathology with Chiari-I malformation in five. They did not, however, record the SAR. The SAR is a part of routine neurological examination and was first described by Rosenbach (1876). Lonnum (1956) reported that this reflex is present in the newborn and infants and Madonick (1957) stated that it may be absent in over 10% of normal individuals less than 50 years of age. Clinical work by Lonnum (1956) and EMG studies by Teasdall and Magladery (1959) showed that it is essentially a spinal reflex which can be modified by activity from higher centres in the CNS via the pyramidal tracts. An abnormal SAR in scoliosis associated with syringomyelia has been mentioned by a number of authors (Mehta 1992; Arai et al 1993; Charry et al 1994). In our experience an absent SAR on the same side as the convexity of the spinal curve is a consistent and early physical sign in the pathological evolution of this disorder. In some patients this may be the only abnormal neurological sign and precedes the development of other such signs.

In young children detailed neurological examination is often difficult. Routine MRI in children with scoliosis may be desirable but it is impractical for two main reasons. First, MRI is costly and not readily available to all specialist units; secondly, young children do not tolerate it well and general anaesthesia is often necessary to obtain good-quality images. An absent SAR is therefore a reliable and useful indicator for selecting those children likely to have syringomyelia. Radiological features which suggest syringomyelia include an increase in the width and depth of the cervical canal, bony abnormalities at the craniocervical junction, diastematomyelia, and occipitalisation of the atlas (McRae and Standen 1966; Williams 1979). They are often difficult to detect and are generally observed retrospectively. MRI findings in the Chiari-I malformation include herniation of the cerebellar tonsils by more than 5 mm, reduction in the subarachnoid space both anterior to the brain stem and posteroinferior to the cerebellum, descent of the brain stem, syringobulbia and cervicomedullary kinking (Pillay et al 1991) (Fig. 1).

The primary and the most important step in the successful treatment of scoliosis with syringomyelia is early surgical decompression of the syrinx. There are many different operative techniques and the ideal procedure is somewhat controversial (Madsen, Green and Bowen 1995). In patients with Chiari-I malformation the pathogenesis of syringomyelia appears to be due to the phenomenon of craniospinal pressure dissociation sometimes described as 'suck' (Williams 1969, 1980). Clinical and experimental work has shown that after episodes of raised thoracoabdominal pressure the herniated hindbrain or cerebellar tonsils behave as a valve and block the normal redistribution of the pressure between the subarachnoid space of the cranium and the spinal cord. The resultant pressure difference drives the CSF from the high-pressure ventricular system into the central canal of the cord which develops into the 'syrinx'. The basic principle of hindbrain decompression is to reverse this phenomenon and to release the neural tissue impacted in the foramen magnum.
In 1965 Gardner first performed decompression surgery in which the foramen magnum was enlarged posteriorly, the lamina of the upper cervical vertebrae was removed, the dura was opened widely and the floor of the fourth ventricle was incised. The communication between the syrinx and the fourth ventricle was closed by a plug of muscle in the obex.

Table I

| Case | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| Sex | M | F | M | M | F | F | F | F | M | M |
| Age (yr mth) | 7 7 | 12 5 | 8 10 | 13 8 | 12 5 | 10 2 | 10 10 | 6 2 | 8 1 | 14 6 |
| Age at detection of scoliosis (yr mth) | 5 | 3 6 | 2 | 4 | 5 3 | 5 8 | 1 8 | 6 7 | 3 6 | |
| Age at diagnosis of syringomyelia (yr mth) | 11 9 | 6 6 | 7 | 6 2 | 6 3 | 7 2 | 4 11 | 7 11 | 8 7 | |
| Type of curve | Right thoracolumbar | Left thoracolumbar | Right thoracolumbar | Left thoracolumbar | Right thoracic | Right thoracic | Asymptomatic | Asymptomatic | Left thoracic | Asymptomatic |
| Symptoms | Abnormal gait, right hemiparesis | Abnormal gait | Abnormal gait | Abnormal gait | Abnormal gait | Right hemiparesis, enuresis | Headaches | Up going plantars and plantars | Right thoracic | Asymptomatic |
| Abnormal sensory signs | Yes | No | Yes | Yes | Yes | Yes | No | Yes | Yes | No |
| Abnormal motor signs | -/+ | +/- | +/- | +/- | +/- | +/- | +/- | +/- | +/- | +/- |
| Initial Cobb angle (degrees) | 30 | 24 | 28 | 38 | 25 | 31 | 62 | 38 | 24 | |
| Latest Cobb angle (degrees) | 33 | 10 | 41 | 61 | 27 | 48 | 52 | 60 | 26 | |
| MRI findings other than the syrinx | Chiari I | Chiari I | Chiari I | Chiari I | Chiari I | Chiari I | Chiari I | Chiari I | Multiloculated syrinx | Chiari I |
| Operative procedure | Foramen magnum decompression | Foramen magnum decompression | Foramen magnum decompression | Foramen magnum decompression | Foramen magnum decompression | Foramen magnum decompression | Foramen magnum decompression | Foramen magnum decompression | Foramen magnum decompression | Foramen magnum decompression |
| Revision surgery | - | - | - | - | - | - | - | - | - | - |
| Postoperative neurological progress | Improved | Improved | Improved | Improved | Improved | Improved | Improved | Improved | Improved | Improved |

Since then a number of authors have expressed reservations regarding the use of an obex plug and advocated modifications to the original procedure. Williams (1978) avoided the use of an obex plug and created an artificial cisterna magna by incision and suture of the dura and the arachnoid. Matsumoto and Symon (1989) reported a higher mortality and complication rate with Gardner's operation and recommended craniocervical decompression and syringoperitoneal shunting. Logue and Edwards (1981) also observed a higher complication rate with Gardner's operation and preferred a simple posterior decompression, leaving the dura open but preserving the arachnoid membrane. Syringostomy is reserved for selected cases. In foramen magnum decompression the cerebellar tonsillar herniation is decompressed by a combination of C1 laminectomy and enlargement of the foramen magnum, without opening the dural sac. In our first patient the dura was opened and a fascia lata graft inserted. Despite neurological improvement, this patient had inadequate decompression of the syrinx and later required a second operation to insert a syringopleural shunt. In the next six operations the dura was left intact.
In one patient MRI revealed remnants of the lamina of C1 which appeared to be the cause of incomplete decompression of the syrinx. Further surgery was required in the form of division of the C1 arch. Overall, our short-term results appear to be satisfactory. The latest MRI in all seven patients confirmed complete collapse of the syrinx. None of our patients has deteriorated and five have shown considerable neurological improvement.

In idiopathic infantile or childhood scoliosis a plaster jacket is used to counteract the effect of the scoliosis, and the child's natural growth is used to correct the residual spinal and ribcage deformities (Mehta 1984). For scoliosis associated with syringomyelia, however, the results are less predictable. We have observed that a plaster jacket can control the spinal curve, but because of the underlying neurological pathology the deformity can rapidly deteriorate if this treatment is discontinued. We also found that the use of a brace alone is inadequate. For advanced curves, the plaster jacket can control the deformity until adolescence, when surgical correction may be undertaken. For small curves, early surgical decompression of the syrinx and a plaster jacket can control or halt the progression of the curve until skeletal maturity and avoid the need for surgical correction of the scoliosis. We believe that a plaster jacket in combination with surgical decompression of the syringomyelia slows down or halts the progression of the spinal deformity and is useful in improving the cosmetic appearance of the ribcage deformity.

**Conclusions**

1) Syringomyelia is an important cause of scoliosis in children and its diagnosis relies on a high index of suspicion and a thorough neurological examination.
2) Neurological signs in patients who present with thoracic curves are often subtle.
3) An absent SAR on the same side as the convexity of the curve is an early and sensitive indicator of underlying syringomyelia; sometimes this may be the only abnormal neurological sign.
4) Early surgical decompression of the syrinx is associated with recovery or stabilisation of the neurological deficit and reduction of the rate of progression of the scoliosis.
5) Preliminary results after decompression of the foramen magnum are favourable.

No benefits in any form have been received or will be received from a commercial party related directly or indirectly to the subject of this article.

**REFERENCES**

Arai S, Ohtsuka Y, Moriya H, Kitahara H, Minami S. Scoliosis associated with syringomyelia. *Spine* 1993;18:1591-2.
Burwell RG, Cole AA, Cook TA, et al. Pathogenesis of idiopathic scoliosis: the Nottingham concept. *Acta Orthop Belg* 1992;58 (Suppl 1):33-58.
Charry O, Koop S, Winter R, et al. Syringomyelia and scoliosis: a review of twenty-five pediatric patients. *J Pediatr Orthop* 1994;14:309-17.
Depotter J, Rigault P, Pouliquen JC, et al. Syringomyélie et scoliose chez l'enfant et l'adolescent: à propos de 14 cas. *Rev Chir Orthop* 1987;73:203-12.
Gardner WJ. Hydrodynamic mechanism of syringomyelia: its relationship to myelocoele. *J Neurol Neurosurg Psychiat* 1965;28:247-59.
Gurr KR, Taylor TKF, Stobo P. Syringomyelia and scoliosis in childhood and adolescence. *J Bone Joint Surg [Br]* 1988;70-B:159.
Huebert HT, MacKinnon WB. Syringomyelia and scoliosis. *J Bone Joint Surg [Br]* 1969;51-B:338-43.
Lena G, Boudawara Z, Genitori L, Cavalheiro S, Choux M. 14 cases of communicating syringomyelia associated with Chiari I malformation in children. *Neurochirurgie* 1992;38:297-303.
Lewonowski K, King JD, Nelson MD. Routine use of magnetic resonance imaging in idiopathic scoliosis patients less than eleven years of age. *Spine* 1992;17(Suppl 6):109-16.
Logue V, Edwards MR. Syringomyelia and its surgical treatment: an analysis of 75 patients. *J Neurol Neurosurg Psychiatry* 1981;44:273-84.
Lonnum A. The abdominal skin reflexes in man. *Acta Psychiat et Neurol* 1956;(Suppl 108):243-53.
Madonick MJ. Statistical control studies in neurology, 8: the cutaneous abdominal reflex. *Neurology* 1957;7:459-65.
Madsen PW, Green BA, Bowen BC. Syringomyelia. In: Rothman RH, Simeone FA, eds. *The spine*. Vol. 2, 3rd ed. Philadelphia: WB Saunders Company, 1992:1575-60.
Matsumoto T, Symon L. Surgical management of syringomyelia: current results. *Surg Neurol* 1989;32:258-65.
McRae DL, Standen J. Roentgenologic findings in syringomyelia and hydromyelia. *Am J Roentgenol* 1966;98:695-703.
Mehta MH. Infantile idiopathic scoliosis. In: Dickson RA, Bradford DS, eds. *Orthopaedics 2. Management of spinal deformities*. London, etc.: Butterworths, 1984:101-20.
Mehta MH. The conservative management of juvenile idiopathic scoliosis. *Acta Orthop Belg* 1992;58 (Suppl 1):91-7.
Nohria V, Oakes WJ. Chiari I malformation: a review of 43 patients. *Pediatr Neurosurg* 1990-91;16:222-7.
Ollivier CP. Traité de la moelle épinière et de ses maladies. Paris: Crevot, 1827:178.
Phillips WA, Hensinger RN, Kling TF. Management of scoliosis due to syringomyelia in childhood and adolescence. *J Pediatr Orthop* 1990;10:351-4.
Pillay PK, Awad IA, Little JR, Hahn JF. Surgical management of syringomyelia: a five year experience in the era of magnetic resonance imaging. *Neurol Res* 1991;13:3-9.
Rosenbach O. Ein Beitrag zur Sympt. cerebraler Hemiplegien. *Arch Psychiat* 1876;6:845-51.
Tashiro K, Fukazawa T, Moriwaka F, et al. Syringomyelic syndrome: clinical features in 31 cases confirmed by CT myelography or magnetic resonance imaging. *J Neurol* 1987;235:26-30.
Teasdall RD, Magladery JW. Superficial abdominal reflexes in man. *Arch Neurol Psychiat* 1959;81:28-36.
Williams B. The distending force in the production of 'communicating syringomyelia'. *Lancet* 1969;2:189-93.
Williams B. A critical appraisal of posterior fossa surgery for communicating syringomyelia. *Brain* 1978;101:223-50.
Williams B. Orthopaedic features in the presentation of syringomyelia. *J Bone Joint Surg [Br]* 1979;61-B:314-23.
Williams B. On the pathogenesis of syringomyelia: a review. *J R Soc Med* 1980;73:798-806.
Williams B. Syringomyelia. In: Findlay G, Owen R, eds. *Surgery of the spine: a combined orthopaedic and neurosurgical approach*. Vol. 2. Oxford: Blackwell Scientific Publications, 1992:891-906.
The Tasmanian Championships, 1947

G. Chisholm (Victoria)

THE 1947 Tasmanian and Inter-Club events were held on Mount Mawson in the National Park. This snowfield is in the south of the State, some 60 miles from Hobart. The road from Hobart follows the picturesque Derwent River Valley through New Norfolk and similar beauty spots in this fertile area. The neatly laid out orchards and hop fields make this as pleasant an approach as anywhere in Australia. The road ends at Lake Dobson, 3400 ft., and if you are lucky enough to find it clear of snow it is a fast trip from Hobart. However, during and after heavy snowfalls, you may only get within three or four miles of Lake Dobson.

The four government huts are situated a few hundred yards from the end of the road and are known as the Lake Dobson huts. They are a good class of hut, being of all-timber construction, with accommodation for fifty persons. They are known individually as "Telopea," "Fagus," "Eucalypt" and "Pandanni." All through this lovely area you find names that are both strange and new, with a ring that appeals to you immediately, and as they and their glorious surroundings are opened to you, it is with a determined "I'll be back" feeling that you finally leave. During the Championship Meeting, those at these huts were catered for by a cook and his assistant. These arrangements, and others noted later, were made by the Southern Section of the Tasmanian Ski Council, which had charge of this meeting. All those involved deserve high praise for the capable manner in which they carried out their duties.

At the other end of Lake Dobson is the Walking Club Hut, and half a mile further on is the Alpine Club Hut on the shores of Eagle Tarn. These are also well-built and well-appointed huts. All these huts are at the foot of Mount Mawson, which rises 1000 ft. above them in a little over half a mile. This is the only disadvantage of the low-level huts, as those living in them have this stiff climb each morning before reaching the good ski-ing slopes, and the snow is not often good enough to get a good run home. For competitors in race events it is very trying, especially as there is very little shelter on Mawson, and a picnic lunch is not fun for long in bad weather, and we had that! The University Hut is on the tree line, at the top of this climb. It is the smallest hut, but not because of lack of energy on the part of the club members, as everything of, and in, it was carried there on the backs of these snow slaves. All these huts were connected by field telephone, which proved highly efficient and well worth the trouble of laying the lines in the deep soft snow on this heavily wooded slope.

As the Meeting was timed for a rather short period, an attempt was made to get some events started the first morning, although the weather, as at other Australian and New Zealand meetings in 1947, kept getting worse even when this seemed impossible. However, the Slalom was held on a comparatively sheltered slope known as the Golden Stairs. This slope has some 400 vertical feet and an average of 35 deg. This seems hard to believe until one learns that it was cleared by an avalanche, and sees the way trees and bushes have been knocked about by other large falls of snow in the vicinity. On many slopes the Slalom set would have been quite open, as gates were set wide apart and pairs well separated. Here it proved too tight, owing to the steepness of the slope, which was underestimated during setting.
Naylor gave a very polished display to gain first place. He showed sound judgment in not rushing things, which proved the downfall of most of the others. Results of the two runs were as follows:

1. R. Naylor 2 mins. 11 secs.
2. D. Wilson 2 mins. 31 4/5 secs.
3. R. Tilley 2 mins. 33 1/5 secs.

The blizzard was so bad that many competitors did not start, and officials who stayed at their posts had to be thawed out with the help of heating fluid. The Women's Slalom was held under conditions almost as bad, but on a less severe slope. The women's standard is below the men's, although several of them need only a little more racing on steeper slopes, and a little coaching from some of the more experienced skiers, to improve very rapidly. Results of the two runs in the Slalom were:

1. M. Gibson 1 min. 29 secs.
2. N. Hunter 1 min. 52 secs.
3. E. Masterman 1 min. 55 secs.

As things were now getting wet in addition to everything else, the meeting was postponed for one month. The first week-end in October found a lot of snow gone from the exposed north faces and lower slopes, but the tarns were still frozen, and Golden Stairs had a greater depth of snow than in August. On a glorious day the Downhill was set, a mile from the University Hut. The course started from the top of the Rodway Range, running down onto Tarn Shelf. This gave about 400 vertical feet, and it was decided to hold the race in two sections but, owing to an accident to Dave Wilson, this was reduced to one run. The snow was wet and sticky, and with more rocks putting in their appearance than a month previously, it was with trepidation that the race was declared on, even with two controls. Wilson, the first competitor, just mis-cued slightly at the last control and, not regaining his balance, fell near the finish and met a rock. When the race was again started some hours later, a newcomer to racing, 16-year-old Mark Wolfhagen, won the event with a very smooth run; the fact that he did not once falter in this tricky snow gave him the race. Results were as follows:

1. M. Wolfhagen 2 mins. 24 3/5 secs.
2. H. von See 2 mins. 25 1/5 secs.
3. R. Tilley 2 mins. 26 2/5 secs.

Women's Downhill.—The Women's Downhill was run on a section of the men's course and in similarly poor snow. After many warnings against a straight schuss, they all ran the course nicely under control. Results were as follows:

1. M. Gibson 44 secs.
2. E. Masterman 47 secs.
3. N. Wolfhagen 59 secs.

Making the most of the good weather it was decided to hold the jump the same day. Owing to the lack of snow a jumping hill was hard to find. Undeterred, and despite an acute shortage of judges, a jump was built with snow that would not pack, with an out-run on to a possibly frozen Mackenzie Tarn. Tilley gave the best display, winning with three confident jumps of 12 and 11 metres. Results were as follows:

| Place | Name | Distance | Pts. |
|-------|------|----------|------|
| 1 | R. Tilley | 11 & 12 m. | 142.2 |
| 2 | M. Wolfhagen | 9 & 9 m. | 125.4 |
| 3 | R. Naylor | 9 & 11 m. | 120.4 |

The Tarn proved to be sufficiently frozen, but was it wet when a jumper fell on the out-run! The Langlauf was held next day in our familiar August cloak of near-blizzard. However, there were still breaks in the clouds sufficient to give glimpses of the surrounding country. The course started along the Tarn Shelf, which, as the name implies, is a shelf or ledge on the mountain side, varying in width from 100 yards to half a mile. The Rodway Range rises steeply 500-600 ft. on one side and there are mighty rock precipices falling some 700 ft.
to Lake Seal on the other. Along the four miles this shelf extends is a series of large and small tarns and even a lake or two, all still frozen and enclosed by slopes wooded with Pencil Pines and gnarled and twisted King Billy Pines. The course ran over seven of these tarns to Lake Newdigate, where, in a hut of the same name, the far control was stationed. This is a rest hut for the Ski Club of Tasmania when they are en route to their headquarters at Twilight Tarn, a mile further on. Twilight Tarn is also provisioned these days by manpower. The energy and enthusiasm you meet on every hand among those who go to the mountains on this small island are, I am sure, in export quantities.

After turning Newdigate Hut the Shelf was left behind and the long climb to the top of the Rodway Range begun. Once the summit was gained this was followed to the finish. The wild and unexplored country seen through frames of broken mist from this section beggars description, unexplored because of the dense rain forests which cover its rugged ranges and peaks. King William Range and the beautifully formed Frenchman's Cap out towards the West Coast have a particularly "come hither" appeal. Maybe, next trip! Along the top the racers fought the awakening blizzard, which made the track hard to follow, but eventually they did arrive at the finish in the following order:

1. R. Tilley 1 hr. 05 mins.
2. R. Naylor 1 hr. 10 mins.
3. E. Mills 1 hr. 13 mins.

The Tasmanian standard of skiing is improving in leaps and bounds. Naylor and Tilley, with only hard practice and self-criticism plus a fleeting trip to the mainland to spur them on, have shown enormous improvement over the last twelve months. M. Wolfhagen, the youngest of them all, will be a three-event, and, when age permits, a four-event man to watch in the near future. D. Wilson had bad luck this year, but has also improved. These boys are taking the place of the Tasmanians we know so well from past years and should carry on the high standards and traditions set by their earlier State representatives.

While the runs do not have the length we enjoy on the Mainland, there is so much beauty in the National Park that it is one place where a skier can really enjoy a touring holiday. The snow generally is of a soft wet nature and tests your technique, but, as usual with snow like this, it is a lot easier to handle on the steeper slopes, and these are in plentiful supply and are crying out to be used. The rocks that abound everywhere, rather like Buffalo, are something of a mental hazard, and it takes a while to become accustomed to them. Thus ended a most enjoyable trip for me, a newcomer to these regions. The congenial hut life, the spontaneous welcome that is so flattering, the new and world-famous scenery, all make a climax in snowland enjoyment. I can recommend the dose to anyone on the mainland in need of a change of snow scenery among a community of snow folk who will do their utmost to help you have a good time. I am certainly going back.

---

**The Northern Tasmanian Alpine Club**

One of the most notable features of the 1947 winter on Ben Lomond was the excellent week-end weather which prevailed throughout the season from June to October. The snowfalls, most of which occurred during mid-week periods, provided plenty of snow of a generally good quality, and the exceptionally good week-end weather enabled members, who visit the mountain from Friday evening to Sunday, to have more skiing hours than are obtainable in most seasons.
When the first snow fell, Club members were working against time to finish further additions to the Summit Hut. The full programme was not completed, but a new men's bunk room accommodating fifteen was made ready for use, a work room was sufficiently advanced to be used for storage purposes, a new internal water supply was provided and, finally, despite the raging of a blizzard, a new roof window was fitted in the Women's room. The work is being continued this year and it is hoped that, in addition to completing what has already been commenced, the Summit Hut will this winter have added to it a new food and fuel store and a motor house containing an electric lighting plant.

The Club conducted a very successful series of competitive events during the season, the results being:

Jump Championship: R. R. Vial 1, E. D. Mills 2, H. L. von See 3.
Langlauf Championship: E. D. Mills 1, R. F. Tilley 2.
Slalom Championship: R. F. Tilley 1, R. W. Naylor 2, H. L. von See 3.
Downhill Championship: R. W. Naylor 1, H. L. von See 2, S. R. Tilley Jnr. 3.
Slalom Handicap: T. Giles 1, S. Turnbull 2, S. Anderson 3.
Downhill Handicap: S. V. Tilley 1, R. R. Vial 2, C. French 3.
Women's Downhill and Slalom: A. Godfrey-Smith 1, D. Rolph 2.
Novice Race: C. French 1, D. Smithies 2.

In the Tasmanian Championships held at National Park, Club members obtained good results and the Club team succeeded in retaining the Inter-Club Trophy. Two of the Club's members, R. W. Naylor and R. F. Tilley, competed very successfully in the Victorian Open Championships at Mt. Hotham: in the Slalom Naylor was first and Tilley fifth; in the Downhill Tilley was second and Naylor third; and in the combined result Naylor was second and Tilley third.

In 1947 Ben Lomond was proclaimed a Reserve under the Scenery Preservation Act, and the Board appointed to administer the area is taking a great interest in the future development of the mountain. It is expected that public accommodation for both day visitors and those making longer visits will soon be available at the tree-line, and that the long hoped-for road might soon be provided. The building of the two or three miles of road required will bring skiing on Ben Lomond within 1½ hours of Launceston and 4 hours of Melbourne, and will make possible the development that this fine snow-field deserves.

Office Bearers 1947-48. President: C. K. Stackhouse. Vice-Presidents: E. D. Mills and F. Smithies. Honorary Secretary: R. G. Hall. Honorary Treasurer: Stanley V. Tilley. Committee: G. C. McKinlay, W. F. Mitchell, E. H. Smith, H. L. von See and R. Vial.

---

**The Alpine Club of Southern Tasmania**

THE close of the 1947 season completed the second year of the Club's existence, in which the Club's position has been further consolidated by the hard work and enthusiasm of members. Continued work on the Club's chalet at Eagle Tarn has increased comfort, while the financial position has improved considerably. Snow conditions in the Mt. Field Ranges were excellent. The Club chalet was used to capacity during the winter and members made many ski tours among the snow mountains of the National Park. The Club team competed in the Tasmanian Ski Championships at Mt. Mawson, but were outclassed. Efforts will be made to build up a younger team for future events.

[Photograph: Newdigate Pass, by P. Canning]

In December, when the snow had receded to higher elevations, members turned their attention to cutting a ski track up the heavily wooded east face of Mt. Mawson, from Eagle Tarn (3600 feet) to the University Club Hut at 4100 feet.
This involves a considerable amount of work, as it is necessary to clear a path through dense bush and huge boulders. When completed, the track will rise 500 vertical feet in about half a mile. It will be an amenity appreciated by all ski runners, as the existing route to Mt. Mawson up the "Golden Stairs" can be most unpleasant under icy conditions, not to mention the danger of hitting a tree when descending. Members look forward to fast and safe wood-running this coming season, now that the hazards are being removed. Members are grateful to Mr. Fred Wilkins, one of the leading professional ski instructors in Eastern Canada, for tuition during the year, which has resulted in a considerable improvement in the standard of skiing.
Politica e Religione. Annuario di Teologia Politica | Yearbook of Political Theology, XI (2021-2022), Università degli Studi di Trento. ISSN (online) 2612-6478. Direttore responsabile | Editor-in-Chief: Michele Nicoletti (Trento).

MODELS OF SOVEREIGNTY AND CIVIL RELIGIONS
A possible dialogue between the writings of Erik Peterson and Eastern Orthodox theologians

Ana Petrache

Abstract. My paper focuses on Erik Peterson's contribution to the classical debate on political theology, especially on his description of models of sovereignty: the divine monarchy model, the King of Persia model, and the angels of the nations model, which form the basis of Eusebian civil theology. Starting from these models, I then suggest a possible dialogue between Erik Peterson's writings and Eastern Orthodox theology. Peterson's focus on eschatology, ecclesiology, liturgy, and the Church Fathers makes his work relevant for the Orthodox tradition. In addition, his work critically confronts the frameworks of imperialism and nationalism, which represent the principal challenge for the Orthodox space.
To a limited extent this discussion has already started, for instance in the work of Cyril Hovorun, Pantelis Kalaitzidis, or Christos Yannaras. However, a closer look into Peterson's theological reflections, especially his deconstruction of the Eusebian model of symphonia on the basis of dogmatic reasoning, deserves further consideration. A critical assessment of the way religious language is used to construct models of sovereignty – first in the Hellenistic world, then later in the Roman Empire – lies at the heart of Peterson's research. Questions of analogy and order, and how religious narratives contribute to maintaining social bonding within a community, and thus the status quo, are central aspects of his work. Hence, engaging with Peterson's ideas can provide useful insights for Orthodox theologians who critically assess the theological images and language adopted with respect to political realities.

Keywords. Erik Peterson; Eusebius of Caesarea; Civil Theology; Civil religion; Sovereignty

1. Introduction: From Theologia civilis to Civil Religion

As a scholar and erudite, Erik Peterson\(^{1}\) – historian of Late Antiquity, New Testament exegete, and enthusiast of Christian archeology – contests Carl Schmitt's perspective on secularization. Peterson's account focuses on inverting the theory of Schmitt\(^{2}\), who assumed that "all significant concepts of the modern theory of the state are secularized theological concepts"\(^{3}\). Peterson underlines that important concepts used by early Christians, such as *martyrion*, *leitourgia*, *ekklesia*, and even *basileia* – transformed into *basileia tou theou* (Kingdom of God) – are political concepts used by Christians to construct a theological language\(^{4}\). His account of the original usage of these concepts is subversive: adopting political images while attributing a different spiritual meaning to them means that original Christian language expresses a counter-political theology. Christ is portrayed as a counter-image to the emperor to suggest that a different way of life is possible: a life in which the eschatological hope for the Kingdom contrasts with all empires of this world\(^{5}\).

---

\(^{1}\) See the monumental work of B. Nichtweiß, *Erik Peterson. Neue Sicht auf Leben und Werk*, Herder, Freiburg im Breisgau 1994; G. Caronello (ed.), *Erik Peterson. La presenza teologica di un "outsider"*, Libreria Editrice Vaticana, Città del Vaticano 2012; and P. Büttgen - A. Rauwel (eds.), *Théologie politique et sciences sociales*, Éditions de l'EHESS, Paris 2019.

\(^{2}\) On this debate: M. Nicoletti, "Erik Peterson e Carl Schmitt. Ripensare un dibattito", in: G. Caronello (ed.), *Erik Peterson. La presenza teologica di un "outsider"*, pp. 517-537; M. Nicoletti, *Trascendenza e Potere. La Teologia Politica di Carl Schmitt*, Istituto di Scienze Religiose in Trento, Brescia 1990, pp. 415-427; M. Rizzi, "'Nel frattempo…' Osservazioni diverse su genesi e vicenda del 'Monotheismus als politisches Problem' di Erik Peterson", in: P. Bettiolo - G. Filoramo (eds.), *Il Dio mortale. Teologie politiche tra antico e contemporaneo*, Morcelliana, Brescia 2002, pp. 397-423; B. Nichtweiß, "Vedere il nuovo attraverso la rottura. Quattro miniature come introduzione al pensiero di Erik Peterson", in: G. Caronello (ed.), *Erik Peterson. La presenza teologica di un "outsider"*, pp. 71-101.

\(^{3}\) C. Schmitt, *Political Theology. Four Chapters on the Concept of Sovereignty*, ed. and trans. by G. Schwab, University of Chicago Press, Chicago 2005, p. 5.

\(^{4}\) M. Pancheri, *Pensare "ai margini".
Escatologia, ecclesiologia e politica nell'itinerario di Erik Peterson*, Università degli Studi di Trento, Trento 2013, pp. 274-279; see also B. Nichtweiß, *Erik Peterson. Neue Sicht auf Leben und Werk*, pp. 793, 795.

\(^{5}\) E. Peterson, "Christ as Imperator", in: id., *Theological Tractates*, ed. and trans. by M.J. Hollerich, Stanford University Press, Stanford 2011, pp. 143-150 (147).

Therefore, as witnesses to another way of life, as witnesses of Christ and of his eschatological promises, Christians cannot engage in the cult of the emperor\(^{6}\). Yet both Schmitt and Peterson agree on the analogy between the religious and the political. It is this analogy that served as a basis for what was called by the ancients *theologia civilis* and by modern scholars civil religion. The ancient sense goes back as far as Varro (116–27 B.C.), who was commented on by Saint Augustine\(^{7}\), and it refers to the public worship of the gods of the nations which was ensured by all ancient cities. One of the main functions of this public service of the gods of the cities was to offer social cohesion. Based on this function of unifying the community, the terms "political religions" and "secular religions", shaped in modern times respectively by Eric Voegelin\(^{8}\) and Raymond Aron\(^{9}\), point to political organization. Characteristically, these terms imply the replacement of the redemptive narrative of traditional religion by modern ideological substitutes, which develop their own redemptive vision. Still, they also imply continuity with ancient civil theology, as the work of Voegelin makes clear.

As a scholar of Late Antiquity, Peterson regards *theologia civilis* as the *forma mentis* of Hellenistic thinkers. As a theologian, the same Peterson argues that Christianity reshaped the standard ancient understanding of what a religion is and of its function. Indeed, early Christian authors distinguished their new faith from the old form of religiosity, and one of the main aspects of this new faith aimed at criticizing the political dimension of religion. Thus, Peterson invites a reflection on how Hellenistic religious narratives contributed to constructing models of sovereignty.

---

\(^{6}\) An essential article for understanding Peterson's alternative to political theology is E. Peterson, "Witness to the Truth", in: id., *Theological Tractates*, pp. 151-183. See also the introduction to the French translation by D. Rance in id., *Témoin de la vérité*, Ad Solem, Genève 2015, pp. 7-74.

\(^{7}\) St. Augustine, *The City of God*, Hendrickson Publishers, Peabody 2009, Book VI, chapter 5.

\(^{8}\) E. Voegelin, *Die politischen Religionen*, Bermann-Fischer Verlag, Stockholm 1939. See also E. Gentile, *Le religioni della politica. Fra democrazie e totalitarismi*, Laterza, Roma-Bari 2007.

\(^{9}\) R. Aron, *The Opium of the Intellectuals*, Doubleday, New York 1957, p. 109, p. 286. It is important to emphasize the contribution to this debate of the Russian theologian N. Berdyaev, who already in 1935 noticed an opposition and an analogy between Marxism and Christianity; see N. Berdyaev, "Marxism and the Conception of Personality", *Christendom* 5, 2(1935), pp. 251-262.
Although Christ's statement "My Kingdom is not of this world" (John 18:36) should imply the eschatological proviso\(^{10}\), some Christian authors, like Eusebius, continued using these ancient models of sovereignty, and thereby even distorted Christian teachings to fit better into the inherited sovereignty framework. Reading this debate in the context of Schmitt's adherence to National Socialism, one can recognize some convergences between the ancient usage of religion as *theologia civilis* and the modern usage as civil religion, as both rest on an instrumentalization and subordination of religious piety to the political project.

The aim of this article is to open a possible dialogue between the writings of Erik Peterson and Eastern Orthodox theology. To a limited extent this discussion has already started, as I will show in the second part of the article. Additionally, a closer look into Peterson's theological reflections, especially his deconstruction of the Eusebian sovereignty model on the basis of dogmatic reasoning extracted from the Church Fathers, deserves further consideration. It might seem counterintuitive that a Protestant converted to Catholicism has something to add to the current discussion in Orthodox theology, but the parallel between Erik Peterson's criticism of the *Deutsche Christen*\(^{11}\), who supported National Socialism, and today's criticism of the Russian-world ideology offered by Eastern Orthodox theologians is striking. What is more, due to the 17th-century *cuius regio eius religio* norm, Protestants developed a territorial imagination about faith\(^{12}\), similar in practice to the Orthodox idea of canonical territory\(^{13}\). Therefore, both run the risk of sacralizing local realities and conflating national and religious identity.

Furthermore, as a young German man, Peterson fought for some months in World War One. This experience inspired him to write a beautiful pacifist text\(^{14}\). Later, in the 1930s, he faced the nationalism, racism, and imperialism of his home country. Reading between the lines of his exegesis of early Christian texts, one can see the premise of a theology of resistance rooted in the eschatological expectation\(^{15}\). Speaking about how the questions of sovereignty and of the diversity of nations and languages were treated in Late Antiquity, he notes: "the way these problems have been treated in the past can offer us a new way to address current problems"\(^{16}\).

---

\(^{10}\) See G. Uribarri, "La riserva escatologica, genesi del concetto in Erik Peterson", *PATH* 12(2013), pp. 273-313 (consulted online 15.01.2022, [https://repositorio.comillas.edu/rest/bitstreams/24954/retrieve](https://repositorio.comillas.edu/rest/bitstreams/24954/retrieve)).

\(^{11}\) See on this N. Tenaillon, "Peterson et le recours à la théologie politique", *Laval théologique et philosophique* 63, 2(2007), pp. 245-257.

\(^{12}\) L. Field, "Nota editoriale di Erik Peterson 'Il Problema del nazionalismo nel cristianesimo antico'", in: E. Peterson, *Chiesa antica, giudaismo e gnosi*, Paideia Editrice, Brescia 1959, new edition 2021, p. 190.

\(^{13}\) J. Oeldemann, "The Concept of Canonical Territory in the Russian Orthodox Church", in: T. Bremer (ed.), *Religion and the Conceptual Boundary in Central and Eastern Europe*, Palgrave Macmillan, London 2008.
The following part of the study will explore some lines of argumentation from *Monotheism as a Political Problem*\(^{17}\) (1935) and from *The Problem of Nationalism in Early Christianity* (1951), addressing the question of which religious images are used to illustrate plurality and unity. Divine monarchy, correlating with the universal empire, and the angels of the nations, correlating with the expression of ethnic particularity, appear as models of sovereignty. They express an analogy between religious and political language. Both the model of divine monarchy and that of the angels of the nations have been used to support and justify a theological foundation of political order. Hence, they represent examples of *theologia civilis*.

2. The Political Theology of Unity under the Shadow of the Empire

The well-known article *Monotheism* presents the historical and theo-philosophical evolution of a sovereignty model: a political order built on a metaphysical foundation. This model evolved over the centuries, from Aristotle to Eusebius of Caesarea. It developed across various cultural and religious backgrounds, always trying to accommodate the desire for unity as manifested through religion and politics. Progressing from the Homeric-Aristotelian and the pseudo-Aristotelian-Hellenistic models, through the Jewish Philonian model, to Celsus' polytheist model, and culminating in the Eusebian adaptation of Christianity, all of these stages were dictated by the intent of unification and universalism. Peterson's analysis is so rich in detail that, as Borges would say, only a map on the same scale would suffice. Therefore, my article will point only towards the direction and the purpose of his work.

Peterson's effort was to demonstrate that none of these models is compatible with Christianity. His work consists in rejecting any attempt to "transfer pagan theology's secular monarchy concept to the Trinity"\(^{18}\). According to G. Caronello, the relevance of Peterson's account comes from his description of monotheism, opposed to trinitarian theology, as the civil theology of the present time\(^{19}\): a construction promoted by the Enlightenment but foreign to the trinitarian Christian narrative. This was possible because the emerging Constantinian church developed a *theologia civilis* not faithful enough to Christian teachings\(^{20}\).

---

\(^{14}\) E. Peterson, "Le Ciel de l'aumônier militaire", in: id., *En marge de la théologie*, Cerf, Paris 2015, pp. 85-89.

\(^{15}\) See A. Petrache, "Eschaton's Witness in the Work of Erik Peterson", in: S. van Erp - J. Haers (eds.), *"Theos" and "Polis". Political Theology as Discernment*, Peeters, Leuven (forthcoming), pp. 329-343.

\(^{16}\) E. Peterson, "Il problema del nazionalismo nel cristianesimo antico", in: id., *Chiesa antica, giudaismo e gnosi*, Paideia Editrice, Brescia 2021, pp. 197-209 (209).

\(^{17}\) E. Peterson, "Der Monotheismus als politisches Problem. Ein Beitrag zur Geschichte der politischen Theologie im Imperium Romanum (1935)", in: id., *Theologische Traktate*, 1951; id., "Monotheism as a political problem: A contribution to the history of political theology in the Roman Empire", in: id., *Theological Tractates*, pp. 68-105.
Scholarly contributions emphasize the historical limits of Peterson's account of monotheism\(^{21}\). Nevertheless, what his studies point out concerning *theologia civilis* is still relevant today, as I will demonstrate in the second part of my study. The contemporary relevance of Peterson's work derives not from the historical account, but from his argument that power extracts legitimation from a nonpolitical sphere – from religious, mythological, or metaphysical discourses – producing narratives about how to reconcile the plurality of principles acting in the universe. Peterson tackles the question of the foundation of power, describing *theologia civilis* as serving as a ground for political systems. Several publications\(^{22}\) engage with his *Monotheism* because of this contribution, explaining the metaphysical connection between the religious and the political realm.

One of Peterson's strategies is to identify all the ancient authors who quote the *Iliad* verse: "Beings do not want to be governed badly; the rule of many is not good; let one be ruler." In Schmittian terms, this verse is well chosen, since it points towards an enemy: the plurality of sovereigns, or the plurality of principles, and its chaotic outcome. Those who used the Homeric rhetoric contributed to the monotheist sovereign model. Indeed, this model of sovereignty stresses single rule under the category of a divine monarchy. This means that it is at once a political and a theological model. Peterson shows that the original Aristotelian model, based on the hegemony of a single principle, *mia arche*, is "a political metaphor that transcends a merely aesthetic one"\(^{23}\). It is a choice for metaphysical unity.

This monarchical imagination about God remained dominant for centuries, but there are shifts in the way the royal metaphor is presented. Within the treatise *De Mundo*, "the governance of God is imagined after the manner of the Persian Great King"\(^{24}\). Just as the Persian king ruled with the help of his satraps, intermediary principles between God and human beings are introduced here: polytheists interpreted them as inferior deities, whereas Jews identified them as angels. This is essential, because it permits an adaptation of the polytheistic view to the model of the one sovereign. This can be exemplified with Aelius Aristides and his image of the lordship of Zeus, and also with Celsus and his "highest God who permits the legitimacy of traditional religion of diverse people", who are forced to fit into the monotheist construction of the universe.

---

\(^{18}\) E. Peterson, *Monotheism as a political problem*, p. 84.

\(^{19}\) G. Caronello, "La critica del monoteismo nel primo Peterson", in: P. Bettiolo - G. Filoramo (eds.), *Il Dio mortale. Teologie politiche tra antico e contemporaneo*, Morcelliana, Brescia 2002, p. 353.

\(^{20}\) *Ibid.*, p. 354.

\(^{21}\) M. Rizzi, "Nel frattempo…", pp. 397-423.

\(^{22}\) V. Delecroix, *Apocalypse du politique*, Desclée de Brouwer, Paris 2016; G. Geréby, "Political theology versus theological politics: Erik Peterson and Carl Schmitt", *New German Critique* 35, 3(2008), pp. 7-33; M. Borghesi, *Critica della teologia politica. Da Agostino a Peterson: la fine dell'era costantiniana*, Marietti 1820, Bologna 2019.

\(^{23}\) E. Peterson, *Monotheism as a political problem*, p. 69.

\(^{24}\) *Ibid.*, p. 70.
Thus, Peterson states: "Time and time again it is the same idea: *Le roi règne mais ne gouverne pas*; the gods are kings, satraps, viceroys, friends of the king or officials; actual Imperium belongs to the highest God, who is compared to the Roman Emperor and to the Persian kings"\(^{25}\). Paradoxically, polytheist religion is forced to enter and support this monotheist model of sovereignty. Furthermore, this Hellenistic adaptation of the model will be the basis of the failed attempt to force Christianity, too, into this civil-theological model.

The polytheist version reveals the political dimension even more clearly. Therein, many gods participate in the sovereignty of the one God. However, they do not overshadow the one God; as subordinate beings, they rather confirm his role as sovereign. Something similar applies to the political dimension. Within the concept of the empire, the plurality of subject nationalities does not oppose the imperial dimension but instead confirms its rule. The singular rule of the empire achieves an accommodation of the variety of nations present therein: the Hellenistic, and thereafter the Roman Empire, are examples of the triumph of the *Iliad*'s vision. It is only Israel, because of its radical monotheism and its idea of one people chosen among all nations, that cannot fit into this model. Therefore, it became an isolated element. Philo's version of divine monarchy is rooted in the Hellenistic model discussed above; nevertheless, because of his exclusive monotheism, his approach focuses more on the special covenant of the Jewish people.

Peterson's account of Philo is ambiguous\(^{26}\). Apparently, his main concern is to prove the continuity with the peripatetic model without entering into details on the specificity of the covenant with the people of God. One can only speculate whether this is due to the political context. The fact remains that Peterson passes from Philo's model as a "politico-theological concept, intended to justify the religious superiority of the Jewish people and their mission to paganism"\(^{27}\) to the Christian apologetic usage of the same scheme to "justify the superiority of the people of God who assemble in the church of Christ"\(^{28}\). The text seems to disagree with both these Jewish and Christian usages of the religious dimension to justify a political position. Additionally, a hint is offered by Peterson's quotation of *On the Confusion of Tongues*, a treatise in which Philo uses Platonic images to point out that God is surrounded by intermediary powers who help him to govern the world: "Let us then consider what this is: God, being one, has about him an unspeakable number of powers, all of which are defenders and preservers of everything that is created"\(^{29}\). Philo calls these intermediate powers angels, or daimons.

---

\(^{25}\) *Ibid.*, p. 83; for examples see endnotes 86 to 90.

\(^{26}\) Agamben accused Peterson of antisemitism: G. Agamben, *The Kingdom and the Glory. For a Theological Genealogy of Economy and Government*, Stanford University Press, Stanford 2011, pp. 14-16; however, this accusation has been deconstructed in C. Schmidt, "The Return of the Katechon: Giorgio Agamben contra Erik Peterson", *The Journal of Religion* 94, 2(2014), pp. 182-203.

\(^{27}\) E. Peterson, *Monotheism as a political problem*, p. 78.

\(^{28}\) *Ibidem*.
According to Philo, God as an architect needs these powers to act in the universe. This question of angels is not developed any further in *Monotheism*; however, it will be developed after the end of the war, in the 1951 article focusing on the relationship between the people of God and the nations. But before addressing the question of the angels of the nations, one more step is needed: presenting the Hellenistic version of this model found in the writings of Celsus.

In the hierarchy of beings developed by Celsus there are no angels. Nevertheless, the Platonic references allow him to speak of gods of nations, gods of *éthnē* and *poleis* (nations and cities). These gods of nations are caretakers and geniuses of the nations. In modern terms, Celsus' theory is that sovereignty is compatible with subsidiarity: its undivided supremacy is compatible with governing and administrative powers. What is the place of Judaism\(^{30}\) in his model? In endnote 112 Peterson notices that Celsus "has words of recognition for the national character of the Jewish religion. Insofar as Jews adhere to their national worship, they do not act any different from other people". What Peterson is not saying here is that Celsus' strategy of presenting the Jewish heritage as any other national heritage is a way of levelling the specificity of Judaism, and therefore of neutralizing the monotheist claim. For Celsus, the Jewish God is like any other god. Nevertheless, in the 1951 article, Peterson will state that rigorous monotheists cannot accept this Hellenistic model reproduced by Celsus\(^{31}\).

This angelic-satrapic model of sovereignty represents a twist on the unitary peripatetic model of sovereignty, and this twisted model will be used to support and justify the Roman Empire. Marco Rizzi's reading of the Celsus-Origen debate suggested that Peterson's perspective can be summed up as the impossibility of reducing *ad unum* human experiences in the political realm\(^{32}\).

---

\(^{29}\) Philo, *On the Confusion of Tongues*, § 171 (consulted online 10.01.2023, [http://www.earlychristianwritings.com/yonge/book15.html](http://www.earlychristianwritings.com/yonge/book15.html)). Here is the full passage: "In the first place, then, we must say this, that there is no existing being equal in honor to God, but there is one only ruler and governor and king, to whom alone it is granted to govern and to arrange the universe. For the verse – 'A multitude of kings is never good, let there one sovereign, one sole monarch be' (Iliad 2.204) – is not more justly said with respect to cities and men than with respect to the world and to God; for it is clear from the necessity of things that there must be one creator, and one father, and one master of the one universe. § XXXIV. This point then being thus granted, it is necessary to connect with it also what follows, so as to adapt it properly. Let us then consider what this is: God, being one, has about him an unspeakable number of powers, all of which are defenders and preservers of everything that is created" (§§ 170-171).

\(^{30}\) See on this M. Rizzi, "Gli angeli delle nazioni nel dibattito tra Celso e Origene", *Politica e Religione* (2008), monographic issue: *Angeli delle Nazioni*, pp. 94-105.

\(^{31}\) E. Peterson, "Das Problem des Nationalismus im alten Christentum", in: id., *Frühkirche, Judentum und Gnosis*, Herder, Freiburg im Breisgau 1959, pp. 51-63.
Although his interpretation of the work of Origen as non-political is debatable\(^{33}\), pointing towards the Celsus-Origen debate articulates a theological exit from the civil-theological model, and therefore liberates Christians from national concerns. What is more, Celsus presents a serious political problem: how can Christians, who refuse the given order of society, be trusted as citizens? Christians, who belong to a different kind of polity, one not recognized by the imperial system and not rooted in national identity, represent a *stasis*\(^{34}\) for the city. *Stasis* represents a division within the sovereignty model, and as a rift it represents the maximum danger for the stability of power – even though the concept cannot be discussed in depth here, this basic notion should be kept in mind.

In his answer to Celsus, Origen offers an eschatological prophecy: "national differences will cease on the last day". He thus opposes the political model offered by Celsus with a future model of unity. In Origen's approach, not only will national differences cease at the *eschaton*; they are already being smoothed out in the present time. Smoothing ethnic distinctions is the true revolution of Christianity in the political realm. The Hellenistic attempt to neutralize national differences by granting them the same importance within the empire is contrasted with the Christian way of neutralizing national difference by proposing a new way of being and belonging to the *ekklesial politeia*. In other words, Origen's focus on eschatology in his answer acknowledges that Christians are dangerous for the stability of the city, yet not in the political way expected by Celsus, but through their expectation that the structures of power – be they local or universal – will one day cease under the shadow of the only Kingdom.

3. The Political Theology of Diversity under the Shadow of Nationalism

The same question of nationalism is treated in a different conceptual language in the 1951\(^{35}\) article *Das Problem des Nationalismus im alten Christentum*. As Senellart\(^{36}\) points out, Peterson drafted four versions of the same article. A few months after the first publication, another, shortened variant appeared in *Hochland*\(^{37}\). The third version dates to 1952, while the last version, extended in its notes, dates to 1959\(^{38}\). In what follows I refer to the Italian translation of this last version\(^{39}\).

Senellart's analysis, which also serves as an introduction to the reedition of the French text, focuses on the continuity between the Jewish and the Christian images of the angels of nations. It brings Peterson's ideas into discussion with the work of Jean Daniélou. In Peterson, angels ensure the celebration of an eternal liturgy in heaven; as such they have a role of mediation, since the church on earth participates in the cult of heaven. A discussion of the functions of angels is both an ecclesiological-liturgical discussion and an eschatological one, and it allows the church to be defined by this participation in the cult of the heavenly Jerusalem.

---

\(^{32}\) M. Rizzi, "Nel frattempo…", cit., p. 415.

\(^{33}\) Id., "Gli angeli delle nazioni", cit., and "Nel frattempo…", cit.

\(^{34}\) On the concept of *stasis* see: L. Pellarin, "Erik Peterson e la στάσις: una legittimazione sovversiva della teologia politica", *Humanitas* 76, 3(2021), pp. 445-477.
Moreover, by this participation in the *ekklesia*, Christians apply for the citizenship of Heaven: "They have drawn near to the city of the living God, the heavenly Jerusalem, and to countless angels in solemn assembly and to the ekklesia of the firstborn, who are enrolled in heaven as citizens"\(^{40}\). This language of citizenship and assembly is not just metaphorical. It also implies that Christians do not totally belong to the earthly *polis*, since "they have no lasting city on earth" (Heb 13:14)\(^{41}\). This relativization of earthly citizenship is essential for Peterson: the liturgical function of the church expresses an eschatological reserve. Although the Church does not replace the political community, it points towards an alternative way to understand the idea of a universal community. By overcoming political identities mostly expressed through ethnic distinction, Christianity presented itself as a new model of universality. Indeed, the Church appears as a new *oikumene*, and because of this universalist potentiality it came to be confused with the Empire. Belonging to the Church came to substitute for belonging to a certain nation.

In *Das Problem des Nationalismus*, the tension between the plurality of principles acting in the universe and the one sovereign is expressed in the language of angels who serve Christ yet can still turn away from their service. According to Peterson, there is an identification between the modern phenomenon of nationalism and the ancient concept of the angels of nations\(^{42}\). By nationalism, ancient authors understood the commonality of language, laws, religion, and customs of a given community. Often this community points to a common ancestor living in the same land. Angels are to be understood as spiritual principles and intermediate powers who administer the world. They are sent by God, but their power can be corrupted. The early Christians' idea of the angels of nations derives from Judaism, as pointed out in Peterson's account of Philo and of the Greek translation of Deuteronomy 32:8-9. Nevertheless, this Jewish idea underwent transformation in the Hellenistic period and became influenced by the image of the intermediate powers of the satraps helping the Persian King to govern. According to Peterson, during the Hellenistic period this theory played an ideological role in the empire of Alexander, as it tried to neutralize the religious and national differences within the empire. Peterson's interpretation that this theory was shaped to overcome possible conflicts in the empire has a polemical stance, since it implies that it emerged as a rhetorical strategy.

---

\(^{35}\) E. Peterson, "Das Problem des Nationalismus im alten Christentum", *Theologische Zeitschrift* 7(1951), pp. 81-91.

\(^{36}\) M. Senellart, "À propos des anges des nations", in: P. Büttgen - A. Rauwel (eds.), *Théologie politique et sciences sociales. Autour d'Érik Peterson*, Éditions de l'EHESS, Paris 2019, p. 194.

\(^{37}\) E. Peterson, "Das Problem des Nationalismus im alten Christentum", *Hochland* 44(1951-1952), pp. 216-223.

\(^{38}\) Id., "Das Problem des Nationalismus im alten Christentum", in: id., *Frühkirche, Judentum und Gnosis*, pp. 51-63.

\(^{39}\) Id., *Il problema del nazionalismo*, cit.

\(^{40}\) Id., "Book of Angels", in: id., *Theological Tractates*, p. 107.

\(^{41}\) *Ibidem*.

\(^{42}\) Id., *Il problema del nazionalismo*, cit., p. 198.
As a mixture of the pagan idea of national gods and the Jewish idea of angels\(^{43}\), the metaphor of the angels of nations represents something like a spiritual principle which organizes a given community. In the line of Origen, Peterson accepted the idea of the angels of nations as spiritual principles, linking it with the idea of a soul and spirit of a given nation. However, after the coming of Christ, the power of the angels of nations has been limited: it is only in the process of revolt against the sovereignty of Christ that these angels can be seen acting. Although they should be principles of order and unity, these spiritual principles might be corrupted by the divinization of the nations. Even though put into the shadow by the coming of Christ, the angels of nations might reappear, and thus they represent a temptation for the Church. Peterson's account of this topic is covered by Nicoletti, who states that the idea of the angels of the nations can be understood in a nationalistic sense only if it deals with fallen angels. Nicoletti concludes that “this call upon the angels of the nations suggests a reaffirmation of the limit placed on political sovereignty by the existence of a superior power”\(^{44}\). Hence, the nature of angels remains mediatory, and it is only when angels refuse to subordinate themselves to God that the “demonic nature of power”\(^{45}\) can be seen at work. In modern words, nationalism is a power, but this power has been neutralized by the hegemony of Christ. --- \(^{43}\) An essential role in shaping the concept of the “angels of nations” and connecting it with linguistic diversity is played by Philo's work *On the Confusion of Tongues*, § 170-175. Peterson quotes this text explicitly and underlines Philo's idea of angels/daimons “as servants and ministers of the ruler” (*Monotheism as Political Problem*, cit., p. 76). \(^{44}\) M. Nicoletti, “The Angels of the Nations”, in: *Theopopedia. Archiving the History of Theologico-political Concepts*, ed. by T. Faitini, F. Ghia, M. Nicoletti, University of Trento, Trento 2015, p. 10 (consulted online 21.05.2023, [http://theopopedia.lett.unitn.it/?encyclopedia=angels-of-the-nations](http://theopopedia.lett.unitn.it/?encyclopedia=angels-of-the-nations)). \(^{45}\) *Ibidem*. A slightly different approach to the angels of nations can be found in Ratzinger's commentary on the Celsus-Origen debate\(^{46}\). Therein, he explicitly rejects Peterson's interpretation that “angels of the peoples can be viewed both under the aspect of the spirit of the people and under that of the soul of the people”\(^{47}\). For Ratzinger, the angels of nations invoked in Origen cannot be good angels, and they are definitively not vehicles of salvation – at least after the coming of Christ – a view which derives from Origen's refusal of Celsus' doctrine concerning Israel. As has been shown above, Celsus reduced Israel's identity to a national one, while Origen maintained the special religious role of Israel, since Israel was the only nation which remained under the power of God and not under the power of angels. Ratzinger concluded from Origen's work that Israel was never a nation, “but rather the only part of humanity that had not fallen into the prison of national identity”\(^{48}\). Thus, the angels of nations remain usurpers and symbols of disorder, and Christ's redemptive work brought about the overcoming of the power of angels.
Furthermore, Ratzinger's exegesis on the unity of the nations, which begins with Peterson's analysis in *Das Problem des Nationalismus*, can help us better understand what is at stake in this question of overcoming nationalism in the ancient world. Ratzinger proposes two ways in which national differences can be overcome: the first is the attempt of the Roman Empire, which tried to extend its rule over all nations and, in this process, to provide unity; the second is the attempt to transcend national differences by baptism in the church. These are two *oikumenical* projects that confront each other\(^{49}\). For Peterson, this confrontation is clear, and this fundamental observation explains why he put so much effort into criticizing Eusebius of Caesarea\(^{50}\), who engages uncritically with pagan structures to legitimate the power of the emperor. --- \(^{46}\) J. Ratzinger, *The Unity of the Nations. A Vision of the Church Fathers*, trans. by B. Ramsey, Catholic University of America Press, Washington 2015, p. 44. \(^{47}\) E. Peterson, *Il problema del nazionalismo*, cit., p. 202. \(^{48}\) J. Ratzinger, *The Unity of the Nations*, cit., p. 39. \(^{49}\) *Ibi*, pp. 12-15, pp. 106-111. It is not possible here to enter into the details of Peterson's criticism of realized eschatology and of the confusion between *Pax Romana* and *Pax Christi* presented in *Monotheism*. Nevertheless, it is important for our topic to underline that the cessation of national differences is also the kernel of Peterson's argument against Eusebius. For Peterson, the Eusebian account represents the climax of the models of sovereignty, wherein “monotheism is the metaphysical corollary of the Roman Empire which dissolves nationalities”\(^{51}\). The association between the Roman Empire and the divine monarchy appeared in the context of the supposed cessation of both polytheism and polyarchy. Eusebius contrasts the hegemony of the Roman Empire with national pluralism and presents this hegemony as the implementation of the doctrine of the divine monarchy. According to Peterson, the doctrine of the divine monarchy is the foundation of the Eusebian account of politics. At the core of this model of sovereignty lies the analogy according to which Constantine imitates the divine monarchy in his earthly rule: “in his own monarchy, he imitated the Divine Monarchy, the *one* king on Earth corresponds to the *one* God, the *one* King in Heaven and the royal Nomos and Logos”\(^{52}\). But a novel element also appears here: the question of providence, or in Peterson's terms the “theological construction of history”\(^{53}\). Within this new paradigm, events in history can be read as fulfilling the will of God. --- \(^{50}\) See R. Farina, *L'impero e l'imperatore cristiano in Eusebio di Cesarea: la prima teologia politica del cristianesimo*, Pas Verlag, Zurich 1966, and S. Runciman, *The Byzantine Theocracy*, Cambridge University Press, Cambridge-New York 1977, and for the limits of this interpretation M. Hollerich, “Religion and politics in the writings of Eusebius: Reassessing the first ‘Court Theologian’”, *Church History* 59, 3(1990), pp. 309-325, and K. Wengst, *Pax Romana and the Peace of Jesus Christ*, SCM Press, London 1987. \(^{51}\) E. Peterson, *Monotheism as a political problem*, p. 94. \(^{52}\) *Ibid.*, p. 94. \(^{53}\) *Ibid.*, p. 97. Hence, as a church historian, Eusebius can choose which events allegedly fit into God's plan.
Indeed, this messianic reading of history is the most powerful legitimation mechanism imaginable. Herein, sacred and political history are bound together in a narrative which speaks of the birth of Christ within the Roman Empire. To summarize, it is a model of unity “fashioned by Christians [...] linking empire, peace, monotheism and monarchy”\(^{54}\). The logic of this model consists in selecting those events in history which endorse unity at the religious and political level. Therefore, national “sovereignty is allied intimately with polytheism” and contrasted with the universal monotheist empire. After engaging with the legacy of Eusebius in the writings of Prudentius, Ambrose, Jerome, and Orosius, Peterson points to one fundamental aspect that these polished models of sovereignty have forgotten: the Christian Trinitarian dogma cannot be reduced to the monotheistic narrative developed by Eusebius\(^{55}\). Finally, in the last pages of *Monotheism*, Peterson contrasts the Trinitarian framework with all attempts at formulating analogies with the created order, and thereby refuses monotheism as a piece of *Reichspolitik*\(^{56}\). To sum up, we have presented some steps in the construction of a model of sovereignty. In their different nuances one can distinguish the divine monarchy, the monotheist model, the King of Persia model, and the angels of the nations model. Yet all of them point towards a model of indivisible sovereignty on the political level, based on a religious image of unity. Hence, all of them represent forms of *theologia civilis* and are important, according to Peterson, because they laid the foundation for the Eusebian civil theology. The Church's habit of endorsing power comes from this historical heritage. Therefore, the first step towards becoming free from this heritage is to acknowledge its contrast with the original Christian message. This incompatibility is also presented and discussed in detail in other texts of Peterson, for example in *Witness to the Truth* or *Christ as Emperor*. By comprehending and critically reflecting upon these models of sovereignty, it becomes clear how they have been used continually in various contexts to build (un-)orthodox political theologies. --- \(^{54}\) *Ibi*, p. 96. \(^{55}\) *Ibi*, p. 102. \(^{56}\) *Ibidem*. 4. The shadow of Eusebius: Reception of Peterson's work in the Orthodox Milieu In this paper's last part, I focus on examples from Eastern Orthodoxy in which religion and politics are intertwined. Here we can see how Peterson's criticism of the reduction of Christianity to a civil religion is still valid. Countering uncritical Orthodox support for empire and nation, some critical theological voices have appeared\(^{57}\). Some of these voices, representing a theological shift, refer directly to Peterson, while other authors do not refer to Peterson explicitly yet share similar theological features. Let us consider the following account of the unity of Church and Empire given by Patriarch Anthony of Constantinople and quoted in an article by John Meyendorff. Patriarch Anthony (1389-1390, 1391-1397) was asked by the Great Prince of Moscow Basil I whether the commemoration of the Byzantine emperor's name could be dropped at the liturgical service in Russia. --- \(^{57}\) For a comprehensive approach to this topic see K. Stoeckl - I. Gabriel - A.
Papanikolaou (eds.), *Political Theologies in Orthodox Christianity: Common Challenges – Divergent Positions*, Bloomsbury, London 2017. ‘My son’, the patriarch answered, ‘you are wrong in saying: We have a church but no emperor. It is not possible for Christians to have a church and not to have an empire. Church and empire have a great unity and community, nor is it possible for them to be separated from one another’\(^{58}\). This quotation expresses exactly what Peterson named the theopolitical problem of monotheism serving as civil religion. Meyendorff's article, focusing on the connection between eschatology and social responsibility, recognizes a certain “ambiguity” in the way the “Byzantine experiment” addressed the question of harmony. He stresses that therein the Church maintained the distinction between empire and religion and did not actually believe in a realized eschatology. Yet, in my view, he is not critical enough of the issue of the empire. Speaking about Tsarist Russia, he stresses that the empire adopted a secular Western model and only a Byzantine facade. Although Meyendorff is critical of the nationalist temptation, recognizing it as a weakness of Orthodoxy, he measures the failure of nationalism against the Byzantine empire. Religious nationalism represents for him a “capitulation before a subtle form of secularism, which Byzantium with its universal idea of the empire always avoided”\(^{59}\). He fails to see that the Church's empowering of nationalism is just a stone's throw away from local states substituting for the empire; the form of the state is less important than the *symphonia* principle. Meyendorff is a renowned theologian and historian, yet his insufficient criticism of the empire signals the obsessive desire for unity with the political realm that is often uncritically present within the Orthodox milieu. This desire for unity, however, is eventually challenged by some contemporary Orthodox theologians, some of whom draw on Peterson's perspective and its eschatological categories in their works. --- \(^{58}\) E. Barker, *Social and Political Thought in Byzantium*, Clarendon Press, Oxford 1975, p. 195, quoted in: J. Meyendorff, “The Christian Gospel and Social Responsibility: The Eastern Orthodox Tradition in History”, in: F. Forrester Church - T. George (eds.), *Continuity and Discontinuity in Church History*, Brill, Leiden 1979, pp. 118-130. \(^{59}\) J. Meyendorff, “The Christian Gospel and Social Responsibility”, cit., p. 200. Just like Peterson, the Greek theologian Christos Yannaras criticizes in his book *Against Religion* the process that transforms Christianity into a *religio imperii*\(^{60}\), which offered the political unity of the empire and even a “new metaphysical understanding of politics”\(^{61}\). For Yannaras too, the turning point was represented by Constantine. He named the process begun by the emperor a “religionalization of the ecclesial event”\(^{62}\). By this expression he understands the transformation of the eucharistic community into a binding religion, ensuring political unity by common worship: Christianity comes to play the same role as the ancient civil religion of the empire, which offered worship to the gods of Rome. Critical of this transformation, Yannaras sees alienation and individualization as consequences of this religionalization, while the church is transformed into a bureaucratic institution serving the common good.
Furthermore, he emphasizes that the Orthodox Christian communities turned “the catholicity of every local church into an absolute, let themselves slide into the affirmation in practice of ethnophyletism, [...] and reconciling themselves to the role of a state religion”\(^{63}\). This means that it is not the national or the imperial form as such that is problematic for Yannaras, but any attempt of the Church to legitimize a political order, and therefore to reduce Christianity to a civil religion. While Yannaras himself does not refer to Peterson's works, there is a scholarly attempt to discuss Peterson's ecclesiology along with that of Yannaras. An essay by Pavlo Smytsnyuk\(^{64}\) elaborates how both theologians define the church in relationship to the *polis*. This discussion points towards the notion of civil religion and how both authors have criticized the Church for adopting political aims. --- \(^{60}\) C. Yannaras, *Against Religion*, Holy Cross Orthodox Press, Brookline 2013, pp. 135-144; the Greek version is from 2006. On Yannaras see also P. Smytsnyuk, “The Politicization of God: Soloviev, Clément and Yannaras on the Theological Importance of Atheism”, *ET-Studies* 13, 2(2022), pp. 265-288. \(^{61}\) C. Yannaras, *Against Religion*, p. 138. \(^{62}\) *Ibi*, p. 139. \(^{63}\) *Ibi*, p. 141. \(^{64}\) P. Smytsnyuk, “A Tortuous Boundary: Polis, Civil Religion, and the Distinction between the Sacred and Profane”, in: A. Bodrov - S. M. Garrett (eds.), *Theology and the Political*, Brill, Leiden 2020, pp. 106-127. Smytsnyuk carefully elaborates both the similarities concerning the nature of the *ekklesia* and the dissimilarities concerning the nature of the political in the works of the Greek and the German theologian. Although Yannaras' account of modernity and human rights is highly problematic\(^{65}\), his distinction between the “ecclesiastical event” and the church as institution helps advance the discussion on civil religion in the Orthodox space. In line with Peterson's work, Cyril Hovorun, a contemporary Ukrainian theologian, uses the category of civil religion to explain phenomena like the “Russian world” and Balkan-style nationalism\(^{66}\). His work explains how the churches themselves contributed to this construction in order to ensure social and political benefits. One of the key elements of this civil religion is the Byzantine model of *symphonia*, in which Church and state mutually legitimize one another. In his explanation of the notion of civil religion, Hovorun refers to the Schmitt-Peterson debate\(^{67}\). What is more, he reiterates Peterson's particular argument that a theologico-political problem can arise only by reducing Christianity to deism: “Civil --- \(^{65}\) See on this I. Kaminis, “The Reception of Human Rights in the Eastern Orthodox Theology: Challenges and Perspectives”, in: H.-P. Grosshans - P. Kalaitzidis, *Politics, Society and Culture in Orthodox Theology in a Global Age*, Brill-Schöningh, Leiden 2022 (consulted online 10.12.2022, [https://brill.com/edcollchap-oa/book/9783657793792/BP000022.xml](https://brill.com/edcollchap-oa/book/9783657793792/BP000022.xml)). \(^{66}\) C. Hovorun, “Civil Religion in the Orthodox Milieu”, in: K. Stoeckl - I. Gabriel - A. Papanikolaou (eds.), *Political Theologies in Orthodox Christianity*, pp. 253-262, in particular p. 253. Describing the different Orthodox churches in Ukraine, Hovorun uses the imperial versus the national paradigm developed by Peterson.
He claims: “The divisions between the Orthodox Churches in Ukraine exists because the divided churches associate themselves with the opposed civil religions. The Ukrainian Orthodox Church of the Moscow Patriarchate largely embraces the Russian imperial paradigm, while the Ukrainian Orthodox Church of the Patriarchate of Kiev and the Ukrainian Autocephalous Orthodox Church rely on the nation-based civil religion. It seems that a reconciliation between the Ukrainian churches is impossible until they distance themselves from the civil religions they support” (*ibid.*, p. 259). \(^{67}\) Id., *Politicization of Religion: Eastern Christian Case*, keynote lecture held at the “European Academy of Religion Conference” (Bologna, 22-25.06.2020), available online: [https://www.youtube.com/watch?v=88qNf3LE8tM&t=5s](https://www.youtube.com/watch?v=88qNf3LE8tM&t=5s) (consulted on 18.01.2022). This lecture is particularly interesting because it traces a line from the Schmitt-Peterson debate to today's attempts to use Christian ideas to legitimize political struggles. religions tend to reduce the Trinitarian or Christological languages to the Unitarian language of one powerful God”\(^{68}\). Like Peterson's, Hovorun's work also has an ecclesiological dimension. His theological project consists in criticizing the ideological narratives embedded within the structures of the Orthodox Church, explaining that the Church's enhancing of political power affects its nature as an ecclesial community. Even before the beginning of the war against Ukraine, Hovorun's works focused on deconstructing what he calls political orthodoxies\(^{69}\). Another engagement with Erik Peterson's refutation of the Schmittian understanding of political theology can be found in the works of the Greek theologian Pantelis Kalaitzidis. He notes: “Peterson suggests that the authentic political teaching of Christianity – based, as it is, on the Trinity – should actually undermine the unholy union of religion and politics, instead of providing it with theological support”\(^{70}\). While Hovorun frames his arguments in the line of the political-theology debate, Kalaitzidis uses the Petersonian reading of eschatology, focusing on the aspect of the fulfillment of prophecies. He reads the nostalgia for the Byzantine past as a form of realized eschatology; for him, theocracy and neo-nationalism are secularized forms of eschatology that drive the church to submission to the authority of the state. Furthermore, Kalaitzidis interprets Peterson's criticism of the Byzantine Empire as a criticism of political Arianism (Christ's subordination to the Father implies a monarchic vision of the universe, implying at the political level support for one king). For him, the latter's strategy of legitimacy is rooted in Eusebius' model of the theopolitical construction of a “single sovereign state”\(^{71}\). What is more, he develops Peterson's idea of the analogy between monotheism and monarchy. --- \(^{68}\) Id., “Civil Religion in the Orthodox Milieu”, cit., p. 261. \(^{69}\) Id., *Political Orthodoxies: The Unorthodoxies of the Church Coerced*, edited by A. J. Moyse - S. A. Kirkland, Fortress Press, Washington 2018. \(^{70}\) P. Kalaitzidis, *Orthodoxy and Political Theology*, World Council of Churches Publications, Geneva 2012, p. 31. \(^{71}\) *Ibid.*, p. 27.
For the moment, these critical engagements of Orthodox theologians with the Orthodox Church remain scholarly perspectives which have not yet been put into practice. However, they served as a basis for a *Declaration on the Russian World*, signed by more than 1400 Orthodox theologians\(^{72}\). This declaration contains important insights and points toward a future direction. It rejects any deification of the state and any support for Caesaropapism. Orienting Christians' eyes towards the eschatological fulfillment, the declaration condemns as non-Orthodox any narrative that replaces the Kingdom of God “with a kingdom of this world, be that Holy Rus', Sacred Byzantium, or any other earthly kingdom”\(^{73}\). The declaration also rejects and condemns, in very clear language, any form of government that “deifies the state”, as a usurpation of Christ's authority, and states that the Church's role is to build a theology of resistance against unjust political power. 5. Conclusions Christian Orthodox engagements with the arguments of Erik Peterson are important echoes of his work. They prove that his scholarly and erudite arguments have reached the core of a deep problem: power needs external legitimation, and because of this need there is always the risk of formulating civil theologies. The role of theologians in the face of this situation is to consolidate a theology of resistance against the Church's temptation to empower various political narratives or regimes. Historical and political contexts differ from the time of Peterson's --- \(^{72}\) *A Declaration on the “Russian World” (Russkii Mir) Teaching*, 13 March 2022 (consulted online 20.09.2022, [https://publicorthodoxy.org/2022/03/13/a-declaration-on-the-russian-world-russkii-mir-teaching/](https://publicorthodoxy.org/2022/03/13/a-declaration-on-the-russian-world-russkii-mir-teaching/)). \(^{73}\) *Ibidem*. article; the theological criticisms, however, still apply today. In the light of these parallels, two points remain relevant: the eschatological proviso towards any political system, and the refusal to read God's agency into any political event. Focusing on the prophetic nature of the Church, both Peterson and the cited Orthodox theologians agree that a theology of history needs to be replaced with a critical theological reflection on political actuality. It is only by deconstructing the ideological narratives embedded within the structures of the churches that one can overcome the temptation of using Christianity as a civil religion. This temptation is beautifully summarized by Peterson: “As a mystery, power in the final analysis demands to be worshipped”\(^{74}\). This sentence explains the continuity between religious and political language, but also puts them in opposition to each other. Hence, Christians are obliged to reframe the relationship between state and Church by overcoming the Byzantine dream of *symphonia*. In conclusion, what started as a polemic against Carl Schmitt, Erik Peterson's deconstruction of political theology, serves to this day to offer theological instruments for refuting abuses of Christian narratives to legitimize political power. The legacy of Peterson consists in recognizing that reducing Christianity to a civil religion is a constant temptation, a temptation subverted only by a strong eschatological reservation. As has been demonstrated in this paper, the key to the debate is the political quest for religious legitimation.
In contrast to the ancient religious function, Peterson's position implies a refusal to use Christian images of God, providence, order, and history to build political constructions such as empires and nationalism. --- \(^{74}\) E. Peterson, *Witness to the Truth*, in: id., *Theological Tractates*, p. 166.
AN ETERNAL ENIGMA: THE APPLICABLE AND CONSTRUCTABLE FICTIONS OF ELECTRONICS by M.C. Soper, MA

Some methods used in electronics are based on models that cannot, in fact, be the case. For example, currents flow continuously and yet consist of the flow of discrete objects, called electrons. Waveforms are commonly denoted by $e^{j\omega t}$; yet $e^{j\omega t} = \cos \omega t + j \sin \omega t$, where $j$ is defined by $j^2 = -1$, which is not true for any real number: $j$ is imaginary. Fictional models like this build the theory on which circuit calculations are based. These fictions are all mathematical and are used to practical effect in calculations. Recently, fictional circuit elements have also been used, like the nullor and the nullator. Other fictional elements can be joined to the system for ease of circuit calculation; for example, negative time delays and recursive components.

**Can we build it?** Evidently, theoretical use is made of circuit elements that either cannot be made at all or cannot be manufactured at our present state of knowledge, but may become possible at some future time. There are practically difficult things to make and theoretically difficult things to make: a theoretically difficult thing to make is a single, infinitely fast, perfect active switch; a practically difficult thing to make is a minute, large-value, passive inductor. Our fictional elements can: (a) make a theoretical construction of a practically difficult (or even theoretically difficult) element possible; (b) simplify complex calculations.

**Why this option?** This approach may be preferred since diagrammatic rather than mathematical methods can be used; that is, a simple understanding of a diagram together with fairly basic computational skills can replace very complex techniques in some cases. One more reason for preferring the use of fictional elements is that new circuit elements of a theoretical kind can be specified easily, whereas describing them in other ways may well be lengthy. For example, for the two equivalent circuits of the original figure (not reproduced here),

$$2z + f = -f,$$

where $f$ is thought of as an impedance, so that

$$f = -z.$$

Thus we have recursively defined a negative impedance; the recursion in this case is very simple.

At this point we can introduce another fictional topic: instead of time delays, time increments. Obviously these cannot exist, because a signal would be output before it had been input; but consider a system with this equation:

$$O(t) = A[I(t) + g\{O(t - dt)\}].$$

Assuming that $A$ and $g$ are reversible and linear:

$$A^{-1}[O(t)] = I(t) + g[O(t - dt)].$$

Since this is true at any time,

$$A^{-1}[O(t + dt)] = I(t + dt) + g[O(t)],$$

so that

$$g[O(t)] = -I(t + dt) + A^{-1}[O(t + dt)]$$

and

$$O(t) = g^{-1}\{A^{-1}[O(t + dt)] - I(t + dt)\}. \quad [1]$$

This is a present output expressed in terms of future events; it may be paradoxical, but it has been solved by Richard Feynman. Equation [1] may be obtained from a circuit in which the fictional positive time increment (pti) elements have been included (figure not reproduced). At this stage, the circuit looks merely eccentric, but consider how this type of transition may be used to reduce the complexity of circuits with active elements and delays in the feedback loop. Note also that we can write:

$$O(t - dt) = g^{-1}A^{-1}[O(t)] - g^{-1}[I(t)].$$

**Releasing practical constraints**

Consider the transfer function of the delayed-feedback arrangement (figure not reproduced),

$$A_0 = \frac{1}{b\,e^{-sT} - a\,e^{sT} + (1/A)}.$$

To check further the equivalence of Eq.
[1], here is a negative feedback form with linear amplifying and feedback elements:

$$O(t) = A\{I(t) - f[O(t - dt)]\}$$
$$= A\{I(t) - fA\{I(t - dt) - f[O(t - 2dt)]\}\}$$
$$= \sum_{r=0}^{\infty} (-1)^r A^{r+1} f^{\,r}\, I(t - r\,dt),$$

which, if $I$ is constant, gives

$$O = I \sum_{r=0}^{\infty} (-1)^r A^{r+1} f^{\,r} = \frac{AI}{1 + Af}.$$

If $I = I_c \cos(2\pi f t)$, then, with $B = f$ (feedback) and $AB < 1$, we get

$$O(t) = \sum_{r=0}^{\infty} (-1)^r A^{r+1} B^r I_c \cos[2\pi f(t - r\,dt)]$$
$$= I_c\,\frac{A\cos(2\pi f t) + A^2 B\cos[2\pi f(t + dt)]}{1 + 2AB\cos(2\pi f\,dt) + A^2 B^2}.$$

Note that this can also be written without reference to time, as an equivalence of operators:

$$O(\,) = \frac{A[1 + AB\cos(2\pi f\,dt)]\cos[2\pi f(\,)] - A^2 B\sin(2\pi f\,dt)\sin[2\pi f(\,)]}{1 + 2AB\cos(2\pi f\,dt) + A^2 B^2}.$$

Consider the reorganized equation:

$$O(t) = B^{-1}\left[I(t + dt) - A^{-1}O(t + dt)\right]$$
$$= B^{-1}\sum_{r=0}^{\infty} (-A^{-1}B^{-1})^r I_c \cos[2\pi f(t + (1 + r)\,dt)]$$
$$= \frac{B^{-1}I_c\cos[2\pi f(t + dt)] + A^{-1}B^{-2}I_c\cos(2\pi f t)}{1 + 2A^{-1}B^{-1}\cos(2\pi f\,dt) + A^{-2}B^{-2}}$$
$$= I_c\,\frac{A\cos(2\pi f t) + A^2 B\cos[2\pi f(t + dt)]}{1 + 2AB\cos(2\pi f\,dt) + A^2 B^2}.$$

As expected, the reorganized equation, which gives the present output in terms of future inputs and outputs, is just as valid: this is because the signal is one unvarying sine wave and thus conveys no information. Nonetheless, all waveforms can be made out of sine waves of a range of frequencies summed; thus the conclusion holds in general.

The main intuitive problem is indicated by the response of a delayed negative-feedback linear amplifier, as shown (figure not reproduced). Our analogue for this is a positive-feedback amplifier with a pti of the same value, the input passing through an input stage consisting of a pti before the feedback sum junction is reached. This will have the same response and diagram as above; but the problem for the intuition is that one would expect a ‘pulse-to-come’ to create an infinite series of pre-pulses, not an infinite series of post-pulses. The reason is that each of the post-pulses consists of superimposed sine waves of appropriate frequency and phase. Before the stimulus pulse arrives, these sine waves cancel completely, but this cancellation is marred by the arrival of the stimulus pulse, which results in the chain of post-pulses normally seen. From the Fourier transform theorem, linearity, and the two identical sine-wave formulae just shown, we know the outputs will be identical to those illustrated in the spike diagram.

Many people will feel very uneasy about including an impossibility in a circuit diagram, since it is something that cannot possibly be made. But a perfect opamp also cannot be made, yet it is frequently used in circuits. The opamp characteristic is approached closely, but never realized. In fact, we could argue that the utility of any opamp comes from the fact that the opamp cannot be made perfect, since we ensure in practice that external components determine the characteristic of the device. That is, the idealization serves a pedagogic and a practical purpose. Similarly, a negative impedance is a dependent object, only defined where larger positive impedances exist, since otherwise the negative impedance would charge the power supply from no source: another impossibility. However, the understanding of oscillators was greatly facilitated by models using negative impedance.
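To make the sine-wave equivalence concrete, here is a minimal numerical sketch (an addition, not part of the original article; Python with NumPy, and the parameter values are arbitrary choices satisfying $AB < 1$). It sums the delayed-feedback series term by term and compares the result with the closed-form response derived above.

```python
import numpy as np

# Sketch only: partial sum of the delayed negative-feedback series
#   O(t) = sum_r (-1)^r A^(r+1) B^r Ic cos(2*pi*f*(t - r*dt)),
# compared against the closed-form sine response. A, B, f, dt, Ic follow
# the article's notation; the numbers below are arbitrary, with A*B < 1.
A, B = 2.0, 0.3            # forward gain and feedback fraction
f, dt = 50.0, 1e-3         # signal frequency (Hz) and loop delay (s)
Ic, t = 1.0, 0.137         # input amplitude and an arbitrary instant

series = sum((-A * B) ** r * A * Ic * np.cos(2 * np.pi * f * (t - r * dt))
             for r in range(200))          # 200 terms: (A*B)^200 is negligible

num = A * np.cos(2 * np.pi * f * t) + A**2 * B * np.cos(2 * np.pi * f * (t + dt))
den = 1 + 2 * A * B * np.cos(2 * np.pi * f * dt) + (A * B) ** 2
closed = Ic * num / den

print(series, closed)      # the two values agree to floating-point accuracy
```

Summing the reorganized series (terms in $t + (1 + r)\,dt$) instead reproduces the same value where that series converges, i.e. for $AB > 1$; the closed forms coincide in either case, as the argument above requires.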
Similarly, positive time increments can exist only where there are normal time delays; they can be constructed by designing some of the circuit to have less delay than the rest. The theoretical utility remains, however.

**Fixed and passive components**

Let us give passive, recursive circuit definitions: for a perfect diode, for any value of $Z$ (see Fig. 11), and for an inverter, assuming we have a voltage-summing circuit (see Fig. 12). We can use the equivalence in Fig. 13, so that, writing $\text{out} = f(\text{in})$,

$$\text{in} + f(\text{out}) + 2\,\text{out} = 0,$$

that is,

$$\text{in} + f^2(\text{in}) + 2f(\text{in}) = 0.$$

Let $\text{in} = x$ and $f(x) = a_1 x + a_2 x^2 + a_3 x^3 + \ldots$, so that

$$f^2(x) = f(f(x)) = a_1(a_1 x + a_2 x^2 + \ldots) + a_2(a_1 x + a_2 x^2 + \ldots)^2 + \ldots$$

Equating coefficients gives, for the first power of $x$,

$$1 + a_1^2 + 2a_1 = 0,$$

which implies $a_1 = -1$, and the higher-order coefficients give $a_i = 0$ for $i > 1$. Thus $f(x) = -x$, and we have recursively defined an inverter.

Having defined an inverter, we can define a voltage-sum device (where the output $V = V_1 + V_2$ at the inputs; see Fig. 14). That is,

$$f[f(\text{out}, \text{in}), \text{out}] = \text{out} \quad\text{and}\quad f(\text{in}, 0) = -\text{out};$$

thus $y = f[f(y, x), y]$ and $-f(x, 0) = y$. From this, the rule $f(\text{in}, 0) = -\text{in}$ can be deduced (see the note at the end).

We may also define an opamp-like device by a similar stratagem. The matrices

$$M = \begin{bmatrix} 2 & R \\ 1/R & 1 \end{bmatrix}, \quad M^2 = \begin{bmatrix} 5 & 3R \\ 3/R & 2 \end{bmatrix}, \quad M^3 = \begin{bmatrix} 13 & 8R \\ 8/R & 5 \end{bmatrix}$$

represent the situation. Note that the $0^{\text{th}}$ term of the series is

$$M_0 = \begin{bmatrix} 1 & R \\ 1/R & 0 \end{bmatrix},$$

with $M_0^2 = M$, a matrix interesting in any case; it has the equivalent circuit shown. The usual opamp circuit has a gain of $(R_1 + R_2)/R_1$; the circuit shown thus has a gain of 1. Therefore, we may describe the non-inverting mode of an opamp as the inverse of a potential divider (one which takes no current and is fed from a low-impedance source). In a repeated potential divider, let $r = B/(A + B)$; the new voltage ratio will be $Br^2/(B + Ar^2)$. When $A = B$, the sequence

$$\tfrac{1}{2},\ \tfrac{1}{5},\ \tfrac{1}{13},\ \tfrac{1}{34},\ \ldots$$

is obtained for the cascaded potential divider: the reciprocals of every other term of the Fibonacci series.

Here, however, we are concerned mainly with iterative schemes based on the (un-augmented) A-matrix

$$M_0 = \begin{bmatrix} 1 & R \\ 1/R & 0 \end{bmatrix}.$$

Any voltage divider can then be written

$$\begin{bmatrix} 1 & \pm xZ \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & Z \\ 1/Z & 0 \end{bmatrix}.$$

But consider first the iterated voltage divider with equal arms; let SET be the set we require:

$$M_0 \in \text{SET}, \quad\text{and if } M \in \text{SET}, \text{ then } MM_0 \in \text{SET also}.$$

Let $M_1 = M_0^{-1}$. We then have the iterative scheme: (1) $M_0$ is a potential divider of this sort; (2) $M_1$, with $MM_1 = M$. Thus, writing

$$X_x = \begin{bmatrix} 1 & xZ \\ 0 & 1 \end{bmatrix}, \quad x \text{ real},$$

we can characterize potential dividers by the extended scheme: (1) $M_0 \in \text{SET}$; (2) $M \in \text{SET} \leftrightarrow X_x M \in \text{SET}$; (3) $M_1$, with $MM_1 = M$; (4) these are all: the smallest such set defines them. Hence we have recursively defined a set of potential dividers. Some types of active circuit can be defined as the inverse of this class.
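As a quick numerical check of the Fibonacci pattern (again an added sketch, not from the article; it assumes the standard ABCD two-port convention, in which the open-circuit voltage ratio is $1/a_{11}$, and takes $R = 1\,\Omega$ for convenience):

```python
import numpy as np

# Sketch only: cascade equal-arm divider sections by multiplying ABCD
# matrices. M below is one section (series R followed by shunt R); the
# open-circuit voltage ratio of the cascade is 1/a11, and a11 should run
# through alternate Fibonacci numbers: 2, 5, 13, 34, 89, ...
R = 1.0
M = np.array([[2.0, R], [1.0 / R, 1.0]])

stage = np.eye(2)
for n in range(1, 6):
    stage = stage @ M                        # add one more divider section
    print(n, int(round(stage[0, 0])), 1.0 / stage[0, 0])
# voltage ratios: 1/2, 1/5, 1/13, 1/34, 1/89 -- reciprocals of every
# other Fibonacci number, as stated in the text.
```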
Let $X_x M_0 = A$ be a potential divider; then $A^{-1}$ is a non-inverting opamp of a certain type. Thus we can also recursively define opamp circuits. $M_0^{-1}$ is the element whose matrix is

$$M_0^{-1} = \begin{bmatrix} 0 & Z \\ 1/Z & -1 \end{bmatrix},$$

so $A^{-1}$ is $M_0^{-1} X_x^{-1}$, where

$$X_x^{-1} = \begin{bmatrix} 1 & xZ \\ 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & -xZ \\ 0 & 1 \end{bmatrix}.$$

Iterating,

$$M_0^{-2n} = \begin{bmatrix} F_{2n-1} & -F_{2n} Z \\ -F_{2n}/Z & F_{2n+1} \end{bmatrix},$$

where the $F_n$ are the Fibonacci numbers, and

$$M_0^{-2n} X_x^{-1} = \begin{bmatrix} F_{2n-1} & -(x F_{2n-1} + F_{2n}) Z \\ -F_{2n}/Z & F_{2n+1} + x F_{2n} \end{bmatrix}.$$

Let us choose specific values for $x$ and see what types of circuit emerge; the result may be constructive of a new approach. A negative value of $x$ may be chosen to make the $a_{12}$ term zero or the $a_{22}$ term zero, so that we can choose either infinite mutual conductance or infinite current gain. If we choose infinite current gain, we are close to the opamp characteristic, choosing $Z$ negative for this non-inverting case.

The method can also be used for other circuit elements; for example, the ideal full rectifier obeys the rules below (see Fig. 23). Clearly, the circuit is a form of idempotent. The question of whether these relations are definitive must be checked and, in fact, the definition of a single diode can be used to define these by the obvious method.

Here are some more facts about ptis. Let one symbol denote a pti with a positive time increment of $dt$, and another a normal time delay (the symbols of the original figures are not reproduced). Then one combination is straightforward, but a parallel connection is more fraught: one arrangement is unstable, whereas the other is, in some circumstances, not. The simple relation shown can be used to simulate the effect of a positive time increment element in feedback, since

$$A' = g^{-1} \quad\text{and}\quad F' = A^{-1}.$$

Note (on the voltage-sum device): from symmetry it can easily be shown that

$$f(0, 0) = 0 \quad\text{and}\quad f(x, y) = f(y, x).$$

Next consider $f(x, y) = x + y$; this fits the two defining equations, so $f(x, y) = x + y$ is one answer. Now consider $f(x, y) = x + y + d(x, y)$ and let $d(x, y) = a(x + y) + bxy$ approximately, for $x, y$ very small. Then

$$d(x + y + d(x, y),\, y) = -x - y - d(x, y)$$

produces the result that $a = -1$, so that $-(bxy) - y + b(bxy)^2 = -bxy$, which is impossible because $x$ and $y$, though small, can vary independently of $xy$. The only solution is $d(x, y) = 0$; thus $f(x, y) = x + y$.

**Anatomy of a practical pulse**

When it comes to describing a ‘simple’ pulse, or properties thereof, technical literature is sprinkled with vague, misleading and ambiguous terms and definitions. What, for instance, does ‘positive edge’ mean when applied to a negative pulse? Is it the ‘positive-going’ edge, that is, in this case, the last transition, often called the ‘trailing edge’, or is it the first transition, often called the ‘leading edge’? Why this confusion of terms and definitions has arisen is not clear. Both the British Standards Institution and the International Electrotechnical Commission have laid down agreed international terms and definitions, which are incorporated in the adjacent drawing.
Further details may be obtained from British Standard BS 5698:1989 or IEC Standard IEC469-1:1987. This magazine will continue using the standard terms and definitions applying to a pulse, although for a period the colloquial terms will be added in brackets where deemed necessary. Note also that the term ‘duty cycle’ is not used in connection with pulses; the correct term for the ratio of the pulse duration (width) to the pulse repetition period (pulse spacing) of a periodic pulse train is ‘duty factor’.
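A small worked example of the preferred term (an illustrative addition, not part of the standards text):

```python
# Sketch only: 'duty factor' is the ratio of pulse duration (width) to
# pulse repetition period (pulse spacing) of a periodic pulse train.
pulse_duration = 2e-6       # seconds
repetition_period = 10e-6   # seconds
duty_factor = pulse_duration / repetition_period
print(duty_factor)          # 0.2 -- a dimensionless ratio, not a 'duty cycle'
```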
FORM - E Form of supply of information to the applicant [Rule 4 (iv)] No. HC.XXXV-01/2018/202/RTI Dated 1st September, 2018 From: Smt. A. Ajitsaria, Registrar (Judl.) & PIO, Gauhati High Court To: Sri Arun K. Baruah, House No. 2, Bye-Lane No. 4 (South), Lachit Nagar, Guwahati-781007 Subject: Information under RTI Act. Ref: Your RTI Application dated 27/06/2018. Regd. Vide ID No. 150/2018 dated 09.08.2018. Sir, With reference to the above, a photocopy of the information sought by you is enclosed herewith. Yours faithfully, [Signature] 1st September, 2018, Registrar (Judicial) & PIO, Gauhati High Court FORM 'D' Rejection Order [Rule 4 (iii)] No. HC.XXXV-2/2018/203/RTI Dated 3rd September, 2018 From: Smt. A. Ajitsaria, Registrar (Judl.) & PIO, Gauhati High Court To: Syed Rizwan Ahmed Naser, Rajahoweli, Ballgaon, P.O. Korokatall, Dist. Jorhat-15, Assam Ref: Your RTI application dated 20.08.2018, ID No. 169/2018 dtd. 01.09.2018 Sir, With reference to your RTI application received on 01.09.2018 (Regd. ID No. 169/2018), this is to inform you that the requisite fee of Rs. 10/-, paid through IPO No. 43F 101583, ought to have been in favour of the Registrar General, Gauhati High Court, and payable at Guwahati, as per the provisions laid down in Rule 9(iii) of the Gauhati High Court (Right to Information) Rules, 2008. Hence, your application is rejected for non-compliance with Rule 9(iii) of the Gauhati High Court (Right to Information) Rules, 2008. The IPO so forwarded by you is returned herewith. Yours faithfully, Enclo: 1. IPO. Registrar (Judl.) & PIO, Gauhati High Court, Guwahati FORM 'D' Rejection Order [Rule 4 (iii)] No. HC.XXXV-2/2018/ 2 /RTI Dated 04th September, 2018 From: Smt. A. Ajitsaria, Registrar (Judl.) & PIO, Gauhati High Court To: Lovely Chetia, Seuipur, Court Tiniali, Bozaitoli Gaon, P.O. Borguri, Dist. Tinsukia, Assam, Pin-786126 Ref: Your RTI application dated 29.08.2018, ID No. 170/2018 dtd. 04.09.2018 Sir, With reference to your RTI application received on 04.09.2018 (Regd. ID No. 170/2018), this is to inform you that the requisite fee of Rs. 10/-, paid through IPO No. 35F 449795, ought to have been in favour of the Registrar General, Gauhati High Court, and payable at Guwahati, as per the provisions laid down in Rule 9(iii) of the Gauhati High Court (Right to Information) Rules, 2008. Hence, your application is rejected for non-compliance with Rule 9(iii) of the Gauhati High Court (Right to Information) Rules, 2008. The IPO so forwarded by you is returned herewith. Yours faithfully, Enclo: 1. IPO. Registrar (Judl.) & PIO, Gauhati High Court, Guwahati FORM - E Form of supply of information to the applicant [Rule 4 (iv)] No. HC.XXXV-01/2018/12/RTI Dated 4th September, 2018 From: Smt. A. Ajitsaria, Registrar (Judl.) & PIO, Gauhati High Court To: Smt. Malli Devi, C/O Sh. Jitran Singh, Ag. Uchhin, Near PAC Camp, Gorakhpur-273014 Sub: Information under RTI Act. Ref: Your RTI Application dated 09/08/2018, Regd. Vide ID No. 152/2018 dated 09.08.2018. Madam, With reference to the above, the information pertaining to your query is provided herein below: Reply to Query: As communicated to you earlier vide this Registry's Letter No. HC.XXXV-1/2018/107/RTI dated 23.05.2018, a copy of a Judgement/Order passed by the Hon'ble High Court in a case is to be obtained by following the prescribed procedure for obtaining a certified copy, as laid down in the Gauhati High Court Rules, 2015. The relevant rule is available in the public domain at http://ghconline.gov.in (please follow the link http://ghconline.gov.in/Document/Chapter13.pdf).
Yours faithfully, 4.9.18 Registrar (Judicial) & PIO, Gauhati High Court No. HC.XXXV-2/2018/21/RTI Dated 09th September, 2018 From: Smt. A. Ajitsaria, Registrar (Judl.) & PIO, Gauhati High Court To: Prasanta Das, Lawyers' Association Guwahati, Guwahati-1. Sub: RTI Ref: Your RTI application dated 09.08.18 (Regd. ID No. 149/2018 dated 09.08.18) Sir, Reply to query Nos. (i) & (iii): The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 25/- (Rupees Twenty Five) only (i.e. Rs. 5/- per sheet for 5 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Reply to query No. (ii): The mark obtained by Mr. C. Chaturvedy is 38 and the mark obtained by Raushan Lal is 29.33 in the interview held on 09.07.18. Yours faithfully, Registrar (Judl.) & PIO No. HC.XXXV-1/2018/214/RTI Dated 6th September, 2018 From: Sri Dipak Kumar Nath, Dy. Registrar (Judl.-I) & APIO, Gauhati High Court To: Md. Mahimuddin Ahmed, C/O Md. Sahazul Ali, Vill. Bhagara (No. 1 Khanapara), P.O. Ganesh Kuwari, Dist. Darrang, Assam, Pin: 784145 Sub: Your RTI application (Regd. Vide ID No. 168/2018 dtd. 29.08.2018) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 120/- (Rupees One Hundred and Twenty) only (i.e. @ Rs. 5/- per sheet for 24 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Dy. Registrar (Judl.) & APIO, Gauhati High Court, Guwahati [Signature] [Date] No. HC.XXXV-2/2018/215/RTI Dated 6th September, 2018 From: Smt. A. Ajitsaria, Registrar (Judl.) & PIO, Gauhati High Court To: Plabita Boro, Jyoti Path, Dhirenpara, Guwahati-25, Kamrup (M) Sub: RTI Ref: Your RTI application dated 10.08.18 (Regd. ID No. 154/2018 dated 10.08.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 660/- (Rupees Six Hundred and Sixty) only (i.e. @ Rs. 5/- per sheet for 132 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, 6.9.18 Registrar (Judl.) & PIO No. HC.XXXV-2/2018/ 2 /RTI Dated 1st September, 2018 From: Smt. A. Ajitsaria, Registrar (Judl.) & PIO, Gauhati High Court To: Souvik Bhattacharya, C/O Mr. B.P. Bhattacharjee, Advocate, House No. 174, A.K. Azad Road, Kalapahar, Dist. Kamrup (M), Guwahati-781016. Sub: RTI Ref: Your RTI application dated 10.08.18 (Regd. ID No. 153/2018 dated 10.08.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 495/- (Rupees Four Hundred and Ninety Five) only (i.e. @ Rs.
5/- per sheet for 99 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, 6.9.18 Registrar (Judl.) & PIO. Received the documents and paid cost of Rs. 495/-. Souvik Bhattacharya, 6.9.18 No. HC.XXXV-2/2018/219/RTI Dated 7th September, 2018 From: Smt. A. Ajitsaria, Registrar (Judl.) & PIO, Gauhati High Court To: Smti. Dipika Kalita, Vill- Hahara, P.O.- Puthimari, P.S.- Kamalpur, Dist.- Kamrup, Assam, Pin-781380. Sub: RTI Ref: Your RTI application dated 08.08.18 (Regd. ID No. 147/2018 dated 08.08.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 935/- (Rupees Nine Hundred and Thirty Five) only (i.e. @ Rs. 5/- per sheet for 187 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, [Signature] 6/9/18 Registrar (Judl.) & PIO No. HC.XXXV-2/2018/220/RTI Dated 1st September, 2018 From: Smt. A. Ajitsaria, Registrar (Judl.) & PIO, Gauhati High Court To: Md. Sofizur Rahman, Sijubari Dargah Road, P.O. & P.S. Hatigaon, Dist. Kamrup (M), Guwahati-781038. Sub: RTI Ref: Your RTI application dated 08.08.18 (Regd. ID No. 148/2018 dated 08.08.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 410/- (Rupees Four Hundred and Ten) only (i.e. @ Rs. 5/- per sheet for 82 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Registrar (Judl.) & PIO [Signature] [Date] FORM - E Form of supply of information to the applicant [Rule 4 (iv)] No. HC.XXXV-01/2018/29/RTI Dated 7th September, 2018 From: Sri Dipak Kr Nath, Dy. Registrar (Judl.-I) & APIO, Gauhati High Court, Guwahati-781001. To: Mr. I. Vincent Chrishtopher, Door No. 3/7-1/4 (3/60B), Kattu Valvu, Chithanoor, Dhalawai Patty, Salem-636302 Sub: Information under RTI Act. Ref: Your RTI Application dated 03/08/2018. Regd. Vide ID No. 151/2018 dated 09.08.2018. Sir, With reference to the above, the information pertaining to your query is provided herein below: Reply to query 1: A certified copy of a Judgement/Order passed by the Hon'ble Supreme Court of India in a case is to be obtained from the Supreme Court of India, by following the prescribed procedure for obtaining a certified copy, as laid down by the Supreme Court of India. Yours faithfully, Dy. Registrar (Judicial-I) & APIO, Gauhati High Court 07/09/2018 FORM - E Form of supply of information to the applicant [Rule 4 (iv)] No. HC.XXXV-04/2018/222/RTI Dated 7th September, 2018 From: Sri Dipak Kr Nath, Dy. Registrar (Judl.-I) & APIO, Gauhati High Court, Guwahati-781001.
To: Sri Keval Joshi, 1101, Swastik Society, Sector-27, Gandhinagar-382028 Sub: Information under RTI Act. Ref: Your RTI Application dated 18/07/2018, Regd. Vide ID No. 155/2018 dated 10/08/2018, and Letter No. 15011/96/2016-Jus (AU) dated 29/07/2018 of the Section Officer & APIO, Government of India, Ministry of Law & Justice, Department of Justice, New Delhi. Sir, With reference to the above, the information pertaining to your queries is provided herein below: Reply to queries 1 and 2: Separate data on the basis of conviction and/or acquittal relating to criminal cases (in which the crime had a provision for the death penalty) is not maintained, and hence the information could not be provided. Please note that, u/s 7(2) of the RTI Act, 2005, the Public Authority is not required to compile information; only information as it exists is to be provided. Yours faithfully, Dy. Registrar (Judicial-I) & APIO, Gauhati High Court [Signature] [Date] No. HC.XXXV-2/2018/223/RTI Dated 12th September, 2018 From: Sri Dhrupad Kashyap Das, Registrar (Judl.) & PIO, Gauhati High Court To: Sri Anisur Ahmed, Ward No. 4, Gauripur, P.O. Gauripur, P.S. Gauripur, Dist.- Dhubri, Pin-783331, Assam. Sub: RTI Ref: Your RTI application dated 05.09.18 (Regd. ID No. 174/2018 dated 05.09.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 215/- (Rupees Two Hundred and Fifteen) only (i.e. @ Rs. 5/- per sheet for 43 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Registrar (Judl.) & PIO 12.9.18 No. HC.XXXV-2/2018/224/RTI Dated 12th September, 2018 From: Sri Dhrupad Kashyap Das, Registrar (Judl.) & PIO, Gauhati High Court To: Joyeeta Rajkhowa, House No. 23, Ganesh Mandir Path, Pub Sarania, P.O.- Ulubari, Guwahati-781007. Sub: RTI Ref: Your RTI application dated 06.09.18 (Regd. ID No. 175/2018 dated 06.09.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 365/- (Rupees Three Hundred and Sixty Five) only (i.e. @ Rs. 5/- per sheet for 73 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. [Signature] Joyeeta Rajkhowa: Received the documents and paid Rs. 365/-. Yours faithfully, [Signature] Registrar (Judl.) & PIO 12-9-18 No. HC.XXXV-2/2018/225/RTI Dated 17th September, 2018 From: Sri Dhrupad Kashyap Das, Registrar (Judl.) & PIO, Gauhati High Court To: Smt. Benojeer Choudhury, Vill- Batarashi, P.O.- Tillabazar, Dist.- Karimganj, Pin-788709, Assam. Sub: RTI Ref: Your RTI application dated 07.09.18 (Regd. ID No. 177/2018 dated 07.09.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 440/- (Rupees Four Hundred and Forty) only (i.e. @ Rs.
5/- per sheet for 88 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, [Signature] Registrar (Judl.) & PIO 17.9.18 FORM 'E' Form of supply of information to the applicant [Rule 4 (iv)] No. HC.XXXV-2/2018/226/RTI Dated 7th September, 2018 From: Sri Dhrupad Kashyap Das, Registrar (Judl.) & PIO, Gauhati High Court To: Smt. Nabanita Buragohain, House No.-28, Vill. Pillingkata, P.O.- Basistha, Guwahati-29. Sub: RTI Ref: Your RTI application dated 07.09.18 (Regd. ID No. 176/2018 dated 07.09.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 420/- (Rupees Four Hundred and Twenty) only (i.e. @ Rs. 5/- per sheet for 84 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Registrar (Judl.) & PIO 12-9-18 No. HC.XXXV-2/2018/227/RTI Dated 17th September, 2018 From: Sri Dhrupad Kashyap Das, Registrar (Judl.) & PIO, Gauhati High Court To: Smt. Alisha Akhtar, Gandhibasti, Near Girls' High School, Bylane No.-1, House No.-67, C/o Dulu, Guwahati-03. Sub: RTI Ref: Your RTI application dated 14.09.18 (Regd. ID No. 185/2018 dated 14.09.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 825/- (Rupees Eight Hundred and Twenty Five) only (i.e. @ Rs. 5/- per sheet for 165 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Registrar (Judl.) & PIO No. HC.XXXV-7/2018/228 Dated 17th September, 2018. From: Sri Dhrupad Kashyap Das, Registrar (Judl.) & PIO, Gauhati High Court To: Sri Dip Jyoti Bez, House No. 77, Dharma Sharma Path, P.O. Gopinath Nagar, Birubari, Guwahati, Kamrup (M), Assam-781016. Sub: RTI Ref: Your RTI application dated 07.09.18 (Regd. ID No. 180/2018 dated 07.09.18) Sir, The information sought by you is ready to be provided in response to your request. You are to pay cost of Rs. 520/- (Rupees Five Hundred and Twenty) only (i.e. @ Rs. 5/- per sheet for 104 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, [Signature] Registrar (Judl.) & PIO [Date] [Seal] No. HC.XXXV-2/2018/229/RTI Dated 7th September, 2018 From: Sri Dhrupad Kashyap Das, Registrar (Judl.) & PIO, Gauhati High Court To: Smt. Shilpa Mour, Sushil Das Bhawan, Paltan Bazar, G.S. Road, Guwahati-781008.
Sub: RTI Ref: Your RTI application dated 13.09.18 (Regd. ID No. 182/2018 dated 13.09.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 440/- (Rupees Four Hundred and Forty) only (i.e. @ Rs. 5/- per sheet for 88 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Registrar (Judl.) & PIO FORM-D Rejection Order [Rule 4 (iii)] No. HC.XXXV-01/2018/156/RTI Dated 20th September, 2018 From: Sri Dipak Kr Nath, Dy. Registrar (Judl.-I) & APIO, Gauhati High Court, Guwahati-781001. To: Sri Kishalay Sinha, House No. 10, Lane No. 5, Tarun Nagar, Guwahati-781005 Sub: Reply under RTI Act, 2005. Ref: Your RTI application dated 28/07/2018, Registered ID No. 156/2018 dated 10/08/2018 Sir, With reference to the above, this is to inform you that you have not paid the application fee of Rs. 10/- along with your RTI application. Moreover, the information sought for is not held by the PIO, Gauhati High Court. Hence, your application is rejected for not complying with Rule 9 of the Gauhati High Court (Right to Information) Rules, 2008. Yours faithfully, Dy. Registrar (Judicial-I) & PIO, Gauhati High Court, Guwahati. No. HC.XXXV-2/2018/232/RTI Dated 20th September, 2018 From: Sri Dhrupad Kashyap Das, Registrar (Judl.) & PIO, Gauhati High Court To: Sri Samir Biswas, R.G.B.S. Road, Bye Lane 4, H.No. 12, Barsapara, Guwahati-18 Sub: RTI Ref: Your RTI application dated 02.09.18 (Regd. ID No. 178/2018 dated 07.09.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 35/- (Rupees Thirty Five) only (i.e. @ Rs. 5/- per sheet for 7 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Registrar (Judl.) & PIO 20-09-18 FORM 'E' Form of supply of information to the applicant [Rule 4 (iv)] No. HC.XXXV-2/2018/233/RTI Dated 26th September, 2018 From: Sri Dhrupad Kashyap Das, Registrar (Judl.) & PIO, Gauhati High Court To: Sri Samir Biswas, R.G.B.S. Road, Bye Lane 4, H.No. 12, Barsapara, Guwahati-18 Sub: RTI Ref: Your RTI application dated 02.09.18 (Regd. ID No. 179/2018 dated 07.09.18) Sir, The information sought by you is ready to be provided as per your request. You are to pay cost of Rs. 35/- (Rupees Thirty Five) only (i.e. @ Rs. 5/- per sheet for 7 sheets), by Pay Order/DD, in favour of 'Registrar General, Gauhati High Court', payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(i)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Registrar (Judl.) & PIO 20.09.18 FORM 'D' Rejection Order [Rule 4 (iii)] No.
HC.XXXV-1/2018/235/RH Dated 25th September, 2018 From Shri Dhrupad Kashyap Das Registrar (Judl) &PIO Gauhati High Court To Manish Sharma, S/O-Ramakant Sharma, 18/18B, Purani Mandi, Tajganj, Agra-282001. Ref: Your RTI application dated 13.09.2018(Regd.ID No. 197/2018 dated 25.09.2018) Sir, With reference to the above, this is to inform you that the requisite fee of Rs 10/- paid through IPO, ought to have been paid in favour of Registrar General, Gauhati High Court instead of Public Information Officer, O/o the Chief Justice, Gauhati High Court, Guwahati & payable at Guwahati. Hence, your application is rejected for non-compliance of Rule 9(iii) of the Gauhati High Court (Right to Information) Rules, 2008. The IPO so forwarded by you is returned herewith. Yours faithfully, Enclo: 1. IPO in original. Registrar (Judl) & PIO Gauhati High Court, Guwahati From: Sri Dhirupad Kashyap Das Registrar (Judl.) & PIO Gauhati High Court Guwahati-781001 To: Shri Devanshu Khandelwal, 30-b Krishna puri, Mathura, Pin:281001 Sub: Your RTI application dated 30.07.2018 (Regd. Vide ID No. 187/2018 dtd 17.09.2018) Ref: Letter No. 15011/96/2016-Jus(AU) dtd 25.08.18 of the Section Officer & CAPIO, Ministry of Law & Justice (Department of Justice), Government of India. Sir, With reference to the above, the information pertaining to your queries, is provided herein below: Reply to query no. 1: Number of District Court in Assam is 27. Reply to query no. 2: Number of District Courts having air conditioner is 24. Yours faithfully, Registrar (Judl.) & PIO Gauhati High Court FORM - E Form of supply of information to the applicant [Rule 4 (iv)] From: Sri Dhruvpad Kashyap Das Registrar (Judt.) & PIO Gauhati High Court Guwahati-781001 To: Shri Ashok Kumar Upadhyay, R-47, Sector 11, Noida, Uttar Pradesh, Pin-201301 Sub: Your RTI application dated 13.08.2018 (Regd. Vide ID No. 186/2018 dtd 17.09.2018) Ref: Letter No. 15011/96/2016-Lus(AU) dtd 23.08.18 of the Section Officer & CAPIO, Ministry of Law & Justice (Department of Justice), Government of India. Sir, With reference to the above, the information pertaining to your queries, is provided herein below: Reply to query no. 1: At present there is one designated court in the state of Assam, namely, Judge, Designated Court, Assam, Guwahati. Reply to query no. 2: There is one Judge in the Designated Court of Assam. Reply to query no. 3: Cases pending in each of these courts (data as on 31st December of each year) | | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018(31.08.18) | |----------|------|------|------|------|------|------|----------------| | Designated Court, TADA | 57 | 39 | 31 | 37 | 26 | 32 | 24 | Reply to query no. 4: Cases disposed of by these courts during the year | | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018(31.08.18) | |----------|------|------|------|------|------|------|----------------| | Designated Court, TADA | 53 | 18 | 08 | 10 | 22 | 15 | 11 | Reply to query no. 5: 13 cases are pending in the Designated Court of Assam for over 10 years. Separate information regarding brief summary of those cases not being maintained, same could not be provided. Yours faithfully, Registrar (Judt.) & PIO Gauhati High Court 23.09.18 No. HC.XXXV -3/2018/ 239 / RTI Dated 1st September, 2018 From Sri Dhrupad Kashyap Das Registrar (Judl.) & PIO Gauhati High Court To Shri Hitesh Chandra Das, Court Master. Gauhati High Court, Guwahati Sub: RTI Ref: Your RTI application dated 18.09.18 (Regd. ID No. 
188/2018 dated 18.09.18) Sir, With reference to the above, the photocopy of ACR in respect of yourself for the year 2016 and 2017 as sought for, is enclosed herewith. Yours faithfully, Encl : As stated above. (6 sheets) REGISTRAR (JUDL) & P.I.O. Gauhati High Court, Guwahati 28-09-18 No. HC.XXXV -2/2018/ 29th / RTI Dated September, 2018 From Sri Dhrupad Kashyap Das Registrar (Judl.) & PIO Gauhati High Court To Smti. Himadri Hazarika Mission Compound, Dist. Golaghat, Pin 785621, Sub: RTI Ref: Your RTI application dated 02.09.18 (Regd. ID No. 196/2018 dated 24.09.18) Sir, The information sought by you are ready to be provided as per your request. You are to pay cost of Rs. 575/- (Rupees Five Hundred Seventy Five) only (i.e.@ Rs. 5/- per sheet for 115 sheets), by Pay Order/DD, in favour of ‘Registrar General, Gauhati High Court’, payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9((A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, [Signature] Registrar (Judl.) & PIO No. HC.XXXV –2/2018/ 261 / RTI Dated 29th September, 2018 From Sri Dhrupad Kashyap Das Registrar (Judl.) & PIO Gauhati High Court To Smti. Lovely Chetia, Borguri, Seuipur, Bozaltoli Gaon, Borguri, Tinsukia, Pin- 786126 Sub: RTI Ref: Your RTI application dated 18.09.18 (Regd. ID No. 201/2018 dated 27.09.18) Sir, The information sought by you are ready to be provided as per your request. You are to pay cost of Rs. 440/- (Rupees Four Hundred Forty) only (i.e.@ Rs. 5/- per sheet for 88 sheets), by Pay Order/DD, in favour of ‘Registrar General, Gauhati High Court’, payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9((A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Registrar (Judl.) & PIO 29.09.18 No. HC.XXXV –2/2018/ 167 / RTI Dated 24th September, 2018 From Sri Dhrupad Kashyap Das Registrar (Judl.) & PIO Gauhati High Court To Sri Dipankar Barman, Vill- Lachima, P.O. Sarthebari, Dist. Barpeta, Assam, Pin- 781307, Sub: RTI Ref: Your RTI application dated 20.09.18 (Regd. ID No. 189/2018 dated 20.09.18) Sir, The information sought by you are ready to be provided as per your request. You are to pay cost of Rs. 410/- (Rupees Four Hundred Ten) only (i.e., Rs. 5/- per sheet for 82 sheets), by Pay Order/DD, in favour of ‘Registrar General, Gauhati High Court’, payable at Guwahati, as per provision of Section 7(3) of the RTI Act, 2005 and Rule 9(l)(A)(iv) of the Gauhati High Court (RTI) Rules, 2008, so as to enable us to send the said documents. You may also pay the amount by cash and collect the same from the RTI Cell, Gauhati High Court. Yours faithfully, Registrar (Judl.) & PIO
On the Reversible O$_2$ Binding of the Fe–Porphyrin Complex

HIROYUKI NAKASHIMA,$^1$ JUN-YA HASEGAWA,$^1$ HIROSHI NAKATSUJI$^{1,2}$

$^1$Department of Synthetic Chemistry and Biological Chemistry, Graduate School of Engineering, Kyoto University, Katsura, Nishikyo-ku, Kyoto 615-8510, Japan
$^2$Fukui Institute for Fundamental Chemistry, Kyoto University, Takano-Nishihiraki-cho 34-4, Sakyo-ku, Kyoto 606-8103, Japan

Received 27 January 2005; Accepted 31 August 2005
DOI 10.1002/jcc.20339
Published online in Wiley InterScience (www.interscience.wiley.com).

Correspondence to: H. Nakatsuji; e-mail: email@example.com
Contract/grant sponsor: the Grant for Creative Scientific Research from the Ministry of Education, Science, Sports and Culture

Abstract: The electronic mechanism of the reversible O$_2$ binding by heme was studied by using Density Functional Theory calculations. The ground state of oxyheme was calculated to be an open-shell singlet state [Fe(S = 1/2) + O$_2$(S = 1/2)]. The potential energy surface for the singlet state is associative, while that for the triplet state is dissociative. Because the ground state of the O$_2$ + deoxyheme system is triplet in the dissociation limit [Fe(S = 2) + O$_2$(S = 1)], the O$_2$ binding process requires the relativistic spin–orbit interaction to accomplish the intersystem crossing from the triplet to the singlet state. Owing to the singlet–triplet crossing, the activation energies for both O$_2$ binding and dissociation become moderate, and hence the binding is reversible. We also found that the deviation of the Fe atom from the porphyrin plane is an important reaction coordinate for O$_2$ binding: the potential surface is associative/dissociative when the Fe atom locates in-plane/out-of-plane. © 2006 Wiley Periodicals, Inc. J Comput Chem 27: 426–433, 2006

Key words: reversible O$_2$ binding; heme; potential energy surface; intersystem crossing

Introduction

Hemoglobin and myoglobin play indispensable roles in the living body: the transport and storage of dioxygen. These processes have been studied in detail both theoretically and experimentally.$^{1–10}$ Hemoglobin and myoglobin have the same active site, heme (the Fe–porphyrin complex), and the tertiary structure of a subunit of hemoglobin is very similar to that of myoglobin. However, the O$_2$ binding behaviour is quite different between the two molecules. In hemoglobin, the O$_2$ dissociation curve shows the so-called S-form due to the allosteric effect, while in myoglobin the O$_2$ dissociation curve is hyperbolic. For hemoglobin, the present allosteric model proposes that the change of the quaternary structure between the T- and R-forms controls the O$_2$ affinity; the T- and R-forms have low and high oxygen affinity, respectively.$^{11,12}$

The O$_2$ affinity of myoglobin and hemoglobin has been studied experimentally mainly from two perspectives: substitution of amino acid residues$^{3–5}$ and substitution of heme itself by a similar modified heme (Fe–porphycene, Fe–azaporphyrin, etc.).$^{13–17}$ The former studies concern the allosteric mechanism of hemoglobin. Hemoglobin has four subunits connected to each other by salt bridges, hydrogen bonds, and van der Waals interactions. Although there is no firm conclusion on the allosteric effect, it is known that these interactions control the structure of the active site, heme, in hemoglobin.$^{11,12}$ Therefore, it is worth investigating how the structure change affects the O$_2$ binding. In the latter studies, Hayashi et al.
reported that the replacement of heme itself (Fe–porphyrin) by a modified heme (Fe–porphycene) in myoglobin gave an extremely high O$_2$ affinity (more than 1000 times that of the native myoglobin).$^{16,17}$ This result shows that the electronic structure of the active site itself is very important for the O$_2$ affinity. Therefore, quantum mechanical calculations on the active site could draw important conclusions.

The electronic structures of oxyheme and deoxyheme have been theoretically studied at several theoretical levels: MNDO/d,$^{18}$ QM/MM,$^{19–21}$ DFT using LSD schemes,$^{22–24}$ CASSCF,$^{25–27}$ CASPT2,$^{28}$ and SAC/SAC-CI$^{29}$ calculations.$^{30}$ These studies mainly addressed the electronic structures of oxyheme and deoxyheme, but not the change in the electronic structure during the O$_2$ binding process. In this study, we focus on the O$_2$ binding process. The electronic structures of oxyheme and deoxyheme and their stabilities are rather subtle problems because of the existence of many possible spin states and the electron correlations. Therefore, we will discuss these problems, comparing our calculations with several theoretical studies.

There are two important aspects of the dioxygen binding process in the active site of myoglobin and hemoglobin: the change in the spin state and the change in the structure of heme.$^{31}$ Intersystem crossing is necessary in the O$_2$ binding process. The ground states of deoxyheme and the O$_2$ molecule are quintet (S = 2) and triplet (S = 1) states, respectively, and the total system is triplet. In oxyheme, the spin multiplicity becomes a low-spin singlet state (S = 0) after the O$_2$ binding.$^{32,33}$ A large structural change is also seen in the O$_2$ binding process: oxyheme has the Fe atom in the same plane as the porphyrin ring, while there are large deviations from the plane in deoxyheme (myoglobin: 0.3–0.4 Å; hemoglobin: 0.5–0.6 Å).$^{34–38}$

In this study, we investigated these two aspects that could be important in the reversible O$_2$ binding process in myoglobin and hemoglobin. We studied the electronic structures of oxy-/deoxyheme and the potential energy surface for the O$_2$ binding process using Density Functional Theory to understand how these factors control the oxygen affinity.

**Computational Details**

We studied the model systems O$_2$–Fe(II)–Porphin(Por)–Imidazole(Im) for oxyheme and Fe(II)–Por–Im for deoxyheme (Fig. 1). DFT (UB3LYP) calculations were performed with the following basis set and geometries using the Gaussian98 program package.$^{39}$ The basis set used was 6-31G* for the Fe, O, and pyrrole N atoms and 6-31G for the other atoms.$^{40}$ To identify the spin multiplicity of the ground state, we determined the energy-minimum structure of deoxyheme in the singlet, triplet, and quintet states and of oxyheme in the singlet and triplet states. Next, we calculated the potential energy surfaces of the O$_2$ binding process in the singlet and triplet states as functions of two reaction coordinates: $d$ (the deviation of the Fe atom from the porphyrin plane) and the distance $R$ between Fe and O$_2$ (Fig. 1).
We selected 46 points that were placed at intervals of 0.1 Å for coordinate $d$ and at intervals of 0.2 Å (or 0.1 Å near the minimum) for coordinate $R$. In this calculation, the atomic coordinates other than $d$ and $R$ were changed linearly between the optimized geometry for the singlet state of oxyheme (the O$_2$-binding state) and that for the triplet state of oxyheme (the dissociation limit). We first optimized the atomic coordinates for the O$_2$-binding state ($X_{\text{bind}}$) and the dissociation limit ($X_{\text{dis}}$). With a parameter $\lambda$ (0 ≤ $\lambda$ ≤ 1), the atomic coordinates between the two structures were linearly defined as eq. (1). At each point, the Fe–O$_2$ distance, $R$, was changed, keeping all other geometric parameters fixed.

\[ X = \lambda X_{\text{bind}} + (1 - \lambda) X_{\text{dis}} . \] (1)

We later checked the relaxation effects on the potential energy surface and found that the structural relaxation gave only minor changes in the potential surface, as described later.
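For concreteness, the scan described above can be sketched in Python/NumPy. This is a minimal illustration only, not the authors' actual scripts; the endpoint deviations `D_BIND` and `D_DIS`, the grid ranges, and the array shapes are assumptions for illustration.

```python
import numpy as np

# Assumed endpoint values of the Fe out-of-plane deviation d (Angstrom) for
# the optimized singlet (binding) and triplet (dissociation-limit) structures.
D_BIND, D_DIS = 0.0, 0.4

def frame_at(d, x_bind, x_dis):
    """Return the geometry whose coordinates (other than d and R) are linearly
    interpolated between the two optimized endpoints, following eq. (1):
    X = lam * X_bind + (1 - lam) * X_dis, with lam chosen so that the Fe
    out-of-plane deviation equals d."""
    lam = (D_DIS - d) / (D_DIS - D_BIND)
    return lam * x_bind + (1.0 - lam) * x_dis

# Grid of the 2D scan: d in 0.1 A steps, R in 0.2 A steps (finer near the minimum).
d_values = np.arange(0.0, 0.5, 0.1)   # Fe deviation from the porphyrin plane
r_values = np.arange(1.7, 3.3, 0.2)   # Fe-O2 distance
# x_bind and x_dis would be (n_atoms, 3) Cartesian arrays from the two
# optimizations; at each (d, R) point one sets the Fe-O2 distance to R in the
# interpolated frame and runs a single-point UB3LYP energy for each spin state.
```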
**Results and Discussion**

*Ground States of Deoxyheme and Oxyheme*

First, we investigated the geometries and electronic structures of the ground states of deoxyheme and oxyheme. Table 1 shows the optimized geometry and relative energy in each spin multiplicity.

**Table 1.** Optimized Geometries and Total Energies of Deoxyheme and Oxyheme in Several Spin States.

| | Deoxyheme, quintet$^a$ | Deoxyheme, triplet | Deoxyheme, singlet | Oxyheme, triplet | Oxyheme, singlet$^b$ |
|---|---|---|---|---|---|
| Relative energy (kcal/mol) | 0.00 | 0.671 | 6.48 | 8.36 | 0.00 |
| Fe–Im N distance (Å) | 2.13 (2.134) | 2.21 | 1.91 | 2.14 | 2.07 (2.07) |
| Fe–Pyr N distance (Å) | 2.09 (2.075) | 2.01 | 2.00 | 2.09 | 2.01 (1.97–1.99) |
| Fe–O distance (Å) | — | — | — | 2.91 | 1.85 (1.75) |
| O–O distance (Å) | — | — | — | 1.22 | 1.29 (1.15–1.32) |
| Fe deviation from Por plane (Å) | 0.429 (0.34) | 0.190 | 0.201 | 0.394 | 0.0253 (0.03) |
| Pyr N–Fe–Pyr N angle (degree) | 88.8 / 88.7 | 89.1 / 90.4 | 89.6 / 89.8 | 89.0 / 88.9 | 89.6 / 90.7 |
| Pyr N–Fe–Im N angle (degree) | 98.6 | 94.2 | 94.7 | 99.1 | 89.5 |
| Fe–O–O angle (degree) | — | — | — | 119.7 | 118.1 (129–133) |
| Pyr N–Fe–Im N–Im C dihedral (degree) | 0.204 | 44.8 | 44.9 | 2.67 | 44.2 |

$^a$Values in parentheses are the X-ray structural data for the biomimetic deoxymyoglobin model.$^{37}$
$^b$Values in parentheses are the X-ray structural data for the biomimetic oxymyoglobin model.$^{38}$

The ground state of deoxyheme was calculated to be a quintet state; the triplet and singlet states lie 0.67 kcal/mol and 6.48 kcal/mol, respectively, above the quintet state. Although the energy differences among these states are very small, the present conclusion agrees with previous experimental studies: in a heme model, Fe(II)–OEP(OctaEthylPorphyrin)–(2-MeIm),$^{41}$ and in the active sites of the myoglobin and hemoglobin proteins,$^{31–33}$ the ground-state spin multiplicity is quintet. The optimized geometry of the quintet state is quite different from those of the triplet and singlet states. In the quintet state, the Fe atom lies out of the porphyrin plane by $d = 0.429$ Å, which is much larger than in the triplet state (0.190 Å) and the singlet state (0.201 Å). The calculated geometry for the quintet state agrees with X-ray crystallographic data for both myoglobin and biomimetic complexes,$^{34–38}$ in which this deviation of the Fe atom is distributed around 0.3–0.4 Å (0.34 Å for a biomimetic deoxymyoglobin model$^{37}$).

The electronic reason for the position of the Fe atom is the occupation of the $d_{x^2-y^2}$ orbital in the quintet state (the $d_{x^2-y^2}$ orbital is unoccupied in the triplet and singlet states). Because the $d_{x^2-y^2}$ orbital has an antibonding interaction with the lone pairs of the pyrrole N atoms in the porphyrin plane, the out-of-plane position becomes stable. As shown in Table 1, the dihedral angle Pyr N–Fe–Im N–Im C of quintet deoxyheme (0.204°) is different from those of triplet (44.8°) and singlet deoxyheme (44.9°). In the quintet state, the $d_{x^2-y^2}$ orbital interacts with the π orbital of imidazole, and this interaction results in the change of the dihedral angle. The geometrical parameters agree reasonably well with those of a biomimetic deoxymyoglobin model,$^{37}$ as shown in Table 1.

The ground state of oxyheme is the singlet state, and the triplet state lies 8.36 kcal/mol higher than the singlet state. As shown in Table 1, the optimized geometry of the singlet state is in reasonable agreement with the experimental X-ray crystallographic data for both myoglobin and biomimetic complexes.$^{34–38}$ The Fe atom locates inside the porphyrin plane, the distance between Fe and O$_2$ is 1.85 Å, and the O—O bond length is 1.29 Å. In the triplet state, the Fe atom lies out of the porphyrin plane by 0.394 Å; the Fe—O and O—O distances are 2.91 and 1.22 Å, respectively, the latter being very close to that of free O$_2$. The Fe—O distance of the triplet oxyheme is about 1.0 Å larger than that of the singlet oxyheme. The imidazole plane is parallel to the Fe–pyrrole N plane, in contrast to the 45°-rotated structure in the singlet state. These results indicate that the electronic structure of the triplet ground state is described as Fe(S = 2) + O$_2$(S = 1): the electronic structure of the Fe–Por–Im moiety is very close to that of the quintet state of deoxyheme, Fe(S = 2). Therefore, the triplet state of oxyheme does not bind O$_2$ strongly, as we see in the next section. Most theoretical and experimental studies suggested that heme binds O$_2$ in the singlet ground state.$^{18–24,26,27,29}$ We will discuss the electronic structure of the O$_2$ binding state in more detail in the next section.

**Table 2.** O$_2$ Affinity of Heme in the Singlet and Triplet States and for Different Deviations of the Fe Atom.

| Spin multiplicity | Deviation of Fe | Potential curve | Oxygen affinity |
|-------------------|-----------------|-----------------|-----------------|
| Singlet | in plane | Associative | High |
| Singlet | out of plane | Slightly associative | Very low |
| Triplet | in plane | Slightly associative | Very low |
| Triplet | out of plane | Dissociative | None |

*The Potential Energy Surface for the O$_2$ Binding Process*

We investigated the potential energy surface for the O$_2$ binding process in the triplet and singlet states as functions of $d$ and $R$ (see Fig. 1) to understand the mechanism of the O$_2$ binding. As seen in Figure 2a, the potential energy surface of the triplet state is entirely dissociative.
In the dissociation limit, the total electronic structure is Fe(S = 2) + O$_2$(S = 1): the ground states of deoxyheme (quintet) and O$_2$ (triplet). The Fe atom locates at the out-of-plane position in the dissociation limit, as in the ground state of deoxyheme. One exception is the case in which the parameter $d$ (the deviation from the porphyrin plane) is fixed at around zero; the potential curve then becomes slightly associative, although the binding energy is very small. On the other hand, the potential energy surface of the singlet state is entirely associative. In the energy-minimum structure, the Fe atom locates in the porphyrin plane. We also found that the character of the potential curve depends on the parameter $d$: with the Fe atom fixed around the porphyrin plane ($d \approx 0.0$) the potential curve is highly associative, while the curve becomes dissociative when the Fe atom is fixed out of the plane.

As explained earlier, the structural parameters except for $R$ and $d$ were changed linearly between the binding structure and the dissociation limit in calculating the potential energy surfaces. We describe here the effect of the structural relaxation on the potential surfaces. To confirm the results shown in Figure 2, we carried out geometry optimizations with fixed $R$ and $d$ at structures (1) near the O$_2$ binding state (small $R$ and small $d$), (2) near the dissociation limit (large $R$ and large $d$), and (3) intermediate between them (middle $R$ and middle $d$). First, there was no crucial difference between the partially optimized and linearly changed structures in any of the cases (1)–(3). Second, the error in the potential surfaces due to the lack of structural relaxation is expected to be at most 1 kcal/mol. Because we performed the optimization of all structural parameters for both the binding state and the dissociation limit, the linearly changed structures around (1) and (2) should be reliable. For the structure around (3), the energy change due to the relaxation was calculated to be 1.08 kcal/mol in the singlet state, which was the worst case in these examinations.

Thus, two important conclusions are derived: (1) heme binds O$_2$ only in its singlet state, because only the singlet potential surface is entirely associative; (2) the potential curve becomes associative when the Fe atom locates close to the porphyrin plane, while it changes into dissociative when the Fe atom lies out of the plane. The former indicates the importance of the relativistic effect, the spin–orbit interaction, in the O$_2$ binding. The latter indicates that the O$_2$ affinity can be controlled by tuning the geometry parameter $d$, the deviation of the Fe atom from the porphyrin ring. Table 2 summarizes the oxygen affinity in terms of the spin multiplicity and the deviation of the Fe atom.

*The Electronic Structure and the O$_2$ Affinity*

The O$_2$ affinity is mainly controlled by (1) the spin multiplicity of the oxyheme and (2) the deviation of the Fe atom from the porphyrin plane. We analyze these results from the electronic-structure viewpoint.

*Spin State*

As shown in Table 2, oxyheme has high O$_2$ affinity only in the singlet state. In the triplet state of oxyheme, an unpaired electron occupies the Fe($d_{x^2-y^2}$) orbital, while in the singlet state this electron is in the Fe($d_{z^2}$) orbital as a paired electron. This would be one reason for the difference in the O$_2$ affinity between the triplet and singlet states.
Because the Fe($d_{x^2-y^2}$) orbital and the N(lone pair) of pyrrole have an antibonding interaction, the Fe atom prefers to be out of the porphyrin plane. Thus, the electronic structure of the Fe–Por–Im moiety is very similar to that of deoxyheme in the quintet state.

*Deviation of the Fe Atom from the Porphyrin Plane*

Because the Fe($d_{z^2}$) orbital forms a σ-bond with the O$_2$($\pi^*$) orbital, this orbital is likely related to the dependence of the O$_2$ affinity on the position of the Fe atom. When the Fe atom locates in-plane, the Fe($d_{z^2}$) orbital cannot interact with the π orbitals of the porphyrin ring due to symmetry. However, when the Fe atom locates out of plane, the Fe($d_{z^2}$) orbital can interact with the π orbital of the porphyrin ring owing to the broken symmetry. This makes the Fe($d_{z^2}$) orbital stable, because π-electron density of the porphyrin flows into the Fe($d_{z^2}$) orbital. Therefore, the interaction between the Fe($d_{z^2}$) and the O$_2$($\pi^*$) orbitals becomes weaker.

Figure 4. One-dimensional potential energy curves for the O$_2$ binding in the singlet (dotted line) and triplet (solid line) states: (a) approximate energy-minimum potential curve extracted from Figure 3, with the intersystem crossing occurring around $d$ = 0.2–0.4 Å; (b, c) cross-section views of Figure 3 at $d$ = 0.2 (b) and $d$ = 0.4 (c).

**Intersystem Crossing in the O$_2$ Binding Process**

In Figure 3, the singlet and triplet potential surfaces are compared. The ground state of oxyheme is singlet in the binding region, while the triplet state is the ground state in the dissociation limit. In addition, the potential surface of the triplet state is entirely dissociative. Therefore, intersystem crossing is indispensable in the O$_2$ binding process. The interaction that allows the crossing is the spin–orbit interaction; in this sense, the relativistic effect is essentially important for the O$_2$ binding in living bodies.

Next, we analyze the potential energy surfaces by superposing the singlet state on the triplet state, as shown in Figure 3. There is a region where the intersystem crossing occurs. Because the energy levels of the singlet and triplet states become degenerate in this region, the spin conversion is expected to happen easily, even though the spin–orbit interaction is very small. The area of the crossing appears in the range $d$ = 0.2–0.4 Å, and there is no crossing at $d$ = 0.0–0.1 Å. Because the actual O$_2$ binding process occurs approximately along the energy-minimum pathway, the actual intersystem crossing area would be around $d$ = 0.2–0.3 Å and $R$ = 2.2–2.5 Å.

**On the Reversible O$_2$ Binding**

To understand the O$_2$ binding process, we extract the energy-minimum O$_2$ binding pathway from Figure 3. As seen in Figure 4a, starting from the dissociation limit, the system in the triplet state reaches the intersystem crossing point by climbing over an energy barrier of 3.0 kcal/mol. At the crossing point, the triplet state converts into the singlet state due to the spin–orbit interaction. The system then proceeds to the O$_2$ binding state on the singlet potential energy surface. Consequently, the system gains 8.4 kcal/mol of binding energy. In the O$_2$ dissociation, the system in the singlet state needs 11.4 kcal/mol to reach the intersystem crossing region. After the spin state changes into the triplet state, oxyheme releases O$_2$ and reaches the dissociation limit.
If the O$_2$ binding occurred only along the singlet surface, the activation energy would be approximately 20 kcal/mol, which would make the O$_2$ release process very difficult. In this sense, the relativistic effect plays an important role in the reversible O$_2$ binding.

Using the calculated potential surface, we estimated the equilibrium constant for the O$_2$ binding and compared it with that of human myoglobin. Our result shown in Figure 4a might be close to the situation in human myoglobin, because myoglobin does not show the allosteric effect.

\[ K = e^{-\Delta G / RT} \approx e^{-\Delta E / RT} . \] (2)

In eq. (2), we assume that the entropy effects are constant and estimate the equilibrium constant from the binding energy ($\Delta E$) instead of the free energy ($\Delta G$). The theoretically estimated equilibrium constant obtained from eq. (2) was $1.8 \times 10^6$ M$^{-1}$ at 20°C. The experimental value obtained for the human myoglobin protein is $1.1 \times 10^6$ M$^{-1}$ at pH 7.0 and 20°C.$^{42}$ Although we did not consider the effects of the surrounding protein, the estimated equilibrium constant is close to the experimental value. This may indicate that the interaction between heme and O$_2$ dominates the binding process more than that with the surrounding protein residues.
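As a quick numerical check of eq. (2) (a sketch only; it takes the 8.4 kcal/mol binding energy from the pathway of Figure 4a and neglects entropy, as in the text):

```python
import math

R_CAL = 1.987      # gas constant, cal/(mol K)
T = 293.15         # 20 degrees C, in K
DELTA_E = -8.4e3   # binding energy from the minimum-energy pathway, cal/mol

# eq. (2): K = exp(-dE/RT), with entropy effects assumed constant
K = math.exp(-DELTA_E / (R_CAL * T))
print(f"K = {K:.2e} M^-1")   # ~1.8e6, vs. 1.1e6 measured for human myoglobin
```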
Next, we examined a situation where an external confinement restricts the geometry: an external force acts on the imidazole, and the Fe atom moves out of the porphyrin ring. This was mimicked with the fixed parameter $d$. Figures 4b and 4c show cross-section views of Figure 3 at $d$ = 0.2 and 0.4 Å, respectively. In the case of $d$ = 0.2 Å, the valley of the singlet-state potential curve becomes shallow, while there is little change in the triplet-state potential curve. The activation energy for the O$_2$ dissociation decreases significantly, to about 5 kcal/mol; approximately 6 kcal/mol of energy would be needed for pulling the Fe–imidazole moiety outward. In the case of $d$ = 0.4 Å, the potential curve for the singlet state turns dissociative, while the triplet state shows only a minor change; there is almost no energy barrier to dissociating the O$_2$ molecule.

In summary, owing to the relativistic effect, the spin–orbit interaction in this case, heme attains high reversibility in the O$_2$ binding. When heme is free from structural confinement by the protein environment, it is natural for the system to go along the energy-minimum pathway and to bind O$_2$ with an activation barrier of only 3.0 kcal/mol, as shown in Figure 4a. Changing the structural parameter $d$ from in-plane to out-of-plane significantly switches the singlet-state potential curve from associative to dissociative. When one assumes that heme is under an external confinement forcing the Fe–imidazole unit out of the porphyrin ring, oxyheme easily releases the O$_2$ molecule without a large activation energy.

**The Electronic Structure of Oxyheme and Its Changes during the O$_2$ Binding**

In this section, we describe the electronic structure of oxyheme in the O$_2$ binding process. Figure 5 illustrates the changes of the electronic structure, and Table 3 shows the spin population on each orbital and on each atom.

**Table 3.** Spin Populations of Oxyheme in the O$_2$ Binding State, the Crossing Region, and the O$_2$ Dissociation Limit.

| | O$_2$ binding state (singlet) | Crossing region ($d$ = 0.2 Å, $R$ = 2.25 Å) (singlet) | O$_2$ dissociation limit (triplet) |
|------------------------|-------------------------------|--------------------------------------------------|-----------------------------------|
| **Gross orbital spin population:** | | | |
| $d_{x^2-y^2}$ | 0.0733 | 0.1792 | 0.7992 |
| $d_{z^2}$ | 0.1306 | 0.4958 | 0.8025 |
| $d_{xz}$ | 0.4386 | 0.3470 | 0.9454 |
| $d_{yz}$ | 0.4368 | 0.6234 | 0.9281 |
| $d_{xy}$ | 0.0288 | 0.0519 | 0.1123 |
| **Atomic spin population:** | | | |
| Fe | 1.1520 | 1.7703 | 3.8825 |
| O$_2$ | −1.0864 | −1.6933 | −1.9944 |

In the O$_2$ dissociation limit, the spin multiplicity is triplet: heme and O$_2$ are in the quintet (S = 2) and triplet (S = 1) states, respectively (Mulliken spin populations: Fe, 3.8825; O$_2$, −1.9944). As the O$_2$ molecule approaches the intersystem crossing point, the spin multiplicity converts into the singlet state. In this transition, an electron in the $d_{x^2-y^2}$ orbital flips its spin and moves to the $d_{xz}$ orbital. This is seen in Table 3: the spin population of the $d_{x^2-y^2}$ orbital decreases from 0.80 to 0.18, and that of the $d_{xz}$ orbital decreases from 0.95 to 0.35. The O$_2$ molecule still has two unpaired electrons in this structure (the spin populations on Fe and O$_2$ are 1.77 and −1.69, respectively). Finally, the O$_2$ molecule reaches the binding state. Heme forms a σ-bond between the Fe($d_{z^2}$) orbital and the O$_2$($\pi^*_\parallel$) orbital, where $\pi^*_\parallel$ denotes the π* orbital parallel to the mirror plane (the yz plane) of the molecule. In the binding state, there is no apparent π-bond (π-back donation) between the Fe atom and the O$_2$ molecule.

As shown in Figure 5, the ground state of oxyheme is an open-shell singlet state: a biradical state having unpaired electrons in the Fe($d_{xz}$) and O$_2$($\pi^*_\perp$) orbitals (Mulliken spin populations: Fe, 1.15; O$_2$, −1.09), where $\pi^*_\perp$ denotes the π* orbital perpendicular to the mirror plane (the yz plane) of the molecule. These two orbitals show little interaction with each other, unlike in the ground state of the O$_3$ molecule (a biradical electronic structure with singlet coupling). Therefore, the electronic structure of the ground state of oxyheme is different from the Goddard model and is characterized as σ-bonding between the Fe($d_{z^2}$) and O$_2$($\pi^*_\parallel$) orbitals with noninteracting unpaired electrons in the Fe($d_{xz}$) and O$_2$($\pi^*_\perp$) orbitals.

The present result may be compared with previous studies. The DFT studies using LSD schemes also suggested an open-shell singlet ground state, which is the same as our result. In contrast, the CASSCF study and the SAC/SAC-CI study suggested that the Hartree–Fock configuration is the main configuration in the ground state, although its weight was rather small; this indicates that strong configuration interaction describes the biradical electronic structure. The Fe($d_{xz}$) and Fe($d_{yz}$) orbitals are almost equivalent for symmetry reasons. However, the Fe($d_{yz}$) orbital is slightly lower than the Fe($d_{xz}$) orbital owing to the effects of the imidazole and the Fe–O$_2$ plane. Therefore, the state in which the Fe($d_{yz}$) orbital is occupied by two electrons is more stable than that in which the Fe($d_{xz}$) orbital is occupied by two electrons.

We examined the S$^2$ values of the calculated wave functions. In deoxyheme, the S$^2$ values of the quintet, triplet, and singlet states were 6.0, 2.1, and 0.0, respectively; these are essentially the pure spin values for each spin state.
In oxyheme, the values for the O$_2$ dissociation limit [triplet: Fe(S = 2) + O$_2$(S = 1)] and the O$_2$ binding state [singlet: Fe(S = 1/2) + O$_2$(S = 1/2)] were 4.0 and 0.9, respectively. In the O$_2$ dissociation limit (triplet), the triplet and the higher spin state (septet) are degenerate; the S$^2$ value of 4.0 is just the average of the values for these two states (triplet: 2.0; septet: 6.0). In the O$_2$ binding state, as mentioned above, noninteracting unpaired electrons are left in the Fe($d_{xz}$) and O$_2$($\pi^*_\perp$) orbitals. Therefore, the singlet and the higher spin state (triplet) are almost degenerate, and the S$^2$ value of 0.9 is likewise close to the average of these two states (singlet: 0.0; triplet: 2.0), as in the O$_2$ dissociation limit. This is a drawback of the single-determinant description of biradical states. Even though the optimized structures agree well with the X-ray ones, a more advanced method would be necessary to confirm the potential surfaces.
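The "midpoint" reasoning above is simple arithmetic on the pure-spin $S^2$ values $S(S+1)$ of the two nearly degenerate states; written out explicitly (a clarifying sketch using notation not in the original):

\[
\langle S^2 \rangle_{\text{dis}} \approx \tfrac{1}{2}\big(2.0 + 6.0\big) = 4.0 \quad \text{(triplet + septet)},
\]
\[
\langle S^2 \rangle_{\text{bind}} \approx \tfrac{1}{2}\big(0.0 + 2.0\big) = 1.0 \approx 0.9 \quad \text{(singlet + triplet)}.
\]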
**Conclusion**

We investigated the mechanism of the reversible O$_2$ binding in heme by using Density Functional Theory calculations. First, we optimized the geometries of deoxyheme and oxyheme in several spin multiplicities to determine the ground state. In deoxyheme, the ground state is the quintet state, in which the Fe atom deviates greatly from the porphyrin plane; in oxyheme, the ground state is the singlet state, in which the Fe atom locates in the porphyrin plane. These results are in good agreement with experimental findings and indicate that the electronic structure of the active site (heme) controls the geometry (planarity), rather than the surrounding protein effects.

Next, we studied the potential energy surfaces as functions of the deviation of the Fe atom from the porphyrin ring and the Fe–O$_2$ distance. The results indicate that the potential energy surface is entirely associative in the singlet state, while it is dissociative in the triplet state. The potential curve becomes associative when the Fe atom locates close to the porphyrin plane, while it changes into dissociative when the Fe atom lies out of the plane; the large deviation of the Fe atom prevents σ-bond formation between the Fe atom and the O$_2$ molecule. Comparing the potential energy surfaces of the singlet and triplet states, we found the intersystem crossing area ($d$: 0.2–0.3 Å; $R$: 2.2–2.5 Å), where the singlet and triplet states are accidentally degenerate. Thus, the O$_2$ binding process proceeds from the triplet to the singlet state through the spin–orbit interaction.

We applied the present potential surface to estimate the equilibrium constant. The calculated value of $1.8 \times 10^6$ M$^{-1}$ is close to the experimental value of $1.1 \times 10^6$ M$^{-1}$, indicating that the O$_2$ affinity is controlled by the electronic structure of oxyheme rather than by the surrounding protein effects. The transition probability due to the spin–orbit interaction is generally expected to be small. However, for living bodies to survive, the intersystem crossing should be easily accomplished; therefore, the O$_2$ binding reaction pathway should be firm and stable. It is interesting to note that the relativistic effect works every time we breathe.

We also studied the potential curve of the O$_2$ binding with the parameter $d$ fixed at 0.2 and 0.4 Å. Although the triplet state was insensitive to the parameter $d$, the singlet state showed significant changes: with larger $d$ the potential curve becomes shallower, and at $d$ = 0.4 Å it becomes dissociative. These results indicate that $d$, the deviation of the Fe atom from the porphyrin ring, is an important reaction coordinate that controls the O$_2$ affinity.

The change of the electronic structure during the binding process was also studied. In the O$_2$ dissociation limit, the whole system in the triplet state comprises heme in the quintet state and dioxygen in the triplet state. When O$_2$ approaches heme and arrives at the intersystem crossing point, the spin state of the system changes from triplet to singlet by the spin–orbit coupling, so the spin state of the heme moiety becomes the triplet state, with the Fe($d_{xz}$) and Fe($d_{yz}$) orbitals as the SOMOs. When O$_2$ approaches heme further and arrives at the O$_2$ binding state, a σ bond is formed between the Fe($d_{z^2}$) orbital and an O$_2$($\pi^*$) orbital, while there is no strong π bond. The electronic structure of the O$_2$ binding state is an open-shell singlet, namely a biradical state with singlet coupling, in which both the $d_{xz}$ orbital of Fe and one π* orbital of dioxygen have nonzero spin density. There is thus a strong σ bond, but no π bond, between the Fe atom and dioxygen, and the electronic structure of the O$_2$ binding state is a biradical state with noninteracting singlet coupling, which is different from that of ozone.

**Acknowledgment**

A part of the computations was performed in the Research Center for Computational Science, Okazaki, Japan.

**References**

1. Bernal, J. D.; Fankuchen, I.; Perutz, M. F. *Nature* 1938, 141, 523.
2. Monod, J.; Wyman, J.; Changeux, J. P. *J Mol Biol* 1965, 12, 88.
3. Englander, J. J.; Rumbley, J. N.; Englander, S. W. *J Mol Biol* 1998, 284, 1707.
4. Bettati, S.; Mozzarelli, A.; Perutz, M. F. *J Mol Biol* 1998, 281, 581.
5. Kim, H. W.; Shen, T. J.; Ho, N. T.; Zou, M.; Tam, M. F.; Ho, C. *Biochemistry* 1996, 35, 6620.
6. Tokita, Y.; Nakatsuji, H. *J Phys Chem B* 1997, 101, 3281.
7. Jewsbury, P.; Yamamoto, S.; Minato, T.; Saito, M.; Kitagawa, T. *J Phys Chem* 1995, 99, 12677.
8. Obara, S.; Kashiwagi, H. *J Chem Phys* 1982, 77, 3155.
9. Ghosh, A.; Bocian, D. F. *J Phys Chem* 1996, 100, 6363.
10. Jewsbury, P.; Yamamoto, S.; Minato, T.; Saito, M.; Kitagawa, T. *J Am Chem Soc* 1994, 116, 11586.
11. Liddington, R.; Derewenda, Z.; Dodson, G.; Harris, D. *Nature* 1988, 331, 725.
12. Eaton, W. A.; Henry, E. R.; Hofrichter, J.; Mozzarelli, A. *Nat Struct Biol* 1999, 6, 351.
13. Neya, S.; Kaku, T.; Funasaki, N.; Shiro, Y.; Iizuka, T.; Imai, K.; Hori, H. *J Biol Chem* 1995, 270, 13118.
14. Neya, S.; Hori, H.; Imai, K.; Kawamura-Konishi, Y.; Suzuki, H.; Shiro, Y.; Iizuka, T.; Funasaki, N. *J Biochem* 1997, 121, 654.
15. Neya, S.; Tsukubai, M.; Hori, H.; Yonetani, T.; Funasaki, N. *Inorg Chem* 2001, 40, 1220.
16. Hayashi, T.; Dejima, H.; Matsuo, T.; Sato, H.; Murata, D.; Hisaeda, Y. *J Am Chem Soc* 2002, 124, 11226.
17. Matsuo, T.; Dejima, H.; Hirota, S.; Murata, D.; Sato, H.; Ikegami, T.; Hori, H.; Hisaeda, Y.; Hayashi, T. *J Am Chem Soc* 2004, 126, 16007.
18. Taranto, A. G.; Carneiro, J. W. M.; Oliveira, F. G. *J Mol Struct (Theochem)* 2001, 539, 267.
19. Marechal, J.; Barea, G.; Maseras, F.; Lledos, A.; Mouawad, L.; Peraquia, D. *J Comput Chem* 2000, 21, 282.
20. Maseras, F. *New J Chem* 1998, 327.
21. Barea, G.; Maseras, F.; Lledos, A. *Int J Quantum Chem* 2001, 85, 100.
22. Rovira, C.; Kunc, K.; Hutter, J.; Ballone, P.; Parrinello, M. *Int J Quantum Chem* 1998, 69, 31.
23. Rovira, C.; Parrinello, M. *Int J Quantum Chem* 1998, 70, 387.
24. Rovira, C.; Parrinello, M. *Int J Quantum Chem* 2000, 80, 1172.
25. Choe, Y.; Hashimoto, T.; Nakano, H.; Hirao, K. *Chem Phys Lett* 1998, 295, 380.
26. Yamamoto, S.; Kashiwagi, H. *Chem Phys Lett* 1989, 161, 85.
27. Yamamoto, S.; Kashiwagi, H. *Chem Phys Lett* 1993, 205, 306.
28. Choe, Y.; Nakajima, T.; Hirao, K.; Lindh, R. *J Chem Phys* 1999, 111, 3837.
29. Nakatsuji, H.; Hasegawa, J.; Ueda, H.; Hada, M. *Chem Phys Lett* 1996, 250, 379.
30. Bytheway, I.; Hall, M. B. *Chem Rev* 1994, 94, 639.
31. Scheidt, W. R.; Reed, C. A. *Chem Rev* 1981, 81, 543.
32. Pauling, L.; Coryell, C. D. *Proc Natl Acad Sci USA* 1936, 22, 210.
33. Momenteau, M.; Reed, C. A. *Chem Rev* 1994, 94, 659.
34. Phillips, S. E. V. *Nature* 1978, 273, 247.
35. Phillips, S. E. V. *J Mol Biol* 1980, 142, 531.
36. Fermi, G. *J Mol Biol* 1975, 97, 237.
37. Momenteau, M.; Scheidt, W. R.; Eigenbrot, C. W.; Reed, C. A. *J Am Chem Soc* 1988, 110, 1207.
38. Jameson, G. M.; Rodley, G. A.; Robinson, W. T.; Gagne, R. R.; Reed, C. A.; Collman, J. P. *Inorg Chem* 1978, 17, 850.
39. Frisch, M. J.; Trucks, G. W.; Schlegel, H. B.; Scuseria, G. E.; Robb, M. A.; Cheeseman, J. R.; Scalmani, G. C.; Burant, J. C.; Mennucci, B. E.; Dapprich, S.; Kudin, K. N.; Millam, J. M.; Daniels, A. D.; Petersson, G. A.; Montgomery, J. A.; Zakrzewski, V. G.; Raghavachari, K.; Ayala, P. Y.; Cui, Q.; Morokuma, K.; Foresman, J. B.; Cioslowski, J.; Ortiz, J. V.; Barone, V.; Stefanov, B. B.; Liu, G.; Liashenko, A.; Piskorz, P.; Chen, W.; Wong, M. W.; Andres, J. L.; Replogle, E. S.; Gomperts, R.; Martin, R. L.; Fox, D. J.; Keith, T.; Al-Laham, M. A.; Nanayakkara, A.; Challacombe, M.; Peng, C. Y.; Stewart, J. P.; Gonzalez, C.; Head-Gordon, M.; Gill, P. M. W.; Johnson, B. G.; Pople, J. A. Gaussian98; Gaussian Inc.: Pittsburgh, PA, 1998.
40. Hariharan, P. C.; Pople, J. A. *Theoret Chim Acta* 1973, 28, 213.
41. Kitagawa, T.; Teraoka, J. *Chem Phys Lett* 1979, 63, 443.
42. Springer, B. A.; Sligar, S. G.; Olson, J. S.; Phillips, G. N., Jr. *Chem Rev* 1994, 94, 699.
43. Goddard, W. A., III; Olafson, B. D. *Proc Natl Acad Sci USA* 1975, 72, 2335.
This is the Accepted Manuscript version of an article accepted for publication in *Management in Education* following peer review. The version of record, Philip Woods, ‘Authority, power and distributed leadership’, *Management in Education*, Vol 30(4): 155-160, first published online 28 September 2016, is available online via doi: 10.1177/0892020616665779 © 2016 British Educational Leadership, Management & Administration Society (BELMAS). Published by SAGE.

Abstract: A much greater understanding is needed of power in the practice of distributed leadership. This article explores how the concept of social authority might be helpful in doing this. It suggests that the practice of distributed leadership is characterised by multiple authorities which are constructed in the interactions between people. Rather than there being a uniform hierarchy (relatively flat or otherwise) of formal authority, organisational members may be ‘high’ in some authorities, ‘low’ in others, and people’s positioning in relation to these authorities is dynamic and changeable. The article maps different forms of authorities, provides illustrations from educational institutions, and concludes with implications for educational leadership. A key conclusion is that everyone is involved in the ongoing production of authorities by contributing to who is accepted as or excluded from exercising authority and leadership.

One of the critiques of distributed leadership (DL) is that, although it sounds as if it may be more fair, even democratic, in practice this is not necessarily the case. Many accounts and investigations of DL lack a critical, questioning approach to power. Lumby (2013: 583) concludes that the ‘central issue of power surfaces only superficially, if at all, in much of the literature’, yet we should not underestimate the power of DL to ‘enact inequality’ through the unthinking acceptance - as leadership is distributed - of prevailing assumptions, established power differences and the ‘banal’ everyday marginalisation of certain voices (p592). Research that I have been involved in suggests that the unequal distribution of ‘capitals’ - such as social and professional capitals - is one important way of understanding how some are positioned less well to participate and exercise influence in organisations where efforts are made to distribute leadership (Woods and Roberts 2016: 152).

A much greater understanding of power and the practice of DL is needed. In this article I explore how the concept of social authority might be helpful in doing this, drawing on some of the ideas generated by the Authority Research Network\(^1\) and a critical understanding of Weber’s typology of authority. The article suggests that the greater the extent of leadership distribution, the more it makes sense to view the organisation as being characterised by a social authority in perpetual construction. Such a social authority is formed by the interplay of multiple negotiated and contending ‘tributary’ authorities arising from the interactions of groups and individuals. The article begins with a brief reflection on authority and types of power. It then explains the idea of social authority, maps different forms of tributary authorities, provides illustrations from educational institutions and concludes with implications for educational leadership.

**Authority and power**

One view of authority is to see it as a legitimation of top-down control. This is predicated on a certain view of power.
Weber’s (1978: 213) emphasis on understanding the different ways of legitimising ‘subjection to authority’ is framed within a prime concern to understand domination and its reliance on ‘voluntary compliance’ and ‘an interest (based on ulterior motives or genuine acceptance) in obedience’ (p212). Weber’s approach to legitimacy and authority, however, is too narrowly focused on authority as a dominating relationship: domination is only one way of achieving co-ordination amongst people (Woods 2003). I would argue that beginning with the question of co-ordination leads to a more comprehensive understanding of power and the sources of authority. It allows for power to be understood in different ways - as top-down, power over others; as an emergent property, produced through social interactions, struggles and the effects of elements that feature in everyday activities such as texts, talk and the environments we inhabit, highlighted in work such as Ball (2006a: 47); and as power-with, that is shared and co-operative power ‘through and with others’ (Blackmore 1999: 161). From this viewpoint, authority is not just the legitimation of top-down control, but is capable of emerging in diverse ways from different organisational perspectives and positions. Its meanings may be interpreted, contested and reframed. **Social authority** Social authority is the *production* of authorities that occurs in modern times where there is no transcendent source of authority and stable meanings, such as a divine being, sacred text or a single powerful political figure. In the field of authority research, Kirwan (2013) argues that authority as a production (i.e. social authority) is a response to the fragmentation of community. The context is the loss of overriding authority, as highlighted in the writings of Arendt and others (Blencowe 2013, Kirwan 2013). The idea of social authority emphasises the continual creation of legitimised power through practice and social interactions. For the purpose of this article, I view social authority as the constellation of multiple, tributary authorities that emerges from the interplay of complementary and contested legitimations of power within an organisation. Social authority emerges from and shapes the kind and degree of co-ordination actors within the organisation achieve in their practice and decision-making. As the negotiated product of ongoing interactions and the interplay of multiple authorities (forms of legitimised power), it is continually made and re-made over time. I suggest that this idea of ongoing production of authority is especially relevant to organisations that are characterised by more blurred and open forms of leadership, where authority to take and lead initiatives is dispersed. The essential features of social authority - reconfiguring authority ‘as the contingent production of contestations and negotiations’ and the need in times of uncertainty to seek ‘moments of authority’ (p83-84) - seem to resonate with the challenges of leadership that moves away from the traditional certainties of top-down principles of command and control. The nature of the produced social authority will tend to differ between organisational settings, with different degrees of influence being accorded, for example, to overt power-over or power-with. Although there may be an absence of a simple, transcendent authority that overarches all others, this does not mean that there are not dominating authorities or attempts to dominate. 
Especially significant in contemporary times are technical-rational authority and the rise of performative forms of governance (Murray 2012, Woods 2010). It will help to put these in context by considering the different kinds of authority that can be at play in an organisation, an issue to which I turn next.

**Forms of authority**

This section draws on earlier work on legitimacies of co-ordination (Woods 2003) and analyses of social authority (Blencowe et al 2013). The aim is to discuss and set out the variety of forms that tributary authorities in an organisation may take. Building on Weber’s classic three types of authority, and opening the typology beyond the focus on domination, I have argued that there are five types (which I termed legitimacies of co-ordination) (Woods 2003). These are discussed in turn. Each of them has its sub-forms, examples of which are given.

*Rational authority.* The most familiar form of rational authority is the reliance on a hierarchical order of rules and rights to direct and oversee organisational activities and purposes (legal-rational authority). Rational authority can also be based in expertise that embodies the rational principles of science and technology (technical-rational authority). The latter has a logic that uses systematic and instrumental, means-end approaches that are able to command respect through a claimed power to understand the world in ways superior to other, traditional ways. Professional expertise benefits from claims to knowledge through rational principles, though other forms of professional authority are possible alongside this, such as communal authority (discussed below) earned through deep and caring relationships (Duncan-Andrade 2009: 10).

*Communal authority.* The emphasis here is on the powerful effect of close ties embedded in social relationships. It includes tradition and charisma (two of Weber’s types of authority). The bonds of belonging and respect create a particular impetus to accept certain requirements or sources of advice and direction as legitimate. Examples of such sources are a community’s norms and values, its traditions, fellow members of the group (such as a profession which has a bond of shared identity), or the ‘great leader’. According to the nature of these sources, power-over or power-with may be predominant.

*Exchange.* This refers to governance through associative relationships and rational agreement where the authority arises from the acceptance of the rules and norms of exchange. Exchange has, according to Weber (1978 [1956]: 41), its theoretically most straightforward expression in the form of economic markets. Exchange may also be seen as a feature of networks which constitute a form of governance. Blencowe (2013: 21) observes that markets ‘bear immense authority’ because they are seen in the liberal political economy as a way of testing businesses and policies against what are perceived as the objective forces of life. Weber (1978 [1956]: 213-214) takes pains to distinguish between economic forces that are accorded validity (and hence authority) and economic power which is the exercise of brute force (e.g. where a monopoly can dictate the terms of contracts). Exchange, like other forms of authority and legitimation, may be characterised by different forms of power. For example, markets may act in ways that result in economic forces being experienced as a power-over the self and others.
The spread of market principles into everyday working relationships, as in many public services - through policy discourses and the building of competitive incentives into their structures - exemplifies power as an emergent property affecting people’s identities (Ball 2006b). But exchange may also take alternative forms - such as co-operative organisations and networks (Woodin 2015) - that put power-with to the fore.

*Democratic legitimation.* This is where decisions and actions gain their legitimacy through some kind of participation, dialogue, consent and agreed rights to freedom. Democratic authority may take numerous forms. It may be minimalist, enabling involvement in narrow terms, such as occasional voting for representatives or intermittent processes of consultation. The liberal minimalist model of democracy confines people to choosing amongst competing elites for who best represents their material interests (Dryzek 2004: 148-150). Other models aspire to more elevating ideas of human potential. One of the most influential developments has been the notion of deliberative democracy. In this, the prime purpose of the democratic process is to enable dialogue and interaction to take place in good faith between people, develop greater mutual understanding and overcome the entrenched ideas and interests that hinder ways of making decisions and acting in co-operation with each other (Kahane et al 2010). I have framed my work on leadership and democracy through a notion of holistic democracy. In this model, central to democratic practice is the opportunity for people to grow as whole people and to participate in ways that are based on principles of mutual respect, critical dialogue and independent thinking and a sense of belonging in their community or organisation (Woods 2005, 2011; Woods and Woods 2013). It incorporates the ideas of deliberative and developmental democracy, reflecting people’s capacity to nurture their ‘innate potential excellence’ (Norton, 1996: 62) through self-development, collaborative learning and holistic growth.

*Interior authority.* The idea of interior authority directs attention to the various possibilities for authority to be grounded in the person. It is a particularly important dimension of authority in contemporary times because of the greater individualism that has come to characterise governance, and so more space is devoted to it here. I suggested it as a type of legitimacy (Woods 2003) in part because of the concern with how the self in organisations was being shaped by changing forms of governance. Critical studies show how governance reforms can act to re-form people’s identities - as competitive, enterprising agents, for example - so that they exercise control over their selves and others (Rose 1999). It is a form of emergent power which has the effect of implanting authority in the person, but within a re-socialised identity. It is a moot question - though by no means a new one - whether the self is entirely shaped by external conditions or exercises some degree of autonomy. This was another reason to suggest interior authority, so as to ask within the typology of legitimacy: To what extent is there *authoring* by individuals, as distinct from their internalising, transmitting and enacting given social norms, ideas and modes of behaviour (Woods 2003)? Blencowe (2013: 13) refers to authority as a way of deferring responsibility to something else, rather than exercising will or reason.
My view, however, is that the capability to examine legitimacy claims critically and decide which should command respect is a form of authority. Interior authority, then, is not necessarily a determined outcome of external forces. I think it is helpful, for example, to distinguish between performative autonomy (heavily shaped by market-based and performative philosophies of governance) and democratic autonomy (critical, independent thinking as a rounded person)\(^2\). I would emphasise two points on interior authority since putting it forward over a decade ago. Firstly, the interior experience is not only inward-focused. The individual is not a separate entity from social relationships, but is interconnected in a dialectical relationship with social structures and relationships (Archer 2003); not an ‘isolated knot’ in the web of relationships, but a person who constitutes the fabric woven around the knot which ‘calls forth’ family, friends, ancestors, successors (Dallmayer 2016: 103-4, drawing on the ideas of Panikkar). To refer to areas internal and external to the person is to use convenient markers rather than binary concepts. Interior authority is a personal engagement with the inward and the outward, and the development of interior authority is a social process. Secondly, following from the first point, the interior experience is inherently connected to the person’s lived experience of the outward. It is not just about internal reasonings, feelings and so on, but is impacted by practical interactions with others, with the physical environment, and so on. Blencowe’s (2013: 13) highlighting of a kind of experiential authority is therefore relevant to interior authority. She highlights, amongst other forms of authority, authoritative understandings by those ‘who have encountered the edges of life - moved close to death, created new lives’ (p21). Dawney (2013: 30) examines a form of experiential knowledge that ‘can and does claim legitimacy in the public sphere, and … has led to the emergence of figures of “experiential authority”: figures who have undergone particular life-changing experiences and are positioned as experts through these experiences'. I see experiential authority as existing too in more ordinary, everyday forms: for example, where experience of dealing with educational problems in a particular community lends to the person an acknowledged authoritative character about that community. This is what I term lived-experience authority, which becomes part of the person's interior authority - the sense of who they are and who they are seen as being. This does not necessarily mean that the lived experience is valid or better than other forms of authority. It may be; it is also true that those with long practical experience may give credence to practices that are honoured more by their durability than their value or effectiveness. **Illustrations of authority and distributed leadership in educational organisations** Table 1 shows the tributary authorities and the examples of sub-forms discussed above. This gives an idea of the breadth and range of forms of authority that are potentially part of the forging of social authority. 
**Table 1.** Tributary authorities and examples of sub-forms

| Tributary authorities | Examples of sub-forms |
|-----------------------|-----------------------|
| rational authority | bureaucratic (legal-rational) |
| | scientific/technical (technical-rational) |
| | professional expertise |
| communal authority | traditional |
| | charismatic |
| | professional identity |
| exchange | market |
| | network |
| | co-operative |
| democratic legitimation | minimalist |
| | deliberative |
| | holistic |
| interior authority | performative autonomy |
| | democratic autonomy |
| | lived-experience |

Organisational members may project and be accorded various *individual configurations of authority* which are shaped by personal factors and the person’s interaction and relationships. The most recognisable or readily acknowledged authority in most organisations is the formal hierarchical authority vested in the head of the organisation and other senior leaders, a form of rational authority. Legal-rational authority provides a formal legitimacy for the hierarchy of institutional roles and associated rights and powers within organisations. A senior leader may also benefit from other kinds of authority, such as charismatic presence, a recognised professional expertise, or the legitimation that can arise from lived experience. But so too may others outside the senior positions: in educational institutions, these include students, teachers and support staff, as well as community members and parents. Through personal configurations and ongoing interactions a dynamic *organisational configuration of authorities* is forged - that is, the organisation’s social authority. To illustrate what dynamic organisational configurations may look like, an insight is given into two schools with distributed types of leadership culture, drawing from published accounts. I indicate in these brief summaries where I believe the authorities in Table 1 can be seen.

The first is the account of a school developing a collegial and co-operative culture, given by its vice principal (Jones 2015). Successful co-operative activity in its school improvement groups (SIGs) is evident (p81), benefiting from co-operative legitimacy. The school mobilises experience (lived-experience authority) to support newer members of the SIGs, where more experienced teachers mentor less experienced colleagues (p80). Over time shifts in relative authority are seen to occur, as the mentees develop and start to challenge some of the ideas of the more experienced, leading to tensions and sometimes ‘genuinely-heated arguments between colleagues’ (p82). Willingness amongst teachers to make decisions and take risks (to exercise the democratic autonomy of interior authority) can be stalled, however, by a felt need to seek directions from the senior leadership (i.e. to rely on bureaucratic authority) and by external pressures (p81). A peer appraisal system ran into difficulties in maintaining confidence, as some preferred returning to ‘a hierarchical system where they could show their progress to someone who “mattered” ’ (p82). Individuals could be seen to accumulate bureaucratic authority through being adept at sticking to procedural rules and meeting deadlines, though Jones notes that this is not necessarily an indicator of contributing to transformational change (p83).
The feeling of belonging in groups - the traditional, communal authority identified by the idea of bonding (Field 2008: 36) - could be exclusionary as well as inclusive: some could ‘find it difficult to “break in” to existing partnerships and sub-groups… [or feel] “excluded”’, with some even rejecting a group ‘claiming that it was an elite team’ (Jones 2015: 81). The latter situation suggests that an amalgam of professional, technical and perhaps scientific authorities, founded in the authority of lived-experience, was built up in some teachers over time and led to their being set apart. Through this, a hierarchy of experiential and professional authority is produced. Sometimes ‘more experienced members could easily undermine the effort and initiatives of the less experienced who were not always valued unless supported by more senior participants’ (p81). In contrast, as noted above, it can be seen that such hierarchies may be disrupted. This occurs as the balance of claimed authority shifts through the accumulation of lived-experience and a greater sense of interior authority and democratic autonomy amongst those who began as less experienced. This is by no means an exhaustive analysis of this school’s social authority and its ongoing production. I would suggest, however, that it shows the social authority produced in this school as one in which co-operative legitimacy is privileged and promoted, and lived-experience and mutual enhancement of professional authority encouraged. Its growing pains and tensions include the appearance and challenging of micro-level experiential and professional hierarchies, as well as a persistence amongst some of the attraction of bureaucratic authority. The second account is a case study of teacher leadership in a school (Scribner and Bradley-Levine 2010). The researchers found a ‘logic that afforded some teachers more authority than others’ (p505). The administrative experience of one teacher in a previous setting was seen as giving him a certain legitimacy in dealing with disciplinary issues with students. That prior role was seen as one where he was a ‘pseudo-boss’ as one teacher put it, and this lent him a degree of authority in the new teacher leadership culture (p505). Here we can see both experiential and bureaucratic authority, borrowed from a previous setting, leading to a particular influence amongst teachers. That teacher was seen, for example, redirecting staff conversations in ways that reinforced the reform model that the school was participating in. Another teacher was seen as a commended, highly awarded teacher and hence as ‘the lead teacher’ (p506): this augmented professional authority gave him legitimacy to exercise influence in terms of the development of new curriculum content. Teachers’ subject knowledge was viewed by teachers as an important factor in the priority and respect they were given in co-developing new courses (p507-8). In each project of course development, the teacher possessing the subject knowledge relevant to that course exercised priority over the colleague who had ‘process’ expertise (in new technology to enhance student engagement). So each project might be said to have its own professional authority hierarchy. A gender-based difference in authority was also noted. The male gender was perceived to have an advantage in disciplinary matters with students. As one female teacher explained: ‘The guys will run the office if [the principal] is out… Absolutely nobody disagrees. 
In fact, we’ve even talked about it saying, “Thank God that we have three strong male figures in the school”… It’s more difficult for women to get respect from boys’ (p510). This exemplifies how authority is a social product that is accorded through shared practice. It also illustrates how ideas of co-leadership can be mediated by a particular form of communal authority - in this case, deeply held cultural assumptions concerning gender relationships that these teachers perceived as characteristic of the community in which the school was set and of family life. The female teachers were constructed and described as giving support (to the technology-based reform), rather than as leaders (p511-2). Although again not a comprehensive account, the case study offers an insight into the social authority produced by the dynamics of this school. It is one where the group of male teachers is accorded greater legitimacy as influencers and (non-positional) power holders, and this is generated through the amalgam of greater professional authority and the cultural authority accorded to males in discipline and control.

These examples illustrate that configurations of authority are contingent and not settled. To a great degree social authority is emergent and not amenable to top-down control. This does not mean that social authority is completely indeterminate. The case of the co-operative school shows how clear and agreed principles (in that case, co-operative ones) can play an important role in shaping what happens. It is also evident that social authority and the emergence of configurations of tributary authorities are not simply the product of cognitive or intellectual perspectives, debates and agreements. Emotions, personal experience and the aesthetic sense of what creates good or bad feelings in relation to the sources of authority play a strong part. They are inherent, defining features of communal authority, for example. Arguably, democratic authority and leadership are embedded in an aesthetic rationality in which mutual affirmation, an intrinsic concern for others and a sense of deep trust characterise the ideal-type of everyday action\(^3\). But even legal-rational authority does not achieve approval simply through explaining the value or importance of the assignment of defined posts in a hierarchical order. It may be underpinned by feelings of security and tradition, for example. For Brigstocke (2013: 108), ‘Authority works by establishing specific affective bonds between authorities and those who obey them. In order to understand how authority works as a technique of power, then, it is necessary to study the ways in which these affective relations are secured’.

**Concluding Remarks**

This article began with the observation that a much greater understanding of power and the practice of DL is needed. The idea of a social authority that is in perpetual production is one way of beginning to appreciate and analyse the complexities of power and how it is legitimated when attempts are made to make leadership more distributed. There are fundamental implications for anyone interested in DL. Firstly, to understand how DL is played out in different settings, it is necessary to come to grips with the configuration of complementary and competing authorities that characterise those settings. Understanding the types and forms of authority is essential. Setting them out systematically, as this article has sought to do, gives an indication of the range of possibilities.
Secondly, aspiring to the goal of fair and inclusive DL does not mean requiring that all equally share authority. Hierarchies may emerge, based on different forms of authority. Not all organisational members will or should be accorded equality in each form of authority. To evaluate the distribution of authorities, several issues are pertinent. For example, one is the ethical judgement made of the basis for an authority. Authority that reflects professional expertise may be judged as admissible, whereas a variety of other bases for distinguishing authority (gender, age, familial relationship with students, and so on) may be contested and raise issues for consideration and debate. Some may be unfairly excluded from being accorded or sharing fully in certain forms of authority. An example might be teachers whose professional expertise is recognised but judged to be less relevant or worthy than others, such as the ‘process’ expertise of teachers in the second school account above. Thirdly, following from the last point, awareness of the complexities of social authority helps in addressing the question of power and DL critically. It leads to questions such as: What authorities predominate? How and where is authority constructed and generated, and by whom? What forms of authority are considered ethically justifiable? It puts on the agenda alternative forms of authority, such as democratic authority and the interior authority of democratic autonomy, alongside familiar forms of rational and communal authority. It poses the question of where and whether the authority of exchange - especially the authority of market-type relations - does and should fit into an organisation’s social authority. The final implication is that everyone is involved in the ongoing production of authorities. Even if some are predominantly involved through reacting or deferring to others - and hence through acceding or withholding consent to authority - all are engaged in some way in determining who is included in or excluded from exercising authority and leadership.

1 http://www.authorityresearch.net

2 These ideas are set out in an unpublished working paper - Woods, P.A., The Sociality Grounding Democratic Leadership: Holarchic aesthetic rationality, 2015 - which is developing ideas first shared in Woods, P.A., The Struggle for the Soul of Leadership in Future Organisations: Aesthetic rationality and holistic well-being, a paper presented at the Focal Meeting of the World Educational Research Association, Edinburgh, UK, 19-21 November 2014.

3 This is discussed further in the working paper referred to in footnote 2.

References

Archer, M. S. (2003) *Structure, agency and the internal conversation*. Cambridge: Cambridge University Press.
Ball, S. J. (2006a) What is Policy? Texts, trajectories and toolboxes. In: Ball, S. J. *Education policy and social class: The selected works of Stephen J. Ball*. London: Routledge.
Ball, S. J. (2006b) The Teacher’s Soul and the Terrors of Performativity. In: Ball, S. J. *Education policy and social class: The selected works of Stephen J. Ball*. London: Routledge.
Blackmore, J. (1991) *Troubling Women*. Buckinghamshire: Open University Press.
Blencowe, C. (2013) Biopolitical authority, objectivity and the groundwork of modern citizenship. *Journal of Political Power* 6(1): 9-28.
Blencowe, C., Brigstocke, J. and Dawney, L. (2013) Authority and experience. *Journal of Political Power* 6(1): 1-7.
Brigstocke, J.
(2013) Immanent authority and the performance of community in late nineteenth century Montmartre. *Journal of Political Power* 6(1): 107-126.
Dawney, L. (2013) The figure of authority: the affective biopolitics of the mother and the dying man. *Journal of Political Power* 6(1): 29-47.
Dryzek, J. S. (2004) Democratic Political Theory. In: Gaus, G. S. and Kukathas, C. (eds) *Handbook of Political Theory*. London: Sage.
Duncan-Andrade, J. M. R. (2009) Note to Educators: Hope Required When Growing Roses in Concrete. *Harvard Educational Review* 79(2): 1-13.
Field, J. (2008) *Social Capital (Second Edition)*. Abingdon: Routledge.
Jones, S. (2015) Contrived Collegiality? Investigating the efficacy of co-operative teacher development. In: T. Woodin (ed) *Co-operation, Learning and Co-operative Values*. London: Routledge.
Kahane, D., Weinstock, D., Leydet, D. and Williams, M. (eds) (2010) *Deliberative Democracy in Practice*. Vancouver: UBC Press.
Kirwan, S. (2013) On the ‘inoperative community’ and social authority: a Nancean response to the politics of loss. *Journal of Political Power* 6(1): 69-86.
Lumby, J. (2013) Distributed Leadership: The Uses and Abuses of Power. *Educational Management Administration & Leadership* 41(5): 581-597.
Murray, J. (2012) Performativity cultures and their effects on teacher educators’ work. *Research in Teacher Education* 2(2): 19-23.
Norton, D. L. (1996) *Democracy and Moral Development: A Politics of Virtue*. Berkeley and Los Angeles, CA: University of California Press.
Rose, N. (1999) *Powers of Freedom: Reframing Political Thought*. Cambridge: Cambridge University Press.
Scribner, S. M. P. and Bradley-Levine, J. (2010) The meaning(s) of teacher leadership in an urban high school reform. *Educational Administration Quarterly* 46: 491-522.
Weber, M. (1978 [1956]) *Economy and Society*. Vols. I & II. Berkeley: University of California Press.
Woodin, T. (2015) *Co-operation, Learning and Co-operative Values*. London: Routledge.
Woods, P. A. (2003) Building on Weber to understand governance: exploring the links between identity, democracy and “inner distance”. *Sociology* 37(1): 143-163.
Woods, P. A. (2005) *Democratic Leadership in Education*. London: Sage.
Woods, P. A. (2010) Rationalisation, Disenchantment and Re-Enchantment: Engaging with Weber’s Sociology of Modernity. In: Apple, M., Ball, S. J. and Gandin, L. A. (eds) *International Handbook of the Sociology of Education*. London: Routledge.
Woods, P. A. (2011) *Transforming Education Policy: Shaping a Democratic Future*. Bristol: Policy Press.
Woods, P. A. and Roberts, A. (2016) Distributed Leadership and Social Justice: Images and meanings from different positions across the school landscape. *International Journal of Leadership in Education* 19(2): 138-156.
Woods, P. A. and Woods, G. J. (2013) Deepening distributed leadership: A democratic perspective on power, purpose and the concept of the self. *Vodenje v vzgoji in izobraževanju* (Leadership in Education) 2: 17-40. English language version online. Available at: <https://herts.academia.edu/PhilipWoods> (accessed 8 January 2016).
General Information

INTRODUCTION

These symbols, terms, and definitions are in accordance with those currently agreed upon by the JEDEC Council of the Electronic Industries Association (EIA) for use in the USA and by the International Electrotechnical Commission (IEC) for international use.

PART I — OPERATING CONDITIONS AND CHARACTERISTICS (INCLUDING LETTER SYMBOLS)

Clock Frequency

Maximum clock frequency, $f_{\text{max}}$ The highest rate at which the clock input of a bistable circuit can be driven through its required sequence while maintaining stable transitions of logic level at the output with input conditions established that should cause changes of output logic level in accordance with the specification.

Current

High-level input current, $I_{IH}$ The current into* an input when a high-level voltage is applied to that input.

High-level output current, $I_{OH}$ The current into* an output with input conditions applied that according to the product specification will establish a high level at the output.

Low-level input current, $I_{IL}$ The current into* an input when a low-level voltage is applied to that input.

Low-level output current, $I_{OL}$ The current into* an output with input conditions applied that according to the product specification will establish a low level at the output.

Off-state output current, $I_{O(\text{off})}$ The current flowing into* an output with input conditions applied that according to the product specification will cause the output switching element to be in the off state. Note: This parameter is usually specified for open-collector outputs intended to drive devices other than logic circuits.

Off-state (high-impedance-state) output current (of a three-state output), $I_{OZ}$ The current into* an output having three-state capability with input conditions applied that according to the product specification will establish the high-impedance state at the output.

Short-circuit output current, $I_{OS}$ The current into* an output when that output is short-circuited to ground (or other specified potential) with input conditions applied to establish the output logic level farthest from ground potential (or other specified potential).

Supply current, $I_{CC}$ The current into* the $V_{CC}$ supply terminal of an integrated circuit.

*Current out of a terminal is given as a negative value.

Hold Time

**Hold time, $t_h$** The interval during which a signal is retained at a specified input terminal after an active transition occurs at another specified input terminal.

NOTES: 1. The hold time is the actual time between two events and may be insufficient to accomplish the intended result. A minimum value is specified that is the shortest interval for which correct operation of the logic element is guaranteed. 2. The hold time may have a negative value in which case the minimum limit defines the longest interval (between the release of data and the active transition) for which correct operation of the logic element is guaranteed.

Output Enable and Disable Time

**Output enable time (of a three-state output) to high level, $t_{PZH}$ (or low level, $t_{PZL}$)**† The propagation delay time between the specified reference points on the input and output voltage waveforms with the three-state output changing from a high-impedance (off) state to the defined high (or low) level.
**Output enable time (of a three-state output) to high or low level, $t_{PZX}$**† The propagation delay time between the specified reference points on the input and output voltage waveforms with the three-state output changing from a high-impedance (off) state to either of the defined active levels (high or low).

**Output disable time (of a three-state output) from high level, $t_{PHZ}$ (or low level, $t_{PLZ}$)**† The propagation delay time between the specified reference points on the input and output voltage waveforms with the three-state output changing from the defined high (or low) level to a high-impedance (off) state.

**Output disable time (of a three-state output) from high or low level, $t_{PXZ}$**† The propagation delay time between the specified reference points on the input and output voltage waveforms with the three-state output changing from either of the defined active levels (high or low) to a high-impedance (off) state.

Propagation Time

**Propagation delay time, $t_{PD}$** The time between the specified reference points on the input and output voltage waveforms with the output changing from one defined level (high or low) to the other defined level.

**Propagation delay time, low-to-high-level output, $t_{PLH}$** The time between the specified reference points on the input and output voltage waveforms with the output changing from the defined low level to the defined high level.

**Propagation delay time, high-to-low-level output, $t_{PHL}$** The time between the specified reference points on the input and output voltage waveforms with the output changing from the defined high level to the defined low level.

†On older data sheets, similar symbols without the P subscript were used; i.e., $t_{ZH}$, $t_{ZL}$, $t_{HZ}$, and $t_{LZ}$.

Pulse Width

Pulse width, $t_W$ The time interval between specified reference points on the leading and trailing edges of the pulse waveform.

Recovery Time

Sense recovery time, $t_{SR}$ The time interval needed to switch a memory from a write mode to a read mode and to obtain valid data signals at the output.

Release Time

Release time, $t_{\text{release}}$ The time interval between the release from a specified input terminal of data intended to be recognized and the occurrence of an active transition at another specified input terminal. Note: When specified, the interval designated "release time" falls within the setup interval and constitutes, in effect, a negative hold time.

Setup Time

Setup time, $t_{su}$ The time interval between the application of a signal that is maintained at a specified input terminal and a consecutive active transition at another specified input terminal.

NOTES: 1. The setup time is the actual time between two events and may be insufficient to accomplish the setup. A minimum value is specified that is the shortest interval for which correct operation of the logic element is guaranteed. 2. The setup time may have a negative value in which case the minimum limit defines the longest interval (between the active transition and the application of the other signal) for which correct operation of the logic element is guaranteed.

Transition Time

Transition time, low-to-high-level, $t_{TLH}$ The time between a specified low-level voltage and a specified high-level voltage on a waveform that is changing from the defined low level to the defined high level.

Transition time, high-to-low-level, $t_{THL}$ The time between a specified high-level voltage and a specified low-level voltage on a waveform that is changing from the defined high level to the defined low level.
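Taken together, the setup, hold, and release definitions amount to a window check around the active clock transition. Below is a minimal illustrative sketch of that check in Python; the function name and the spec numbers are our own placeholders, not values from any data sheet. Negative minimum limits work unchanged, matching note 2 under both the setup and hold definitions.

```python
# Illustrative setup/hold window check; all times in ns. The spec minimums
# below are made-up placeholders, not data-sheet values.

def timing_ok(t_data_applied, t_data_released, t_clock_edge,
              t_su_min=20.0, t_h_min=5.0):
    """True if the data-valid window honours the minimum setup and hold
    times around the active clock transition. Negative minimums are
    allowed, as note 2 of the setup and hold definitions describes."""
    setup_actual = t_clock_edge - t_data_applied   # data applied -> edge
    hold_actual = t_data_released - t_clock_edge   # edge -> data released
    return setup_actual >= t_su_min and hold_actual >= t_h_min

print(timing_ok(0.0, 30.0, 25.0))   # True: 25 ns setup, 5 ns hold
print(timing_ok(0.0, 27.0, 25.0))   # False: only 2 ns of hold margin
```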
Voltage

High-level input voltage, $V_{IH}$ An input voltage within the more positive (less negative) of the two ranges of values used to represent the binary variables. NOTE: A minimum is specified that is the least positive value of high-level input voltage for which operation of the logic element within specification limits is guaranteed.

High-level output voltage, $V_{OH}$ The voltage at an output terminal with input conditions applied that according to the product specification will establish a high level at the output.

Input clamp voltage, $V_{IK}$ An input voltage in a region of relatively low differential resistance that serves to limit the input voltage swing.

Low-level input voltage, $V_{IL}$ An input voltage within the less positive (more negative) of the two ranges of values used to represent the binary variables. NOTE: A maximum is specified that is the most positive value of low-level input voltage for which operation of the logic element within specification limits is guaranteed.

Low-level output voltage, $V_{OL}$ The voltage at an output terminal with input conditions applied that according to the product specification will establish a low level at the output.

Negative-going threshold voltage, $V_{T-}$ The voltage level at a transition-operated input that causes operation of the logic element according to specification as the input voltage falls from a level above the positive-going threshold voltage, $V_{T+}$.

Off-state output voltage, $V_{O(\text{off})}$ The voltage at an output terminal with input conditions applied that according to the product specification will cause the output switching element to be in the off state. Note: This characteristic is usually specified only for outputs not having internal pull-up elements.

On-state output voltage, $V_{O(\text{on})}$ The voltage at an output terminal with input conditions applied that according to the product specification will cause the output switching element to be in the on state. Note: This characteristic is usually specified only for outputs not having internal pull-up elements.

Positive-going threshold voltage, $V_{T+}$ The voltage level at a transition-operated input that causes operation of the logic element according to specification as the input voltage rises from a level below the negative-going threshold voltage, $V_{T-}$.

PART II — CLASSIFICATION OF CIRCUIT COMPLEXITY

Gate Equivalent Circuit A basic unit of measure of relative digital-circuit complexity. The number of gate equivalent circuits is that number of individual logic gates that would have to be interconnected to perform the same function.

Large-Scale Integration, LSI A concept whereby a complete major subsystem or system function is fabricated as a single microcircuit. In this context a major subsystem or system, whether digital or linear, is considered to be one that contains 100 or more equivalent gates or circuitry of similar complexity.

Medium-Scale Integration, MSI A concept whereby a complete subsystem or system function is fabricated as a single microcircuit. The subsystem or system is smaller than for LSI, but whether digital or linear, is considered to be one that contains 12 or more equivalent gates or circuitry of similar complexity.

Small-Scale Integration, SSI Integrated circuits of less complexity than medium-scale integration (MSI).

Very-Large-Scale Integration, VLSI A concept whereby a complete system function is fabricated as a single microcircuit.
In this context, a system, whether digital or linear, is considered to be one that contains 1000 or more gates or circuitry of similar complexity.

EXPLANATION OF FUNCTION TABLES

The following symbols are now being used in function tables on TI data sheets:

\[
\begin{align*}
H &= \text{high level (steady state)} \\
L &= \text{low level (steady state)} \\
\uparrow &= \text{transition from low to high level} \\
\downarrow &= \text{transition from high to low level} \\
X &= \text{irrelevant (any input, including transitions)} \\
Z &= \text{off (high-impedance) state of a 3-state output} \\
a \ldots h &= \text{the level of steady-state inputs at inputs A through H respectively} \\
Q_0 &= \text{level of } Q \text{ before the indicated steady-state input conditions were established} \\
\overline{Q}_0 &= \text{complement of } Q_0 \text{ or level of } \overline{Q} \text{ before the indicated steady-state input conditions were established} \\
Q_n &= \text{level of } Q \text{ before the most recent active transition indicated by } \downarrow \text{ or } \uparrow \\
\sqcap &= \text{one high-level pulse} \\
\sqcup &= \text{one low-level pulse} \\
\text{TOGGLE} &= \text{each output changes to the complement of its previous level on each active transition indicated by } \downarrow \text{ or } \uparrow.
\end{align*}
\]

If, in the input columns, a row contains only the symbols H, L, and/or X, this means the indicated output is valid whenever the input configuration is achieved and regardless of the sequence in which it is achieved. The output persists so long as the input configuration is maintained. If, in the input columns, a row contains H, L, and/or X together with \(\uparrow\) and/or \(\downarrow\), this means the output is valid whenever the input configuration is achieved but the transition(s) must occur following the achievement of the steady-state levels. If the output is shown as a level (H, L, $Q_0$, or $\overline{Q}_0$), it persists so long as the steady-state input levels and the levels that terminate indicated transitions are maintained. Unless otherwise indicated, input transitions in the opposite direction to those shown have no effect at the output. (If the output is shown as a pulse, $\sqcap$ or $\sqcup$, the pulse follows the indicated input transition and persists for an interval dependent on the circuit.)

Among the most complex function tables in this book are those of the shift registers. These embody most of the symbols used in any of the function tables, plus more. Below is the function table of a 4-bit bidirectional universal shift register, e.g., type SN74194.
| CLEAR | S1 | S0 | CLOCK | SERIAL LEFT | SERIAL RIGHT | A | B | C | D | $Q_A$ | $Q_B$ | $Q_C$ | $Q_D$ |
|-------|----|----|-------|-------------|--------------|---|---|---|---|-------|-------|-------|-------|
| L | X | X | X | X | X | X | X | X | X | L | L | L | L |
| H | X | X | L | X | X | X | X | X | X | $Q_{A0}$ | $Q_{B0}$ | $Q_{C0}$ | $Q_{D0}$ |
| H | H | H | ↑ | X | X | a | b | c | d | a | b | c | d |
| H | L | H | ↑ | X | H | X | X | X | X | H | $Q_{An}$ | $Q_{Bn}$ | $Q_{Cn}$ |
| H | L | H | ↑ | X | L | X | X | X | X | L | $Q_{An}$ | $Q_{Bn}$ | $Q_{Cn}$ |
| H | H | L | ↑ | H | X | X | X | X | X | $Q_{Bn}$ | $Q_{Cn}$ | $Q_{Dn}$ | H |
| H | H | L | ↑ | L | X | X | X | X | X | $Q_{Bn}$ | $Q_{Cn}$ | $Q_{Dn}$ | L |
| H | L | L | X | X | X | X | X | X | X | $Q_{A0}$ | $Q_{B0}$ | $Q_{C0}$ | $Q_{D0}$ |

(S1 and S0 are the mode inputs; LEFT and RIGHT are the serial inputs; A through D are the parallel inputs.)

The first line of the table represents an asynchronous clearing of the register and says that if clear is low, all four outputs will be reset low regardless of the other inputs. In the following lines, clear is inactive (high) and so has no effect. The second line shows that so long as the clock input remains low (while clear is high), no other input has any effect and the outputs maintain the levels they assumed before the steady-state combination of clear high and clock low was established. Since on other lines of the table only the rising transition of the clock is shown to be active, the second line implicitly shows that no further change in the outputs will occur while the clock remains high or on the high-to-low transition of the clock. The third line of the table represents synchronous parallel loading of the register and says that if S1 and S0 are both high then, without regard to the serial input, the data entered at A will be at output $Q_A$, data entered at B will be at $Q_B$, and so forth, following a low-to-high clock transition. The fourth and fifth lines represent the loading of high- and low-level data, respectively, from the shift-right serial input and the shifting of previously entered data one bit; data previously at $Q_A$ is now at $Q_B$, the previous levels of $Q_B$ and $Q_C$ are now at $Q_C$ and $Q_D$ respectively, and the data previously at $Q_D$ is no longer in the register. This entry of serial data and shift takes place on the low-to-high transition of the clock when S1 is low and S0 is high and the levels at inputs A through D have no effect. The sixth and seventh lines represent the loading of high- and low-level data, respectively, from the shift-left serial input and the shifting of previously entered data one bit; data previously at $Q_B$ is now at $Q_A$, the previous levels of $Q_C$ and $Q_D$ are now at $Q_B$ and $Q_C$, respectively, and the data previously at $Q_A$ is no longer in the register. This entry of serial data and shift takes place on the low-to-high transition of the clock when S1 is high and S0 is low and the levels at inputs A through D have no effect. The last line shows that as long as both mode inputs are low, no other input has any effect and, as in the second line, the outputs maintain the levels they assumed before the steady-state combination of clear high and both mode inputs low was established. (A short behavioural sketch of this function table in code follows the parameter-measurement notes below.)

SERIES 54/74, 54H/74H, 54S/74S, AND SPECIFIED† SERIES 54L/74L DEVICES
PARAMETER MEASUREMENT INFORMATION

[Figures: load circuit for bi-state totem-pole outputs; load circuit for open-collector outputs; load circuit for three-state outputs]

NOTES: A. $C_L$ includes probe and jig capacitance. B. All diodes are 1N916 or 1N3064.
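As flagged above, here is a minimal behavioural sketch of the '194 function table in Python. It is illustrative only, not TI reference code: the function and argument names are ours, each call models one low-to-high clock transition, and the asynchronous clear and hold conditions are folded into the same call.

```python
# Behavioural sketch of the SN74194 function table above. Illustrative
# only; names are ours. State q is [QA, QB, QC, QD] with 0 = L, 1 = H.

def sn74194_step(q, clear_n, s1, s0, sr_ser=0, sl_ser=0, par=None):
    """Return the next [QA, QB, QC, QD] after one low-to-high clock
    transition (clear_n is the active-low, asynchronous clear)."""
    if clear_n == 0:                       # line 1: clear overrides all
        return [0, 0, 0, 0]
    if s1 == 1 and s0 == 1:                # line 3: parallel load a,b,c,d
        return list(par)
    if s1 == 0 and s0 == 1:                # lines 4-5: shift right
        return [sr_ser, q[0], q[1], q[2]]  # serial -> QA, old QD is lost
    if s1 == 1 and s0 == 0:                # lines 6-7: shift left
        return [q[1], q[2], q[3], sl_ser]  # serial -> QD, old QA is lost
    return list(q)                         # line 8: both mode inputs low

# Example: load 1011, then shift right with a high serial bit.
q = sn74194_step([0, 0, 0, 0], 1, 1, 1, par=[1, 0, 1, 1])  # [1, 0, 1, 1]
q = sn74194_step(q, 1, 0, 1, sr_ser=1)                     # [1, 1, 0, 1]
print(q)
```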
[Figures: voltage waveforms for timing and data inputs; setup and hold times; pulse widths; propagation delay times; enable and disable times of three-state outputs]

NOTES: C. Waveform 1 is for an output with internal conditions such that the output is low except when disabled by the output control. Waveform 2 is for an output with internal conditions such that the output is high except when disabled by the output control. D. In the examples above, the phase relationships between inputs and outputs have been chosen arbitrarily. E. All input pulses are supplied by generators having the following characteristics: PRR ≤ 1 MHz, $Z_{\text{out}} \approx 50\ \Omega$, and: for Series 54/74 and 54H/74H, $t_r \leq 7$ ns, $t_f \leq 7$ ns; for Specified† Series 54L/74L devices, $t_r \leq 10$ ns, $t_f \leq 10$ ns; for Series 54S/74S, $t_r \leq 2.5$ ns, $t_f \leq 2.5$ ns. F. When measuring propagation delay times of 3-state outputs, switches S1 and S2 are closed.

† 'L42, 'L43, 'L44, 'L46, 'L47, 'L75, 'L77, 'L96, 'L121, 'L122, 'L123, 'L153, 'L154, 'L157, 'L164

SERIES 54LS/74LS AND MOST† SERIES 54L/74L DEVICES
PARAMETER MEASUREMENT INFORMATION

NOTES: A. $C_L$ includes probe and jig capacitance. B. All diodes are 1N916 or 1N3064. C. C1 (30 pF) is used for testing Series 54L/74L devices only.

[Figures: voltage waveforms for setup and hold times; pulse widths; propagation delay times; enable and disable times of three-state outputs]

NOTES: D. Waveform 1 is for an output with internal conditions such that the output is low except when disabled by the output control. Waveform 2 is for an output with internal conditions such that the output is high except when disabled by the output control. E. In the examples above, the phase relationships between inputs and outputs have been chosen arbitrarily. F. All input pulses are supplied by generators having the following characteristics: PRR ≤ 1 MHz, $Z_{\text{out}} \approx 50\ \Omega$, and: for Series 54L/74L gates and inverters, $t_r \leq 60$ ns, $t_f \leq 60$ ns; for Series 54L/74L flip-flops and MSI, $t_r \leq 25$ ns, $t_f \leq 25$ ns; for Series 54LS/74LS, $t_r \leq 15$ ns, $t_f \leq 6$ ns. G. When measuring propagation delay times of 3-state outputs, switches S1 and S2 are closed.

† Except 'L42, 'L43, 'L44, 'L46, 'L47, 'L75, 'L77, 'L96, 'L121, 'L122, 'L123, 'L153, 'L154, 'L157, 'L164
THE CONTROL STRATEGY RESEARCH ON TWO KINDS OF TOPOLOGICAL PULSED POWER SUPPLY

Shi Chunfeng #, GUCAS, Beijing, 100049, China; IMP, Lanzhou, China
Gao Daqing, Huang Yuzhen, Zhou Zhongzu, Yan Huaihai, IMP, Lanzhou, 730000, China

Abstract

This paper introduces a kind of pulsed power supply used at HIRFL-CSR, analyzes the ripple and current error of the quadrupole magnet power supply during operation, and presents a two-stage topology of pulsed power supply. The control method is simulated, and the results show that the new topology makes up for the deficiencies of the existing pulsed power supply and that the main circuit structure and control method are feasible.

INTRODUCTION

HIRFL-CSR, consisting of HIRFL (Heavy Ion Research Facility in Lanzhou) and CSR (Cooling Storage Ring), is currently the highest-energy heavy-ion research facility in China. With the development of modern accelerators, beam quality and stability become ever more important, so power supplies are required to have faster response, smaller tracking error and smaller current ripple. The magnet power supply system contains dipole, quadrupole, sextupole and correction power supplies, among others. The quadrupole and sextupole power supplies have the same topology and control mode. With long-term operation and aging, performance parameters such as current ripple and tracking error limit further improvement of the beam quality. For these reasons, a two-stage power supply is studied. With a voltage pre-regulating stage in front of the H-bridge chopper [1], the new topology decreases the current ripple and tracking error. As Figure 1 shows, the inside of the dashed box is the voltage pre-regulating stage; the rest is the H-bridge chopper (whose actively modulated switches are S1 and S4, as described below), the filter formed by R2, C3 and C4 [2], and the inductive load.

H-BRIDGE CHOPPER OPERATIONAL PRINCIPLE

As Figure 1 shows, removing the part inside the dashed box leaves the topology of the power supply currently running on CSR. The circuit adopts the PWM (pulse width modulation) control method, and the modulated pulses are produced by comparing a triangular carrier with two error signals. Both error signals come from the current PI regulator and have the same amplitude but opposite signs. During operation, switches S1 and S4 are modulated, and the power supply outputs pulsed or DC current. The relation between the driving pulses of the two switches is shown in Figure 2. TR1 and TR4 are the PWM control signals of switches S1 and S4 respectively. The duty ratio of a single switch ranges from 0 to 100% [3]. When the ratio is greater than 50%, S1 and S4 have a common conduction time TR (see Figure 2). The effective duty ratio in one carrier period Ts is 2·TR, as formula (1) shows.

\[ 2 \times TR = 2(TR1 - 50\%) = 2(TR4 - 50\%) \quad (1) \]

![Figure 2: Driving waveforms of the H-bridge chopper.](image)

NEW TOPOLOGY OPERATIONAL PRINCIPLE

The two-stage power supply consists of the voltage pre-regulating stage and the H-bridge chopper. The operational principle of the H-bridge chopper is described above. The control loop of the pre-regulating stage is a P regulator with a proportional component only, so compared with the current PI regulator of the H-bridge chopper, the P regulator responds faster. The pre-regulating stage is controlled by the voltage U2 of the capacitor C2, and the reference voltage (Uref) is obtained from the given current Iref and its derivative, as formula (2) shows.
\[ U_{\text{ref}} = 0.5 \left( L \frac{dI_{\text{ref}}}{dt} + R\, I_{\text{ref}} \right) / TR \quad (2) \]

The difference between $U_{\text{ref}}$ and $U_2$ is fed into the P regulator, and comparing the regulator output with the triangular carrier forms the driving pulse, which controls S3. This control strategy keeps the voltage on capacitor C2 nearly consistent with the load voltage while responding faster than the load voltage changes. This way of generating the voltage reference makes it possible to maintain the rated voltage on the energy-storage capacitor C2 through closed-loop regulation even when the AC supply fluctuates.

**RELATION OF DUTY RATIO AND RIPPLE**

The relation between the duty ratio D of a Buck circuit and the output ripple \( \Delta V \) is given by formula (3) [4].

\[ \Delta V = \frac{U_0(1-D)}{8LCf^2} \quad (3) \]

where L and C are the freewheeling inductance and filter capacitance of the Buck circuit respectively, \( U_0 \) is the load voltage, and f is the frequency of the triangular carrier. The ripple of the Buck circuit therefore becomes smaller as the duty ratio increases. In terms of the effective duty ratio TR of the H-bridge chopper, the chopping circuit of switches S1 and S4 is equivalent to two Buck circuits in series. The relation between the effective duty ratio TR and the duty ratios TR1, TR4 of the single switches is shown in formula (1).

Figure 3 shows the pulsed current and voltage waveforms of the dipole power supply in the HIRFL-CSRm [5]. If this power supply used the H-bridge chopper structure with an unadjustable preceding DC voltage E, the theoretical effective duty ratios 2·TR of the chopper during the rising and flat-top sections of the current would be 1086/E (maximum) and 587/E respectively. If the duty ratio of a single chopper switch is limited to a maximum of 90%, formula (1) gives a maximum effective duty ratio 2·TR = 80% (reached during the rising section of the pulsed current); the minimum DC voltage E is then 1086 V/0.8 = 1357.5 V, and the effective duty ratio of the flat section of the pulse is 2·TR = 587/1357.5 ≈ 43%. Substituting 80% and 43% for D = 2·TR in formula (3) shows that the ripple of the flat section is 2.85 times that of the rising section. With the two-stage topology, however, the voltage of C2 follows the load voltage through the regulation of switch S3, so the pulsed current keeps a large effective duty ratio at the flat section as well, reducing its ripple. In accelerator physics, the flat section of the pulsed current is used for acceleration, injection and extraction; the flat-top current is therefore extremely important for the beam quality.

![Figure 3: Pulsed current and voltage waveforms of the dipole power supply in HIRFL-CSRm.](image)

**SIMULATION RESULTS OF THE TWO TOPOLOGIES**

Using the simulation software SIMPLORER, the ripple and tracking error of the two topologies are simulated and compared. The main-circuit parameters are identical: DC supply E = 200 V, energy-storage capacitance C1 = C2 = 20000 μF, inductive load L = 120 mH, R = 25 mΩ, switching frequency f = 10 Hz, pulsed-current period 1.2 s, current rise and fall rate 1500 A/s, flat-top current 600 A. With identical main-circuit parameters, optimum settings are obtained by tuning the regulators of the two control loops separately.
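Before looking at the simulation traces, a quick numeric check of formulas (1) and (3) reproduces the duty-ratio and ripple-ratio arithmetic above. This is an illustrative sketch, not code from the paper; it follows the paper's comparison, in which only the (1 − D) factor differs between the two operating points, and the function names are ours.

```python
# Numeric check of formulas (1) and (3); illustrative only, names are ours.

def effective_duty(tr_single):
    """Formula (1): effective duty ratio 2*TR of the H-bridge chopper,
    given the duty ratio of a single switch (TR1 = TR4, above 50%)."""
    return 2.0 * (tr_single - 0.5)

def buck_ripple(u0, duty, L, C, f):
    """Formula (3): peak-to-peak output ripple of a Buck stage."""
    return u0 * (1.0 - duty) / (8.0 * L * C * f ** 2)

print(effective_duty(0.90))   # 0.8 -- the 90% single-switch limit in the text

# Ripple comparison as done in the text: only the (1 - D) factor is varied
# between the flat section (D = 43%) and the rising section (D = 80%).
d_rise, d_flat = 0.80, 0.43
print(round((1 - d_flat) / (1 - d_rise), 2))   # 2.85, matching the paper
```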
![Figure 4: Output-current and regulator waveforms of the two power supplies.](image)

As Figure 4 shows, curves (1) and (2) are the output current of the existing pulsed power supply (scaled down 60 times from reality) and the output signal of its regulator LIM1, respectively; curves (3) and (4) are the output current of the new power supply (scaled down 60 times from reality) and the output signal of its regulator LIM2, respectively. The relation between the regulator output signal and the duty ratio of the switches TR1, TR4 is as follows:

\[ TR_x = \frac{10 + LIM}{20}, \quad x = 1, 4 \quad (4) \]

LIM1 changes considerably between the rising section and the flat-top section of the pulsed current, while LIM2 changes little between these sections. The falling section of the current is the energy-discharge section of the load, which is uncontrollable, so the two regulator signals behave almost identically there. At the current flat-top, LIM1 = 1.5 and LIM2 = 8.5; the switch duty ratios at the flat-top are obtained by substituting LIM1 and LIM2 into formula (4), and combining formulas (1) and (3) shows that the flat-top ripple of the existing supply is 5.7 times that of the new power supply. Figure 5 shows the tracking error of the current, scaled down 100 times from the real value. Curve (5) is the tracking error of the existing power supply, and curve (6) is that of the new power supply. The comparison shows that the tracking error of the new power supply is better than that of the existing one over the whole pulse period: its value is reduced by a factor of two or more, and the two spikes that appeared at the turning points of the flat-top merge into one.

**CONCLUSION**

This paper compares and analyses the two power supply topologies in terms of the performance parameters ripple and tracking error, from theory and simulation respectively. The result is that the two-stage power supply can make up for the shortcomings of the existing power supply, so mastering the control method of the new topology is necessary. Although the new topology and its control method are more complicated than those of the existing power supply, they build on existing power supply technology and at the same time provide the theory and techniques for upgrading the power supplies of HIRFL-CSR and of heavy-ion therapy systems.

![Figure 5: Curve of the current tracking error.](image)

**REFERENCES**

[1] Gao Daqing, Wu Rong, Zhou Zhongzu, et al. Research and Design of HIRFL-CSR Pulsed Switching Power Supply. *Power Electronics*, 2003, 37(2): 15–16 (in Chinese).
[2] Zheng Ji. Power Semiconductor DC Stabilized Power Supply [M]. Beijing: Machinery Industry Press, 1984: 96 (in Chinese).
[3] Huang Yuzhen, Chen Youxin, Zhou Zhongzu, et al. Research and Design of Digital Power Supply for HIRFL-CSR Sextupole Magnet [J]. *Nuclear Physics Review*, 2011, 28(3): 296 (in Chinese).
[4] Zhang ZhanSong. The Principle and Design of Switching Power Supply [M]. Beijing: Publishing House of Electronics Industry, 1998: 13 (in Chinese).
[5] Wang Jinjun. Digital Power Supply for Accelerator Researching and Design. Institute of Modern Physics, Chinese Academy of Sciences, Doctoral Thesis, 2010 (in Chinese).
Using isotope labeling to partition sources of CO$_2$ efflux in newly established mangrove seedlings

Xiaoguang Ouyang, Shing Yip Lee, Rod M. Connolly
Australian Rivers Institute – Coast and Estuaries, and School of Environment, Griffith University, Southport, Queensland, Australia

Abstract

Carbon dioxide (CO$_2$) flux is a critical component of the global C budget. While CO$_2$ flux has been increasingly studied in mangroves, better partitioning of the components contributing to the overall flux will be useful in constraining C budgets. Little information is available on how CO$_2$ flux may vary with forest age and conditions. We used a combination of $^{13}$C stable isotope labeling and closed chambers to partition CO$_2$ efflux from seedlings of the widespread mangrove *Avicennia marina* in laboratory microcosms, with a focus on sediment CO$_2$ efflux in establishing forests. We showed that (1) the above-ground parts of plants were the chief component of overall CO$_2$ efflux; and (2) the degradation of sediment organic matter was the major component of sediment CO$_2$ efflux, followed by root respiration and litter decomposition, as determined using isotope mixing models. There was a significant relationship between the C isotope values of CO$_2$ released at the sediment–air interface and both root respiration and sediment organic matter decomposition. These relative contributions of different components to overall and sediment CO$_2$ efflux can be used in partitioning the sources of overall respiration and sediment C mineralization in establishing mangroves.

Mangroves contain variably thick organic sediments and are among the most carbon (C) rich forests (Donato et al. 2011; Sanders et al. 2016). The high C accumulation capacity of mangroves has been recognized, and termed “blue C,” along with that of saltmarsh and seagrasses (Mcleod et al. 2011; Duarte et al. 2013; Ouyang and Lee 2014). However, studies of mangrove carbon dioxide (CO$_2$) flux vary in the precision of their partitioning. CO$_2$ flux in mangroves may originate from the canopy, woody debris, roots, litter and sediment organic matter (SOM), and is collectively called ecosystem respiration ($E_e$), which has usually been studied separately as canopy respiration (above-ground parts, $E_c$) and sediment respiration (the other components, $E_s$). Mangrove organic material such as leaf litter, if not exported, becomes incorporated in the sediment through decay and is chemically modified by microbes inhabiting the mangrove forest floor (Kristensen et al. 2008). In contrast to the intensively studied and relatively established pattern of C exchange between mangroves and nearshore ecosystems (Lee 1995), the pattern of C gas flux released from mangrove sediment is less clear, although there is increasing interest in this topic and in C gas flux at the ecosystem scale (Lovelock 2008; Barr et al. 2010; Chen et al. 2010, 2012; Livesley and Andrusiak 2012; Barr 2013; Leopold et al. 2013, 2015, 2016; Bulmer et al. 2015). A key but poorly known aspect is the partitioning of $E_e$ attributable to various components, i.e., roots, litter, and SOM (including the microphytobenthos). Laboratory microcosms have been used effectively in studies of mangrove energy pathways. For example, Bui and Lee (2014) evaluated the relative contributions of organic matter from mangrove leaf litter and sediment to the diet of crabs via laboratory microcosms. Zhu et al. (2014) conducted a microcosm study to investigate the fate of two abundant congeners in polluted mangrove sediment.
We use laboratory microcosms to partition different sources of $E_e$, and in particular $E_s$. The microcosms emulate field conditions: seedlings and sediments are collected from mangrove forests, and the seedlings are then grown in the sediments. The study expands the horizon of current studies (e.g., Lovelock et al. 2015), which measure the portions of $E_c$ in mature mangroves and do not completely partition $E_s$.

Isotopic ($\delta^{13}$C) values can be used to distinguish photosynthetic pathways, shifts of vegetation and C sources supporting food chains (O’Leary 1981; Ouyang et al. 2015). Further, there is evidence that $\delta^{13}$C values can differ among mangrove tissues, although no consistent patterns of variation have yet been demonstrated (Bouillon et al. 2008a). There is also evidence that the SOM pool in mangroves is consistently enriched in $^{13}$C relative to mangrove litter in sites where litter was expected to be the sole input (Lallier-Verges et al. 1998). This difference is likely due to a rise in microbial and fungal residues (Ehleringer et al. 2000). However, the $\delta^{13}$C values of mangrove live tissues and litter are usually not distinguished. Boon et al. (1997) documented that the pneumatophores of *Avicennia*, a widely distributed species, were on some occasions depleted in $^{13}$C relative to leaves by up to 3.1‰, while Vane et al. (2013) stated that the difference between leaves and pneumatophores was < 2‰. Rao et al. (1994) noted little difference in $\delta^{13}$C values (< 1‰) between fresh and senescent leaves for five tree species of Kenyan mangroves, but for four other species, senescent leaves were significantly depleted relative to fresh ones. However, Lee (2000) suggested that the direction and magnitude of this difference were opposite. Natural C isotope signals, therefore, may not be able to differentiate sources from roots and litter, suggesting that isotopic labeling might be preferable. The enriched $^{13}$C isotope technique has been used to identify food sources with similar $^{13}$C signatures in food web research, to overcome the drawback of natural $^{13}$C (Lee et al. 2011), and has been used in other ecosystems (Galván et al. 2008; Luo and Zhou 2010; Lee et al. 2012; Oakes et al. 2012). Similarly, it may be applicable to partitioning the sources of CO$_2$ flux if combined with the closed chamber technique (Luo and Zhou 2010; Ouyang et al. 2017), which has been used to measure CO$_2$ flux. Microcosms have an advantage over field experiments, in which it is difficult to perform isotopic enrichment of leaf litter and sediments.

It is suggested that a relatively low proportion of the organic matter in leaves of *Avicennia* is lost by leaching, while most of the labile portion is present as non-leachable but easily decayed organic material. *Avicennia* leaves tend to decay through microbial action rather than crab consumption (Robertson 1988). Although decomposition rates of mangrove litter vary (Lee 1999), much of the important biochemical action occurs relatively quickly, with a half-life of just 10.5 d for *Avicennia* (Sessegolo and Lana 1991). The relatively short half-life for *Avicennia* has been attributed to lower tannin content and higher initial N concentrations (Alongi 2009). Hence, only a short time is needed to investigate the components of $E_s$ attributable to *Avicennia* leaf litter and its fraction incorporated into the sediment.
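Only a rough check is needed to see why: assuming simple first-order decay with the 10.5 d half-life cited above, almost nothing of the first litterfall remains after two months. The sketch below is our own back-of-envelope illustration, not an analysis from the studies cited.

```python
# First-order decay of Avicennia leaf litter with the 10.5 d half-life
# reported by Sessegolo and Lana (1991). Illustrative arithmetic only.

def litter_remaining(days, half_life_d=10.5):
    """Fraction of the initial litter mass remaining after `days`."""
    return 0.5 ** (days / half_life_d)

for t in (10.5, 30, 60, 90):
    print(f"{t:5.1f} d: {litter_remaining(t):.1%} remaining")
# ~1.9% remains after 60 d, so a 2-3 month window captures essentially
# the complete decay of the first litterfall.
```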
This study aims to distinguish $E_c$ and $E_s$, and focuses on partitioning $E_s$ attributable to different components using laboratory microcosms. As $E_s$ occurs at the sediment–air interface, tides were not set as a controlling factor in our laboratory microcosms. $^{13}$C enrichment combined with the closed chamber technique was used to partition different sources of CO$_2$ efflux in microcosms with *Avicennia marina* seedlings simulating newly established stands. Our proposed method has the advantage of partitioning $E_s$ without disturbing the sediment, compared with directly measuring the different components of $E_s$, e.g., the measurement of root respiration from detached roots (Lovelock et al. 2015).

**Experimental materials and methods**

**Laboratory microcosms**

Seeds of *A. marina* (a cryptoviviparous species) and sediments were collected in June 2015 from the mangrove forest on Tallebudgera Creek (28°6′22″S, 153°26′49″E) in southeast Queensland, Australia. The developing seedlings comprise cotyledons with fine roots on one side but no branching stems. Ninety healthy seeds were picked and planted in six glass chambers (40 × 30 × 50 cm) containing local sediment to a depth of 10 cm (*see* Fig. 1) and maintained at 24°C (~ mean local ambient temperature) under fluorescent lighting in a constant-temperature room. Another chamber contained only sediment, without seedlings, established for the measurement of $E_{SOM}$. The sediments were collected from the mangrove forest where the seedlings grew, mixed, and then put in the chambers. The initial volumetric water content of the sediment was 32.6% ± 4.1% (mean ± SD), and the sediment chlorophyll *a* concentration was 845.7 ± 212.4 μg L$^{-1}$ (mean ± SD). Seawater was collected near the mangrove forest and added to each chamber in equal quantities every 2 d to keep the sediment moist but not flooded. After addition, the water either evaporated, or percolated through the sediment and could be absorbed by the seedlings for growth. After 1 month, when leaves grew out of the cotyledons, polypropylene nets (1 cm mesh size) were hung in three of the chambers (over the sediment but under the cotyledons) to collect leaf litter. No netting was set in the other three chambers. This net design prevented incorporation of leaf litter into the sediment, thus allowing separation of the contribution of leaf litter from $E_s$. When the seedlings had 4–6 leaves, by August 2015, they were enriched with $^{13}$C using methods modified from Bui and Lee (2014) and Bromand et al. (2001). A bottle containing 25 mL of 1 M NaH$^{13}$CO$_3$ (99 atom% $^{13}$C, Cambridge Isotope Laboratories) was put in each chamber before the chamber lid was tightly sealed. One milliliter of 1 M HCl was added to the bottle every 2 d for 45 d through a glass pipette passing through the lid of the chamber to generate $^{13}$CO$_2$ in situ. A small fan ($D = 8$ cm) was turned on for 30 min after each addition of acid to promote even dispersion of $^{13}$CO$_2$ within the growth chamber. By the end of the experiment, the seedlings had grown to near the top of the chambers, ~ 40 cm in height, with stem diameters of 0.5 cm. After sampling at the end of the experiment, the plants were dug up; the roots were found to have grown to the bottom of the chambers, and some roots continued to extend horizontally in the sediments.

**Fig. 1.** Experimental setup in the three stages of the experiment. ① net, ② fan, ③ NaH$^{13}$CO$_3$ solution, ④ leaf litter, ⑤ HCl solution added through glass pipette.
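A rough stoichiometric budget (our own arithmetic, not the authors') confirms the labeling scheme is self-consistent: each 1 mL addition of 1 M HCl liberates at most 1 mmol of $^{13}$CO$_2$, since the bicarbonate-acid reaction is 1:1, and the ~23 additions over 45 d stay within the 25 mmol of NaH$^{13}$CO$_3$ supplied.

```python
# Rough budget check of the in situ 13CO2 generation described above.
# Quantities come from the text; the check itself is our own.

reservoir_mmol = 25 * 1.0          # 25 mL x 1 M NaH13CO3 = 25 mmol
acid_mmol_per_addition = 1 * 1.0   # 1 mL x 1 M HCl = 1 mmol per addition
n_additions = 45 // 2 + 1          # every 2 d for 45 d -> 23 additions

# NaHCO3 + HCl -> NaCl + H2O + CO2 (1:1), so the acid limits each release.
total_co2_mmol = n_additions * acid_mmol_per_addition
print(f"{total_co2_mmol} mmol CO2 generated of {reservoir_mmol} mmol available")
```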
**Sample collection and analysis**

In October 2015, samples were collected from the chambers to partition $E_e$ and $E_s$, and to partition the different sources of $E_s$. An SBA-5 gas analyzer (PP Systems, U.S.A.) was employed to measure CO$_2$ efflux, with a rotary pump allowing air circulation within the closed loop. First, CO$_2$ from the closed chambers was collected in 12 mL borosilicate vacutainers (Labco Limited, UK), followed by measurement of CO$_2$ efflux. Likewise, CO$_2$ from two plots (replicates) of sediment in each chamber was collected with 200 mL containers. The containers were inserted into the sediment and left in place for 10 min before gas collection. $E_s$ from each replicate was measured before a closed container was inserted into the sediment, where it remained for 20 min. A pooled sample of new live roots from the seedlings was then collected and frozen immediately. A pooled sample of leaf litter from the chambers without nets was collected and sealed in 5 mL polystyrene screw-cap vials. Similarly, a pooled sample of the top sediment, from 0 cm to 5 cm below the bottom of the litter layer, was collected from the chambers. The root samples were dried at 70°C for 48 h and then ground to pass through a 0.86 mm sieve. The $\delta^{13}$C values of CO$_2$ were measured by cavity ring-down spectrometry (CRDS) at James Cook University, Queensland, Australia. The dried litter and sediment samples were individually ground to pass through a 20-mesh sieve for $^{13}$C isotopic analysis. Sediment samples were acidified with 1 M HCl until effervescence ceased, to remove carbonates. The $\delta^{13}$C values of mangrove roots, litter, and sediment were analyzed by the Stable Isotope Laboratory, Griffith University.

**Methods and principles of CO$_2$ efflux partitioning**

$E_e$ is composed of $E_c$ and $E_s$. $E_s$ consists of CO$_2$ efflux from root respiration ($E_r$) and from decomposition of litter ($E_l$) and SOM ($E_{SOM}$) (Fig. 2). The closed chamber technique was used to partition $E_c$ and $E_s$, with $E_e = E_c + E_s$. Nets were set in three of the six chambers to collect leaf litter, allowing $\delta^{13}$C to partition sediment CO$_2$ into $E_r$ and $E_{SOM}$. The difference in CO$_2$ efflux between sediments in chambers with and without nets, measured by the closed chamber technique, provided an estimate of $E_l$. The labeling experiment exposed the above-ground portion of seedlings to the $^{13}$C-labeled tracer inside the glass chambers. Photosynthesis incorporates $^{13}$C-labeled CO$_2$ into carbohydrate immediately following exposure. Over time, the labeled carbohydrate within labile C pools is used for respiration, assimilated into structural plant tissues through growth, allocated to the rhizosphere, and transferred to SOM. Samples of mangrove tissues, sediment, and respired CO$_2$ were collected for the analysis of $\delta^{13}$C to trace the fate of labeled C. Relative quantities of $^{13}$C were employed to show the partitioning of photosynthetically fixed C into various functional processes, on the grounds of the mass conservation principle. $E_e$ and $E_s$ of each chamber were combined to partition $E_c$. Meanwhile, an isotope mixing model was used to estimate the average $\delta^{13}$C of $E_c$.
$$\delta^{13}C_{en} = f_{sn}\,\delta^{13}C_{sn} + f_{cn}\,\delta^{13}C_{cn} \quad (1)$$

$$f_{sn} = \frac{E_{sn}}{E_{en}} \quad (2)$$

$$f_{sn} + f_{cn} = 1 \quad (3)$$

where $E_{sn}$ and $E_{cn}$ are the CO$_2$ effluxes from the sediment and the canopy in chambers with nets; $\delta^{13}C_{en}$, $\delta^{13}C_{sn}$, and $\delta^{13}C_{cn}$ are the $\delta^{13}$C values of $E_{en}$, $E_{sn}$, and $E_{cn}$ in chambers with nets; and $f_{sn}$ and $f_{cn}$ are the fractions of $E_{sn}$ and $E_{cn}$ contributing to $E_{en}$.

**Fig. 2.** A conceptual diagram describing the components of $E_e$ and $E_s$. The net prevents leaf litter from accumulating on the sediment surface and contributing to efflux in the with-net treatment. $E_e$ – ecosystem respiration, $E_c$ – canopy respiration, $E_s$ – sediment respiration, $E_r$ – CO$_2$ efflux from root respiration, $E_l$ – CO$_2$ efflux from decomposition of litter, $E_{SOM}$ – CO$_2$ efflux from decomposition of SOM.

$$\delta^{13}C_e = f_s\,\delta^{13}C_s + f_c\,\delta^{13}C_c \quad (4)$$

$$f_s = \frac{E_s}{E_e} \quad (5)$$

$$f_s + f_c = 1 \quad (6)$$

where $\delta^{13}C_e$, $\delta^{13}C_s$, and $\delta^{13}C_c$ are the $\delta^{13}$C values of $E_e$, $E_s$, and $E_c$ in chambers without nets; $f_s$ and $f_c$ are the fractions of $E_s$ and $E_c$ contributing to $E_e$; and $E_s$ and $E_c$ are the CO$_2$ effluxes from the sediment and the canopy in chambers without nets.

The $\delta^{13}$C values of $E_e$, $E_l$, $E_{SOM}$, and $E_s$ were quantified to partition sediment $E_s$ into autotrophic (plant respiration) and heterotrophic (decomposition) sources. The mixing model below was applied to estimate the proportions of $E_r$ and $E_{SOM}$ contributing to $E_s$.

$$\delta^{13}C_{sn} = f_{rn}\,\delta^{13}C_{rn} + f_{SOMn}\,\delta^{13}C_{SOMn} \quad (7)$$

$$f_{rn} + f_{SOMn} = 1 \quad (8)$$

where $\delta^{13}C_{sn}$, $\delta^{13}C_{rn}$, and $\delta^{13}C_{SOMn}$ are the $\delta^{13}$C values of $E_s$, $E_r$, and $E_{SOM}$ in chambers with nets; $f_{rn}$ and $f_{SOMn}$ are the fractions of $E_r$ and $E_{SOM}$ contributing to $E_s$.

$$\delta^{13}C_s = f_r\,\delta^{13}C_r + f_{SOM}\,\delta^{13}C_{SOM} + f_l\,\delta^{13}C_l \quad (9)$$

$$f_l = \frac{E_s - E_{sn}}{E_s} \quad (10)$$

$$f_r + f_{SOM} + f_l = 1 \quad (11)$$

where $\delta^{13}C_s$, $\delta^{13}C_r$, $\delta^{13}C_{SOM}$, and $\delta^{13}C_l$ are the $\delta^{13}$C values of $E_s$, $E_r$, $E_{SOM}$, and $E_l$ in chambers without nets; $f_r$, $f_{SOM}$, and $f_l$ are the fractions of $E_r$, $E_{SOM}$, and $E_l$ contributing to $E_s$. The sampling strategy is described in Fig. 3.

The aforementioned mixing models rest on the assumption that the $\delta^{13}$C values of the plant canopy, SOM, roots, and litter approximate those of the corresponding components of $E_c$ and $E_s$. This assumption is justified because: (1) there is no C isotopic fractionation during heterotrophic microbial respiration (Lin and Ehleringer 1997); (2) there is little C isotopic fractionation during the early decomposition stage of fallen plant material (Balesdent et al. 1993; Dehairs et al. 2000). Based on published litter turnover times, we limited the study period to 2–3 months so that only the first litterfall contributed to $E_l$, with little new litter formation and decay in the chambers afterwards. And (3) there is a negligible difference between the $\delta^{13}$C values of sediment organic C in the surface layer and those of sediment-released CO$_2$, and little difference in $\delta^{13}$C values among different soil size fractions, as suggested by Bird et al. (1996).
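Because the mixing models are linear with closure constraints, the unknown fractions follow in closed form from the measured effluxes and $\delta^{13}$C values. The R sketch below solves Eqs. 4–11 for one set of inputs; the function names are ours, and the example numbers are hypothetical placeholders (loosely in the range of Table 1), not the study's data.

```r
# Two-source partitioning of ecosystem efflux (Eqs. 4-6): f_s follows from
# the efflux ratio, and the canopy delta13C from the mass-balance equation.
partition_Ee <- function(E_e, E_s, d13C_e, d13C_s) {
  f_s <- E_s / E_e                          # Eq. 5
  f_c <- 1 - f_s                            # Eq. 6
  d13C_c <- (d13C_e - f_s * d13C_s) / f_c   # Eq. 4 rearranged for d13C_c
  c(f_s = f_s, f_c = f_c, d13C_c = d13C_c)
}

# Two-source partitioning of sediment efflux in chambers with nets
# (Eqs. 7-8): solve for the root fraction f_rn.
partition_Es_net <- function(d13C_sn, d13C_rn, d13C_SOMn) {
  f_rn <- (d13C_sn - d13C_SOMn) / (d13C_rn - d13C_SOMn)
  c(f_rn = f_rn, f_SOMn = 1 - f_rn)
}

# Three-source partitioning without nets (Eqs. 9-11): the litter fraction
# is fixed by the efflux difference (Eq. 10), leaving one unknown, f_r.
partition_Es <- function(E_s, E_sn, d13C_s, d13C_r, d13C_SOM, d13C_l) {
  f_l <- (E_s - E_sn) / E_s                                   # Eq. 10
  f_r <- (d13C_s - f_l * d13C_l - (1 - f_l) * d13C_SOM) /
         (d13C_r - d13C_SOM)                                  # Eqs. 9 + 11
  c(f_r = f_r, f_SOM = 1 - f_l - f_r, f_l = f_l)
}

# Illustrative call (delta13C in per mil, effluxes in mmol m^-2 d^-1):
partition_Es(E_s = 175, E_sn = 170, d13C_s = 155,
             d13C_r = 816, d13C_SOM = 60, d13C_l = 14)
```

Note that the fractions reported in the Results are averages over replicate chambers, so a single call like this will not reproduce the published percentages exactly.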
**Data analysis**

One-way analysis of variance (ANOVA) was used to examine (1) the difference in $E_e$, $E_s$, and $E_{SOM}$ with or without nets hanging over the sediment; and (2) the difference in source contributions to $E_s$ in chambers with nets. Before ANOVA, the assumptions of normality and variance homogeneity were verified by the Shapiro–Wilk normality test and the Bartlett test, respectively. Tukey's HSD test was applied when a significant treatment effect was found. Linear regression analysis was conducted to examine the relationship between the $\delta^{13}$C values of $E_s$ and those of both $E_r$ and $E_{SOM}$. A paired-sample t test was used to compare litter $\delta^{13}$C values between chambers with and without nets. Student's t test was performed to compare the contributions of $E_{SOM}$ and $E_r$ to $E_s$ across all samples. Some previous studies investigated $E_e$ and $E_s$ via CO$_2$ efflux measurement or by synthesizing different portions of $E_e$ (Alongi 2009; Lovelock et al. 2015; Troxler et al. 2015). This prior information on the relative magnitudes of $E_e$ and $E_s$ was incorporated into a Bayesian framework to estimate the likely range of the canopy or sediment contribution to the $\delta^{13}$C of $E_e$. Model fitting was undertaken by the Markov chain Monte Carlo (MCMC) method, which generated simulations of plausible values of isotopic source contributions to $E_e$ consistent with the data. Before running MCMC, the $\delta^{13}$C of $E_e$ was assumed to be normally distributed (Moore and Semmens 2008) and was verified for the normality assumption. The number of iterations of the Bayesian model was set at 5000. The R programming language was used to perform the data analysis (R Core Team 2014). The R package "siar" was applied to conduct Bayesian modeling of uncertainties in isotopic source contributions to total chamber respiration (Parnell and Jackson 2013). Data were expressed as mean ± standard error (SE).

**Results**

**Carbon dioxide efflux from chambers and sediment**

There was a highly significant difference among $E_e$, $E_s$, and $E_{SOM}$ (ANOVA, $p < 0.01$, Fig. 4). Furthermore, $E_e$ (785.0 ± 185.2 (SD) mmol m$^{-2}$ d$^{-1}$ with nets, 1160.8 ± 323.9 mmol m$^{-2}$ d$^{-1}$ without nets) was significantly higher than both $E_s$ (170.1 ± 19.8 mmol m$^{-2}$ d$^{-1}$ with nets, 174.7 ± 22.8 mmol m$^{-2}$ d$^{-1}$ without nets) and $E_{SOM}$ (79.8 ± 17.8 mmol m$^{-2}$ d$^{-1}$) (Tukey's HSD test, $p < 0.05$). However, there was no significant difference between $E_s$ and $E_{SOM}$ (Tukey's HSD test, $p > 0.05$). Figure 5 shows the result of the Bayesian inference in terms of the posterior distribution of source contributions to $E_e$. The contributions of the two sources had different probability densities; the higher probability density occurs at > 50% contribution from $E_c$, but at < 50% contribution from $E_s$.

**The sources of CO$_2$ efflux from the sediment surface**

Across chambers with and without nets, $E_{\text{SOM}}$ contributed 61.8% ± 9.2% and was the main component of $E_s$, followed by $E_r$ (31.8% ± 9.7%). The difference between $E_{\text{SOM}}$ and $E_r$ was significant (Student's $t$ test, $p < 0.05$). $E_l$ contributed the least to $E_s$. For chambers without nets, there was a significant difference in the contribution of different components to $E_s$ (ANOVA, $p < 0.05$).
In particular, the contribution of $E_{\text{SOM}}$ was significantly higher than that of both $E_r$ and $E_l$ (Tukey's HSD test, $p < 0.05$), but no significant difference was found between the contributions of $E_r$ and $E_l$ (Tukey's HSD test, $p > 0.05$) (Fig. 6). Additionally, there was a highly significant relationship between the $\delta^{13}$C values of $E_s$ and those of root respiration ($R^2 = 0.59$, $p < 0.01$), as well as SOM decomposition ($R^2 = 0.62$, $p < 0.01$) (Fig. 7).

**Isotopic $^{13}$C values of litter**

Table 1 shows the $\delta^{13}$C values of $E_e$, $E_s$, and their different components. There was no significant difference in litter $\delta^{13}$C values between chambers with and without nets (paired-sample $t$ test, $p > 0.05$).

**Discussion**

**Partitioning ecosystem CO$_2$ efflux**

$E_s$ contributes a minor proportion (20.9% ± 4.1%) to $E_e$, as confirmed by the Bayesian inference. This suggests that $E_c$ is the main component of $E_e$, generally in agreement with the global synthesis of mangrove C flow ($E_c : E_s \sim 10 : 1$). Seedlings of *A. marina* were found to have a root/shoot ratio of ~ 0.5 under freshwater treatment (Burchett et al. 1984), which may support a relatively higher contribution from $E_c$, since higher shoot biomass respires more CO$_2$ (i.e., $E_c$) than lower root biomass (i.e., $E_r$). Moreover, part of the decomposed C in sediment may be mineralized as inorganic C (e.g., DIC) in porewater (Maher et al. 2013), and thus contributes to the difference between $E_c$ and $E_s$; a global synthesis of the mangrove C budget suggested that much of the C sink in mangroves is still unaccounted for, and dissolved inorganic C in porewater may be a significant contributor to the unaccounted C (Bouillon et al. 2008b). Further, the contribution of $E_s$ reported herein (mean: 20.9%) is about double that in mature forests (~ 10%) (Alongi 2009). Aboveground biomass of the newly established mangrove seedlings is very low compared to mature forests: the aboveground biomass of *A. marina* seedlings growing for less than 1 yr was found to be rather low (14.4–58.2 g) (Downton 1982), whereas the aboveground biomass of mature *A. marina* trees may reach 39.7–557.9 kg (tree diameter at breast height 10–35 cm), estimated from the allometric biomass equation proposed by Komiyama et al. (2008). The significantly lower aboveground biomass of mangrove seedlings may account for the lower contribution of $E_c$ to $E_e$, and thus the higher contribution of $E_s$ to $E_e$, in contrast to mature mangroves.

**Partitioning sediment CO$_2$ efflux**

This study suggests that $E_r$ of young *A. marina* is low compared with $E_{SOM}$. This result is in contrast with the finding that $E_r$ was generally higher than $E_{SOM}$ in mature mangrove forests (Troxler et al. 2015). $E_r$ comprises CO$_2$ respired by roots as well as that released in the process of microbial degradation of roots. The root biomass of *A. marina* seedlings growing for less than 1 yr was found to be rather low (14.8–51.2 g) (Downton 1982), whereas the root biomass of mature *A. marina* trees may reach 18.9–82.0 kg (tree diameter at breast height 10–35 cm), estimated from the allometric biomass equation proposed by Komiyama et al. (2008).

**Fig. 6.** Relative contributions of different components to $E_s$. Bars with different letters are significantly different. Relative contributions of different components were compared in chambers without nets (lower case letters) and across all samples (upper case letters), including chambers with and without nets.
**Fig. 7.** Relationship between the $\delta^{13}$C values of $E_s$ and those of *A. marina* roots (a), as well as SOM (b). The regression equation in (a): $\log_{10}(\delta^{13}\text{C of sediment CO}_2) = 0.43 \log_{10}(\delta^{13}\text{C of root respiration}) + 2.12$ ($R^2 = 0.59$, $p < 0.01$). The regression equation in (b): $\log_{10}(\delta^{13}\text{C of sediment CO}_2) = 0.62 \log_{10}(\delta^{13}\text{C of SOM decomposition}) + 2.36$ ($R^2 = 0.62$, $p < 0.01$).

Table 1. $\delta^{13}$C values of chamber CO$_2$, sediment CO$_2$, SOM decomposition, root respiration, and litter decomposition.

| Sources | Components | Number of samples | $\delta^{13}$C values (‰) |
|--------------------------|------------|-------------------|---------------------------|
| Chamber CO$_2$ | $E_e$ | 6 | 245.3 ± 77.7 |
| Sediment CO$_2$ | $E_s$ | 12 | 154.5 ± 30.0 |
| SOM decomposition | $E_{SOM}$ | 12 | 59.7 ± 13.4 |
| Root respiration | $E_r$ | 12 | 815.7 ± 211.8 |
| Litter decomposition | $E_l$ | 12 | 13.9 ± 9.9 |

The massive root systems of mature trees certainly respire more CO$_2$ than the less-developed fine roots of establishing seedlings. In addition, mature *A. marina* has pneumatophores, which also contribute to CO$_2$ flux across the sediment–air interface. Furthermore, the high substrate supply of mature trees provides more energy for microbial communities to decompose roots, in comparison with the low substrate supply of seedlings. This study also highlights the lower contribution of litter (12.8%), relative to roots, to $E_s$ in systems dominated by young trees. During the experiment, litter production was very low: the incubation period was kept short to minimize isotopic fractionation, and consequently did not allow significant litterfall to accumulate. The small isotopic fractionation of litter is confirmed by the fact that there is little C isotopic difference between *A. marina* litter that fell on the sediment surface in chambers without nets and litter segregated from the sediment in chambers with nets. The low litter production therefore leads to low $E_l$, which thereby contributes a smaller portion than $E_r$ to $E_s$. This is in agreement with published data showing that $E_r$ contributed approximately half of $E_s$ (Luo and Zhou 2010). In addition, our results imply that the $\delta^{13}$C of $E_s$ (154.5 ± 30.0‰) is closely related to the $\delta^{13}$C of $E_r$ (815.7 ± 211.8‰) and $E_{SOM}$ (59.7 ± 13.4‰). In our laboratory microcosms, mangrove seedlings took up enriched $^{13}$C from CO$_2$ generated by the reaction between HCl and NaH$^{13}$CO$_3$. Subsequently, the assimilated $^{13}$C was allocated to roots, part of which was exuded and incorporated into SOM. Thus part of the $^{13}$C in SOM is derived from the $^{13}$C of roots, explaining the close association between the $\delta^{13}$C of $E_s$ and those of both $E_r$ and $E_{SOM}$. The incorporation of $^{13}$C from roots into the sediment is also mirrored by the highly enriched sediment $\delta^{13}$C in chambers with seedlings, whereas sediment $\delta^{13}$C values are significantly lower in the chamber containing only sediment. This is consistent with earlier findings that mangrove roots can stimulate sediment sulfate reduction via root exudates (Alongi et al. 1998; Kristensen and Alongi 2006).
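The Fig. 7 relationships are ordinary least-squares fits on $\log_{10}$-transformed $\delta^{13}$C values, so they can be reproduced with a one-line linear model once the paired measurements are available. A minimal R sketch follows; the vectors are hypothetical placeholders, not the study's data.

```r
# Hypothetical paired delta13C measurements (per mil), for illustration only.
d13c_sed  <- c(120, 160, 140, 190, 150, 170)    # sediment CO2 (E_s)
d13c_root <- c(600, 900, 700, 1100, 800, 950)   # root respiration (E_r)
d13c_som  <- c(45, 70, 55, 85, 60, 75)          # SOM decomposition (E_SOM)

# OLS regressions on log10-transformed values, as in Fig. 7; log10 is valid
# here because the labeled samples are strongly 13C-enriched (positive values).
fit_root <- lm(log10(d13c_sed) ~ log10(d13c_root))
fit_som  <- lm(log10(d13c_sed) ~ log10(d13c_som))

coef(fit_root)                # slope and intercept, cf. 0.43 and 2.12
summary(fit_root)$r.squared   # cf. the reported R^2 = 0.59
summary(fit_som)$r.squared    # cf. the reported R^2 = 0.62
```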
This study has implications for understanding the sources of ecosystem CO$_2$ efflux and of CO$_2$ efflux from the sediment–air interface in global mangroves, especially those subjected to restoration after dieback or deforestation. When mature mangroves are replaced by monospecific mangrove plantations, the contributions of $E_s$ to $E_e$ and $E_{SOM}$ to $E_s$ increase while the contributions of $E_c$ to $E_e$ and $E_r$ to $E_s$ decrease in the short term. This study highlights the necessity of constructing a temporal trajectory of ecosystem CO$_2$ efflux and CO$_2$ efflux from the sediment–air interface in mangrove ecosystems. Future studies may separate the contribution of microphytobenthos from $E_{SOM}$ to further partition the role of microphytobenthic respiration from sediments (Leopold et al. 2013; Bulmer et al. 2015; Grellier et al. 2017; Ouyang et al. 2017). The developed technique offers a safe and simple alternative to the $^{14}$C isotope and dual stable isotope techniques proposed by Luo and Zhou (2010). It may be applied to the investigation of sources of CO$_2$ efflux from other vegetation and from mature forests.

**References**

Alongi, D. M. 2009. The energetics of mangrove forests. Springer. Alongi, D. M., A. Sasekumar, F. Tirendi, and P. Dixon. 1998. The influence of stand age on benthic decomposition and recycling of organic matter in managed mangrove forests of Malaysia. J. Exp. Mar. Biol. Ecol. 225: 197–218. doi:10.1016/s0022-0981(97)00223-2 Balesdent, J., C. Girardin, and A. Mariotti. 1993. Site-related $\delta^{13}$C of tree leaves and soil organic matter in a temperate forest. Ecology 74: 1713–1721. doi:10.2307/1939930 Barr, J. G. 2013. Modeling light use efficiency in a subtropical mangrove forest equipped with CO$_2$ eddy covariance. Biogeosciences 10: 2145–2158. doi:10.5194/bg-10-2145-2013 Barr, J. G., V. Engel, J. D. Fuentes, J. C. Ziemer, T. L. O'Halloran, T. J. Smith, and G. H. Anderson. 2010. Controls on mangrove forest-atmosphere carbon dioxide exchanges in western Everglades National Park. J. Geophys. Res. Biogeosci. 115: G02020. doi:10.1029/2009JG001186 Bird, M. I., A. R. Chivas, and J. Head. 1996. A latitudinal gradient in carbon turnover times in forest soils. Nature 381: 143–146. doi:10.1038/381143a0 Boon, P. I., F. L. Bird, and S. E. Bunn. 1997. Diet of the intertidal callianassid shrimps *Biffarius arenosus* and *Trypea australiensis* (Decapoda: Thalassinidea) in Western Port (southern Australia), determined with multiple stable-isotope analyses. Mar. Freshw. Res. 48: 503–511. doi:10.1071/MF97013 Bouillon, S., R. M. Connolly, and S. Y. Lee. 2008a. Organic matter exchange and cycling in mangrove ecosystems: Recent insights from stable isotope studies. J. Sea Res. 59: 44–58. doi:10.1016/j.seares.2007.05.001 Bouillon, S., and others. 2008b. Mangrove production and carbon sinks: A revision of global budget estimates. Global Biogeochem. Cycles 22: GB2013. doi:10.1029/2007GB003052 Bromand, S., J. Whalen, H. H. Janzen, J. Schjoerring, and B. H. Ellert. 2001. A pulse-labelling method to generate $^{13}$C-enriched plant materials. Plant Soil 235: 253–257. doi:10.1023/A:1011922103323 Bui, T. H. H., and S. Y. Lee. 2014. Does 'You Are What You Eat' apply to mangrove grapsid crabs? PLoS ONE 9: e89074. doi:10.1371/journal.pone.0089074 Bulmer, R., C. Lundquist, and L. Schwendenmann. 2015. Sediment properties and CO$_2$ efflux from intact and cleared temperate mangrove forests. Biogeosciences 12: 6169–6180.
doi:10.5194/bg-12-6169-2015 Burchett, M., C. Field, and A. Pulkownik. 1984. Salinity, growth and root respiration in the grey mangrove, *Avicennia marina*. Physiol. Plant. 60: 113–118. doi:10.1111/j.1399-3054.1984.tb04549.x Chen, G. C., N. F. Y. Tam, and Y. Ye. 2010. Summer fluxes of atmospheric greenhouse gases N$_2$O, CH$_4$ and CO$_2$ from mangrove soil in South China. Sci. Total Environ. 408: 2761–2767. doi:10.1016/j.scitotenv.2010.03.007 Chen, G. C., N. F. Y. Tam, and Y. Ye. 2012. Spatial and seasonal variations of atmospheric N$_2$O and CO$_2$ fluxes from a subtropical mangrove swamp and their relationships with soil characteristics. Soil Biol. Biochem. 48: 175–181. doi:10.1016/j.soilbio.2012.01.029 Dehairs, F., R. Rao, P. C. Mohan, A. Raman, S. Marguillier, and L. Hellings. 2000. Tracing mangrove carbon in suspended matter and aquatic fauna of the Gautami–Godavari Delta, Bay of Bengal (India). Hydrobiologia 431: 225–241. doi:10.1023/A:1004072310525 Donato, D. C., J. B. Kauffman, D. Murdiyarso, S. Kurnianto, M. Stidham, and M. Kanninen. 2011. Mangroves among the most carbon-rich forests in the tropics. Nat. Geosci. 4: 293–297. doi:10.1038/ngeo1123 Downton, W. 1982. Growth and osmotic relations of the mangrove *Avicennia marina*, as influenced by salinity. Funct. Plant Biol. 9: 519–528. doi:10.1071/PP9820519 Duarte, C. M., I. J. Losada, I. E. Hendriks, I. Mazarrasa, and N. Marbà. 2013. The role of coastal plant communities for climate change mitigation and adaptation. Nat. Clim. Chang. 3: 961–968. doi:10.1038/nclimate1970 Ehleringer, J. R., N. Buchmann, and L. B. Flanagan. 2000. Carbon isotope ratios in belowground carbon cycle processes. Ecol. Appl. 10: 412–422. doi:10.2307/2641103 Galván, K., J. W. Fleeger, and B. Fry. 2008. Stable isotope addition reveals dietary importance of phytoplankton and microphytobenthos to saltmarsh infauna. Mar. Ecol. Prog. Ser. 359: 37–49. doi:10.3354/meps07321 Grellier, S., J.-L. Janeau, N. D. Hoai, C. N. T. Kim, Q. L. T. Phuong, T. P. T. Thu, N.-T. Tran-Thi, and C. Marchand. 2017. Changes in soil characteristics and C dynamics after mangrove clearing (Vietnam). Sci. Total Environ. 593: 654–663. doi:10.1016/j.scitotenv.2017.03.204 Komiyama, A., J. E. Ong, and S. Poungparn. 2008. Allometry, biomass, and productivity of mangrove forests: A review. Aquat. Bot. 89: 128–137. doi:10.1016/j.aquabot.2007.12.006 Kristensen, E., and D. M. Alongi. 2006. Control by fiddler crabs (*Uca vocans*) and plant roots (*Avicennia marina*) on carbon, iron, and sulfur biogeochemistry in mangrove sediment. Limnol. Oceanogr. 51: 1557–1571. doi:10.4319/lo.2006.51.4.1557 Kristensen, E., S. Bouillon, T. Dittmar, and C. Marchand. 2008. Organic carbon dynamics in mangrove ecosystems: A review. Aquat. Bot. 89: 201–219. doi:10.1016/j.aquabot.2007.12.005 Lallier-Verges, E., B. P. Perrussel, J.-R. Disnar, and F. Baltzer. 1998. Relationships between environmental conditions and the diagenetic evolution of organic matter derived from higher plants in a modern mangrove swamp system (Guadeloupe, French West Indies). Org. Geochem. 29: 1663–1686. doi:10.1016/S0146-6380(98)00179-X Lee, K. M., S. Y. Lee, and R. M. Connolly. 2011. Combining stable isotope enrichment, compartmental modelling and ecological network analysis for quantitative measurement of food web dynamics. Methods Ecol. Evol. 2: 56–65. doi:10.1111/j.2041-210X.2010.00045.x Lee, K.-M., S. Y. Lee, and R. M. Connolly. 2012.
Combining process indices from network analysis with structural population measures to indicate response of estuarine trophodynamics to pulse organic enrichment. Ecol. Indic. 18: 652–658. doi:10.1016/j.ecolind.2012.01.015 Lee, S. Y. 1995. Mangrove outwelling: A review. Hydrobiologia 295: 203–212. doi:10.1007/BF00029127 Lee, S. Y. 1999. The effect of mangrove leaf litter enrichment on macrobenthic colonization of defaunated sandy substrates. Estuar. Coast. Shelf Sci. 49: 703–712. doi:10.1006/ecss.1999.0523 Lee, S. Y. 2000. Carbon dynamics of Deep Bay, eastern Pearl River estuary, China. II: Trophic relationship based on carbon and nitrogen stable isotopes. Mar. Ecol. Prog. Ser. 205: 1–10. doi:10.3354/meps205001 Leopold, A., C. Marchand, J. Deborde, C. Chaduteau, and M. Allenbach. 2013. Influence of mangrove zonation on CO$_2$ fluxes at the sediment–air interface (New Caledonia). Geoderma 202: 62–70. doi:10.1016/j.geoderma.2013.03.008 Leopold, A., C. Marchand, J. Deborde, and M. Allenbach. 2015. Temporal variability of CO$_2$ fluxes at the sediment-air interface in mangroves (New Caledonia). Sci. Total Environ. 502: 617–626. doi:10.1016/j.scitotenv.2014.09.066 Leopold, A., C. Marchand, A. Renchon, J. Deborde, T. Quiniou, and M. Allenbach. 2016. Net ecosystem CO$_2$ exchange in the “Coeur de Voh” mangrove, New Caledonia: Effects of water stress on mangrove productivity in a semi-arid climate. Agric. For. Meteorol. 223: 217–232. doi:10.1016/j.agrformet.2016.04.006 Lin, G., and J. R. Ehleringer. 1997. Carbon isotopic fractionation does not occur during dark respiration in C3 and C4 plants. Plant Physiol. 114: 391–394. doi:10.1104/pp.114.1.391 Livesley, S. J., and S. M. Andrusiak. 2012. Temperate mangrove and salt marsh sediments are a small methane and nitrous oxide source but important carbon store. Estuar. Coast. Shelf Sci. 97: 19–27. doi:10.1016/j.ecss.2011.11.002 Lovelock, C. E. 2008. Soil respiration and belowground carbon allocation in mangrove forests. Ecosystems 11: 342–354. doi:10.1007/s10021-008-9125-4 Lovelock, C. E., L. T. Simpson, L. J. Duckett, and I. C. Feller. 2015. Carbon budgets for Caribbean mangrove forests of varying structure and with phosphorus enrichment. Forests 6: 3528–3546. doi:10.3390/f6103528 Luo, Y., and X. Zhou. 2010. Soil respiration and the environment. Academic press. Maher, D. T., I. R. Santos, L. Golsby-Smith, J. Gleeson, and B. D. Eyre. 2013. Groundwater-derived dissolved inorganic and organic carbon exports from a mangrove tidal creek: The missing mangrove carbon sink? Limnol. Oceanogr. 58: 475–488. doi:10.4319/lo.2013.58.2.0475 Mcleod, E., and others. 2011. A blueprint for blue carbon: Toward an improved understanding of the role of vegetated coastal habitats in sequestering CO$_2$. Front. Ecol. Environ. 9: 552–560. doi:10.1890/110004 Moore, J. W., and B. X. Semmens. 2008. Incorporating uncertainty and prior information into stable isotope mixing models. Ecol. Lett. 11: 470–480. doi:10.1111/j.1461-0248.2008.01163.x O’Leary, M. H. 1981. Carbon isotope fractionation in plants. Phytochemistry 20: 553–567. doi:10.1016/0031-9422(81)85134-5 Oakes, J. M., B. D. Eyre, and J. J. Middelburg. 2012. Transformation and fate of microphytobenthos carbon in subtropical shallow subtidal sands: A $^{13}$C-labeling study. Limnol. Oceanogr. 57: 1846–1856. doi:10.4319/lo.2012.57.6.1846 Ouyang, X., and S. Y. Lee. 2014. Updated estimates of carbon accumulation rates in coastal marsh sediments. Biogeosciences 11: 5057–5071. doi:10.5194/bg-11-5057-2014 Ouyang, X., F. Guo, and H. 
Bu. 2015. Lipid biomarkers and pertinent indices from aquatic environment record paleoclimate and paleoenvironment changes. Quat. Sci. Rev. 123: 180–192. doi:10.1016/j.quascirev.2015.06.029 Ouyang, X., S. Y. Lee, and R. M. Connolly. 2017. Structural equation modelling reveals factors regulating surface sediment organic carbon content and CO$_2$ efflux in a subtropical mangrove. Sci. Total Environ. 578: 513–522. doi:10.1016/j.scitotenv.2016.10.218 Parnell, A., and A. Jackson. 2013. siar: Stable isotope analysis in R. R package version 4.2. Available from http://CRAN.R-project.org/package=siar R Core Team. 2014. R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. Available from http://www.R-project.org/ Rao, R. G., A. F. Woitchik, L. Goeyens, A. Van Riet, J. Kazungu, and F. Dehairs. 1994. Carbon, nitrogen contents and stable isotope abundance in mangrove leaves from an east African coastal lagoon (Kenya). Aquat. Bot. 47: 175–183. doi:10.1016/0304-3770(94)90012-4 Robertson, A. I. 1988. Decomposition of mangrove leaf litter in tropical Australia. J. Exp. Mar. Biol. Ecol. 116: 235–247. doi:10.1016/0022-0981(88)90029-9 Sanders, C. J., D. T. Maher, D. R. Tait, D. Williams, C. Holloway, J. Z. Sippo, and I. R. Santos. 2016. Are global mangrove carbon stocks driven by rainfall? J. Geophys. Res. Biogeosci. 121: 2600–2609. doi:10.1002/2016JG003510 Sessegolo, G., and P. Lana. 1991. Decomposition of *Rhizophora mangle*, *Avicennia schaueriana* and *Laguncularia racemosa* leaves in a mangrove of Paranagua Bay (Southeastern Brazil). Bot. Mar. 34: 285–290. doi:10.1515/botm.1991.34.4.285 Troxler, T. G., and others. 2015. Component-specific dynamics of riverine mangrove CO$_2$ efflux in the Florida coastal Everglades. Agric. For. Meteorol. 213: 273–282. doi:10.1016/j.agrformet.2014.12.012 Vane, C. H., A. W. Kim, V. Moss-Hayes, C. E. Snape, M. C. Diaz, N. S. Khan, S. E. Engelhart, and B. P. Horton. 2013. Degradation of mangrove tissues by arboreal termites (*Nasutitermes acajutlae*) and their role in the mangrove C cycle (Puerto Rico): Chemical characterization and organic matter provenance using bulk $\delta^{13}$C, C/N, alkaline CuO oxidation-GC/MS, and solid-state $^{13}$C NMR. Geochem. Geophys. Geosyst. 14: 3176–3191. doi:10.1002/ggge.20194 Zhu, H., Y. Wang, and N. F. Y. Tam. 2014. Microcosm study on fate of polybrominated diphenyl ethers (PBDEs) in contaminated mangrove sediment. J. Hazard. Mater. 265: 61–68. doi:10.1016/j.jhazmat.2013.11.046

Acknowledgments

We thank Dr. Yisheng Peng (Sun Yat-sen University, China) for advice on growing mangrove seedlings, Daniel Tonzing (Griffith University) for assistance in building the chambers, and Niels Munksgaard (James Cook University) for advice on collecting and analysing gaseous isotopes. Two anonymous reviewers are acknowledged for their constructive comments on the initial version.

Conflict of Interest

None declared.
Isopods Failed to Acclimate Their Thermal Sensitivity of Locomotor Performance during Predictable or Stochastic Cooling

Matthew S. Schuler\textsuperscript{1*¤a}, Brandon S. Cooper\textsuperscript{1¤b}, Jonathan J. Storm\textsuperscript{1¤c}, Michael W. Sears\textsuperscript{2}, Michael J. Angilletta Jr.\textsuperscript{1¤d}

\textsuperscript{1} Department of Biology, Indiana State University, Terre Haute, Indiana, United States of America, \textsuperscript{2} Department of Biology, Bryn Mawr College, Bryn Mawr, Pennsylvania, United States of America

**Abstract**

Most organisms experience environments that vary continuously over time, yet researchers generally study phenotypic responses to abrupt and sustained changes in environmental conditions. Gradual environmental changes, whether predictable or stochastic, might affect organisms differently than do abrupt changes. To explore this possibility, we exposed terrestrial isopods (*Porcellio scaber*) collected from a highly seasonal environment to four thermal treatments: (1) a constant 20°C; (2) a constant 10°C; (3) a steady decline from 20° to 10°C; and (4) a stochastic decline from 20° to 10°C that mimicked natural conditions during autumn. After 45 days, we measured thermal sensitivities of running speed and thermal tolerances (critical thermal maximum and chill-coma recovery time). Contrary to our expectation, thermal treatments did not affect the thermal sensitivity of locomotion; isopods from all treatments ran fastest at 33° to 34°C and achieved more than 80% of their maximal speed over a range of 10° to 11°C. Isopods exposed to a stochastic decline in temperature tolerated cold the best, and isopods exposed to a constant temperature of 20°C tolerated cold the worst. No significant variation in heat tolerance was observed among groups. Therefore, thermal sensitivity and heat tolerance failed to acclimate to any type of thermal change, whereas cold tolerance acclimated more during stochastic change than it did during abrupt change.

Citation: Schuler MS, Cooper BS, Storm JJ, Sears MW, Angilletta MJ Jr (2011) Isopods Failed to Acclimate Their Thermal Sensitivity of Locomotor Performance during Predictable or Stochastic Cooling. PLoS ONE 6(6): e20905. doi:10.1371/journal.pone.0020905

Editor: Howard Browman, Institute of Marine Research, Norway

Received January 28, 2011; Accepted May 12, 2011; Published June 17, 2011

Copyright: © 2011 Schuler et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: This research was supported by the National Science Foundation (IOS 0616344) and by the College of Graduate Studies at Indiana State University. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

* E-mail: firstname.lastname@example.org

¤a Current address: Department of Biology, Washington University, St.
Louis, Missouri, United States of America

¤b Current address: Department of Biology, Indiana University, Bloomington, Indiana, United States of America

¤c Current address: Division of Natural Science and Engineering, University of South Carolina Upstate, Spartanburg, South Carolina, United States of America

¤d Current address: School of Life Sciences, Arizona State University, Tempe, Arizona, United States of America

**Introduction**

Organisms commonly modify their molecular and cellular structures to maintain performance as their environments change [1,2]. Such acclimatory responses have been demonstrated to occur over temporal scales ranging from hours to months [3,4]. For example, fruit flies can alter their thermal tolerance within the course of a single day [5], whereas trees require much longer to alter their photosynthetic rates [6]. When environmental conditions fluctuate slowly, an individual can continuously adjust its phenotype to match prevailing conditions (see [7]). In this way, organisms can tolerate variation in environmental conditions among seasons. Yet, some environments change rapidly and unpredictably, imposing costs on organisms that undergo acclimation [8]. When conditions fluctuate rapidly, the benefit of acclimation during an initial change could be offset by a loss of performance following a reversal [9]. Furthermore, stochastic variation weakens an individual's ability to anticipate future conditions and adjust its phenotype accordingly. These factors could explain why many organisms fail to acclimate to changes in their environment (reviewed by [10]). Optimality models help researchers to explore how environmental fluctuations affect the evolution of acclimation. Gabriel [11,12,13] modeled reversible acclimation in an environment that switches between two states (e.g., hot and cold), whose conditions were described by a mean and variance. We can use Gabriel's model to generate hypotheses about thermal acclimation in a seasonal environment. The variance of environmental conditions in the model corresponds to uncertainty about the environmental temperatures during a seasonal shift. Based on this model, the selective pressure for thermal acclimation depends on the difference between seasons and the time lag for acclimation. Relatively large changes in temperature between seasons would select for genotypes with the potential to acclimate. Importantly, Gabriel assumed that the organism receives a reliable cue of environmental change, even though the precise magnitude of change remains unknown. In temperate environments, photoperiodic changes provide reliable cues to seasonal changes in temperature [14,15]. Therefore, organisms from temperate regions should possess a marked capacity for thermal acclimation. We studied the acclimation of thermal physiology in terrestrial isopods (*Porcellio scaber*) from the temperate environment of Terre Haute, Indiana, USA. In this location, isopods experience predictable variation among seasons and stochastic variation among days. In our experiment, we exposed isopods to abrupt, predictable, or stochastic changes in temperature and a predictable change in photoperiod. After this exposure, we compared their thermal sensitivities of running speed and tolerances of extreme temperatures. We expected that isopods would acclimate most readily when thermal cues were predictable.
Because all isopods in our experiment came from the same selective environment, we expected variation in thermal physiology among treatment groups to stem primarily from the quality of thermal cues. Isopods exposed to constant and predictably declining temperatures received more reliable cues than did isopods exposed to stochastically declining temperature. Thus, we predicted that thermal optima would vary among groups as follows: constant 20°C > stochastic decline > predictable decline > constant 10°C.

**Methods**

**Study organism**

The terrestrial isopod, *Porcellio scaber*, is widespread throughout Europe and North America, generally occurring within organic debris, leaf litter, and wood mulch. In urbanized areas, isopods are often found in cement cracks or seen moving across cement surfaces. In September of 2007, we collected 280 individuals from a suburban lot in Terre Haute, Indiana, USA. Each animal was weighed and placed in a Petri dish (90×20 mm) containing a thin layer of soil. Isopods were given pieces of carrot and potato twice a week. To prevent isopods from drowning, water was provided in the form of a gel (Cricket Quencher, Fluker Farms, Port Allen, LA). Petri dishes were misted with water 3–4 times a week to maintain high humidity.

**Experimental design**

We compared the thermal sensitivities and thermal tolerances among groups of isopods exposed to different thermal treatments for 45 days. Individuals were randomly assigned to a constant temperature of 20°C, a constant temperature of 10°C, a predictable decline in temperature from 20°C to 10°C, or a stochastic decline in temperature (Figure 1). Our constant thermal treatments approximated the means of the maximal and minimal daily air temperatures during the same period (20°C and 10°C, respectively). The predictable decline in temperature consisted of a decrement of 0.2°C d⁻¹ over the 45 days. The stochastic decline in temperature mimicked daily variation in air temperature recorded during October and November at a weather station in Terre Haute (Station 128723 of the National Climate Data Center, USA). These treatments enabled us to infer how isopods respond to different mean temperatures as well as to ecologically relevant declines in temperature. The photoperiod for each treatment shifted gradually from 11.8L:12.2D to 10.4L:13.6D over the course of the experiment. The changes in the light cycle mimicked the natural changes in sunrise and sunset for Terre Haute. Cycles of temperature and light were controlled by a programmable incubator (Model 818, Precision Scientific). Although spatial gradients of temperature within incubators were less than 1°C, Petri dishes were systematically rotated among shelves to eliminate any effect of thermal gradients on acclimation. We recorded the mass of each isopod before and after the thermal treatment. After 45 days of exposure to the thermal treatments, we measured thermal sensitivities of running speed and tolerances of extreme temperatures. These measurements were completed within a period of 5 days. Between measurements, isopods remained in their respective thermal treatments; isopods in the declining thermal treatments continued to experience the same conditions as they did on day 45.

**Thermal sensitivity of locomotor performance**

We measured the thermal sensitivity of running speed for 25 isopods from each thermal treatment. Speeds were measured on a narrow track (2×30 cm), with a rough surface and smooth walls (1 cm high).
This track was kept in an environmental chamber that maintained the desired temperature. Each isopod was raced at six temperatures (8, 13, 20, 28, 32, and 36°C). The order of temperatures was determined randomly to avoid confounding temporal and thermal effects. Isopods were encouraged to run on the track by stroking their pleotelson with a camel-hair brush. Each individual was raced twice at each temperature; the greater speed was analyzed as the maximal performance. Although injuries rarely occurred, any isopod that sustained an injury during one of the trials was removed from the experiment.

**Critical thermal maximum**

We estimated heat tolerance as the maximal temperature that enabled locomotion, usually referred to as the critical thermal maximum or knockdown temperature [16]. A subset of isopods from each thermal treatment, which were not subjected to previous measures of locomotor performance, were placed individually in small vials (10 mL). These vials were attached to a white sheet of plastic and were submerged in a water bath (Isotemp 228, Fisher Scientific) set at 38.0°C. We increased the temperature of the water by approximately 0.2°C per minute. The temperature was recorded when an isopod ceased to move its legs. At this time, we removed the vials from the bath for a few seconds to confirm the isopod could not respond to stimuli. Critical thermal maxima were measured for eight isopods at a time. Each trial included two isopods from each thermal treatment to avoid confounding effects of time and treatment.

**Chill-coma recovery**

We estimated cold tolerance as the time required to recover from exposure to 0°C, usually referred to as chill-coma recovery [17]. A subset of isopods from each treatment, which were not subjected to measures of locomotor performance or heat tolerance, were placed in Petri dishes (50×10 mm). These dishes were entombed in ice, causing the air temperature within each dish to fall to 0°C within 5 min. After 20 min, the dishes were removed from the ice and the isopods were transferred to sheets of paper at room temperature (21°C). Using a small brush, we positioned each isopod on its back in the center of a printed circle (diameter = 20 mm). We recorded the time between the removal of dishes from the ice and the recovery of each individual using event-recording software [18]. Recovery was scored when an isopod assumed an upright position and broke the plane of the circle; this simple, objective measure of recovery reflected the onset of motor coordination [19]. As each isopod left its circle, we covered it with a small Petri dish to prevent the animal from interfering with others on the same sheet. Because isopods were assayed in successive trials, each trial included individuals from each of the four thermal treatments. Petri dishes containing isopods from different thermal treatments were chilled together, and the positions of these dishes were rotated between trials. To maximize our ability to detect and record recovery, no more than ten isopods were assayed at a time.

**Statistical analyses**

We used an information-theoretic approach to evaluate several statistical models of the thermal sensitivities of running speed, typically referred to as performance curves [20]. Specifically, we used Akaike's information criterion (AIC) to compare the relative fits of five models: quadratic, Gaussian, modified Gaussian, exponentially modified Gaussian, and beta (Table 1). Models were fit to the data using the BFGS method [21] in the R Statistical Package [22].
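As a concrete illustration of this fitting procedure, the sketch below fits one candidate model (a beta-type curve, the best-supported model in Table 1) to hypothetical speed data by least squares with R's `optim` and the BFGS method, then computes an AIC under the usual Gaussian-error assumption. The functional form, data, and starting values are our assumptions for illustration; the original analysis compared all five models in the same way.

```r
# Beta-type thermal performance curve: zero outside (Tmin, Tmax),
# skewed peak inside. Parameters: a (height), b and c (shape),
# Tmin and Tmax (thermal limits).
tpc_beta <- function(tC, a, b, c, Tmin, Tmax) {
  s <- (tC - Tmin) / (Tmax - Tmin)
  ifelse(s > 0 & s < 1, a * s^b * (1 - s)^c, 0)
}

# Hypothetical running speeds (cm/s) at the six test temperatures.
temp  <- c(8, 13, 20, 28, 32, 36)
speed <- c(0.8, 1.6, 3.0, 5.5, 7.2, 5.9)

# Least-squares objective; Tmax is fixed at the measured critical thermal
# maximum (~40.5 degrees C), as described in the text that follows.
sse <- function(p) sum((speed - tpc_beta(temp, p[1], p[2], p[3], p[4], 40.5))^2)
fit <- optim(c(a = 8, b = 2, c = 0.5, Tmin = 2), sse, method = "BFGS")

# AIC under Gaussian errors: n*log(SSE/n) + 2K, where K counts the fitted
# parameters plus the error variance.
n <- length(speed)
K <- 5
aic <- n * log(fit$value / n) + 2 * K

# Thermal optimum: the temperature that maximizes the fitted curve.
grid <- seq(5, 40, by = 0.01)
Topt <- grid[which.max(tpc_beta(grid, fit$par[1], fit$par[2],
                                fit$par[3], fit$par[4], 40.5))]
```

The bootstrap described next (resampling the data, refitting, and recomputing the thermal optimum and 80% performance breadth) then yields confidence intervals for these derived parameters.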
When fitting the models, critical thermal maxima were used to estimate the upper thermal limits to performance. The model with the lowest value of AIC was used to compare performance curves among groups [23]. To compare thermal optima and performance breadths among groups, we used bootstrapping to generate confidence intervals for these parameters. For each group, data were sampled with replacement from the original set to create a new set with the same number of observations. Nonlinear models were fit to the resulting sets of data, as described above. For the model with the lowest value of AIC, we calculated the thermal optimum and the 80% performance breadth (sensu [24]). Bootstrapping was performed a total of 10,000 times, which enabled us to compute confidence intervals for thermal optima and performance breadths (Table 2). These parameters were regarded as significantly different when no overlap existed between the 84% confidence intervals of the means for two groups, resulting in a Type I error rate of 5% [25]. As with thermal optima, we expected that the time to recover from chill-coma would vary among groups as follows: constant $20^\circ\text{C} >$ stochastic decline $>$ predictable decline $>$ constant $10^\circ\text{C}$. To compare the mean chill-coma recoveries among treatment groups, we used an accelerated failure-time model fit to a Weibull distribution [26]. This model used a chi-square analysis to compare the expected recovery times for each treatment to the observed recovery times. Isopods that did not recover within one hour were censored in the analysis. The model was fit using the survival library of the R Statistical Package [22]. Median values are reported for the chill-coma recovery times, because the data were right-skewed (i.e., most individuals recovered rapidly).

**Results**

Thermal sensitivities of running speed did not vary significantly among the four treatment groups (Figure 2). In all cases, a beta function provided the best fit to the data (Table 1). This superior fit likely resulted from the ability of the beta function to accommodate the skewed shapes of performance curves. Bootstrapping yielded very similar estimates of thermal optima and performance breadths for the groups (Table 2). Regardless of their thermal treatment, isopods ran fastest at $33^\circ$ to $34^\circ\text{C}$. Likewise, all four curves were bounded by similar thermal maxima, ranging from $40.4$ to $40.6^\circ\text{C}$ ($F_{3,60} = 0.39$, $P=0.76$; Table 2). Therefore, we failed to find evidence that the thermal sensitivity of running speed had acclimated to either constant or changing temperatures. Some evidence of thermal acclimation was revealed by our comparison of cold tolerances. An accelerated failure-time model indicated that the time required for chill-coma recovery varied significantly among treatment groups ($n=109$, $\chi^2=23.67$, $P<0.001$). However, the rank order of recovery times differed from our hypothesis: constant $20^\circ\text{C} >$ constant $10^\circ\text{C} >$ predictable decline $>$ stochastic decline (Table 2). Thus, isopods exposed to a stochastic decline in temperature tolerated cold the best and those exposed to a constant temperature of $20^\circ\text{C}$ tolerated cold the worst.

**Discussion**

We hypothesized that the thermal sensitivity of locomotor performance would change when isopods from a seasonal environment were exposed to naturalistic changes in temperature and photoperiod.
Yet, isopods exposed to predictable and stochastic declines in temperature expressed thermal optima and performance breadths that were similar to those of isopods exposed to a constant temperature of either 10° or 20°C. Moreover, thermal optima were much greater than the mean environmental temperature of any treatment.

Table 1. A comparison of plausible models of the relationship between body temperature and running speed in isopods from four thermal treatments.

| Treatment | Model | K | AIC | $\Delta_i$ | Relative Likelihood | $w_i$ |
|---------------|----------------|---|-------|------------|---------------------|------|
| 10°C | Beta | 6 | 152 | 0 | 1.000 | 0.952|
| | Gaussian | 4 | 274 | 122 | 3.221·10$^{-27}$ | 3.069·10$^{-27}$|
| | Quadratic | 4 | 286 | 134 | 7.985·10$^{-30}$ | 7.606·10$^{-30}$|
| | Mod. Gaussian | 5 | 237 | 85 | 3.487·10$^{-19}$ | 3.322·10$^{-19}$|
| | Exp. Mod. Gaussian | 6 | 158 | 6 | 0.049 | 0.047|
| 20°C | Beta | 6 | 164 | 0 | 1.000 | 0.993|
| | Gaussian | 4 | 255 | 91 | 1.736·10$^{-20}$ | 1.725·10$^{-20}$|
| | Quadratic | 4 | 249 | 85 | 3.487·10$^{-19}$ | 3.464·10$^{-19}$|
| | Mod. Gaussian | 5 | 210 | 46 | 1.026·10$^{-10}$ | 1.019·10$^{-10}$|
| | Exp. Mod. Gaussian | 6 | 174 | 10 | 0.006 | 0.006|
| Stochastic | Beta | 6 | 273 | 0 | 1.000 | 0.970|
| | Gaussian | 4 | 346 | 73 | 1.407·10$^{-16}$ | 1.366·10$^{-16}$|
| | Quadratic | 4 | 351 | 78 | 1.155·10$^{-17}$ | 1.121·10$^{-17}$|
| | Mod. Gaussian | 5 | 317 | 44 | 2.790·10$^{-10}$ | 2.708·10$^{-10}$|
| | Exp. Mod. Gaussian | 6 | 280 | 7 | 0.030 | 0.029|
| Predictable | Beta | 6 | 183 | 0 | 1.000 | 0.993|
| | Gaussian | 4 | 264 | 81 | 2.577·10$^{-18}$ | 2.560·10$^{-18}$|
| | Quadratic | 4 | 261 | 78 | 1.155·10$^{-17}$ | 1.147·10$^{-17}$|
| | Mod. Gaussian | 5 | 229 | 46 | 1.026·10$^{-10}$ | 1.019·10$^{-10}$|
| | Exp. Mod. Gaussian | 6 | 193 | 10 | 0.006 | 0.006|

For all treatments, the beta model provided the best fit to the data. For each model, we report not only the AIC but also the differential AIC ($\Delta_i$), which is the difference between a given model's AIC and the lowest AIC. We also report the Akaike weight ($w_i$), which is the normalized likelihood that the model is the best one in the set. doi:10.1371/journal.pone.0020905.t001

Similar failures to adjust thermal physiology have been documented for other organisms exposed to changing environments. For example, a closely related species of isopod (*Porcellio laevis*) exhibited no change in the thermal sensitivity of rollover speed when exposed to thermal change [27]. Likewise, Niehaus and colleagues (in review) exposed field crickets to either constant or decreasing temperature, but observed no significant variation in the thermal sensitivities of feeding and locomotion. In contrast to our experiment, these studies did not include a treatment of abrupt thermal change (i.e., multiple constant temperatures). In our experiment, the absence of acclimation was unrelated to the pattern of thermal change (abrupt, gradual, or stochastic); in other words, isopods exposed to constant and fluctuating temperatures had similar thermal sensitivities. Some species do alter their thermal sensitivity of locomotor performance during thermal change. In these cases, individuals usually display increased performance in a novel environment after a period of acclimation [28,29,30,31]. Only rarely, however, does the thermal optimum of performance shift according to the mean environmental temperature.
Such was the case in a recent study of the thermal acclimation of swimming speed in crocodiles [32]. Nevertheless, the capacity for thermal acclimation does not seem related to the magnitude and predictability of environmental variation. For example, genotypes from tropical and temperate environments often exhibit similar capacities for acclimation (reviewed by [10]). Furthermore, different species in the same environment exhibit markedly different capacities for acclimation. For example, an Antarctic fish (*Pagothenia borchgrevinki*) substantially altered its thermal breadth of swimming performance when exposed to a warming of 5°C above natural conditions [33], whereas brittle stars (*Ophionotus victoriae*) were unable to tolerate a warming of 3°C [34]. Similarly, sea stars (*Odontaster validus*) acclimated to 6°C [35], whereas other marine invertebrates from the same environment failed to acclimate to 3°C after two months of exposure [36,37]. Even males and females of the same species differ in their ability to acclimate [38,39]. As with our findings, this variation in the acclimation of thermal sensitivity cannot be explained by the current theory [11]. Variation in thermal tolerance generally makes more sense in light of the current theory [11,13]. Heat and cold tolerances, as estimated by indices such as critical thermal maximum and chill-coma recovery, vary among populations and species along latitudinal clines (reviewed by [10,40]). Studies of acclimation to constant or fluctuating temperatures suggest that natural variation in thermal tolerances partly stems from adaptation to local environments. For example, individuals exposed to high temperatures usually express higher thermal limits than do individuals exposed to low temperatures (e.g., [41]). In our study, the time required to recover from chill coma varied among groups in a way that partially supported our prediction. We expected that isopods that had been exposed to 10°C would recover the fastest, whereas isopods that had been exposed to 20°C would recover the slowest. As predicted, isopods exposed to 20°C took the longest to recover. Yet isopods exposed to 10°C did not recover faster than isopods exposed to either predictable or stochastic declines in temperature.

Table 2. Thermal optima, performance breadths, and critical thermal maxima were similar for all treatment groups, but chill-coma recovery times varied significantly among groups.

| Treatment | Thermal optimum (°C) | Performance breadth (°C) | Critical thermal maximum (°C) | Chill-coma recovery (sec) |
|----------------------------|----------------------|--------------------------|------------------------------|---------------------------|
| Constant 20°C | 32.7 (31.8–34.3) | 10.9 (9.3–13.2) | 40.5 (40.1–40.9) | 171 (113–276) |
| Stochastic decline | 34.2 (32.5–35.2) | 10.7 (8.3–12.1) | 40.6 (40.3–40.9) | 112 (101–140) |
| Predictable decline | 33.5 (32.1–34.6) | 11.0 (9.1–12.8) | 40.6 (40.2–40.9) | 129 (108–177) |
| Constant 10°C | 34.4 (33.6–35.1) | 10.0 (8.5–11.7) | 40.4 (40.1–40.6) | 130 (114–157) |

Descriptive statistics are reported as means except for chill-coma recovery times, which are median values. Confidence intervals of the means are given in parentheses; 84% confidence intervals were calculated for means estimated by bootstrapping (thermal optima and performance breadths), and 95% confidence intervals were calculated for other means (critical thermal maxima and chill-coma recovery times). doi:10.1371/journal.pone.0020905.t002
Interestingly, this variation in cold tolerance was not associated with variation in heat tolerance, which accords with patterns observed in other species [42,43]. Although few studies have included thermal fluctuations, we can conclude that the acclimation of thermal tolerance does not necessarily depend on the variance of environmental temperature. Support for this idea comes from a recent study of zebrafish (*Danio rerio*); Schaefer and colleagues [44] found that fish exposed to warm conditions, whether constant or fluctuating, had higher critical thermal maxima than did fish exposed to cool conditions. That said, the strength of the interaction between the mean and variance of temperature likely depends on the range of values chosen for these parameters [45,46]. Individuals exposed to high mean temperatures and high variances are most likely to experience selection for heat tolerance, whereas those experiencing low mean temperatures and high variance are most likely to experience selection for cold tolerance. Such interactions would demand the use of realistic thermal fluctuations if biologists wish to draw ecological inferences from laboratory experiments. Unlike most studies of acclimation, our experiment involved a gradual shift in photoperiod in addition to several patterns of thermal change. Gradual changes in photoperiod provide reliable cues about seasonal changes in temperature (reviewed by [14]), and thus should facilitate thermal acclimation. To separate thermal and photoperiodic cues, we exposed all four groups of isopods to the same change in photoperiod while exposing each group to a different change in temperature. Thus, any variation in thermal sensitivity or thermal tolerance among the groups must have been caused by differences in thermal cues. Since we observed no variation in thermal sensitivity among groups, we concluded that changes in temperature did not trigger the acclimation of locomotor performance. However, we cannot know whether the identical shift in photoperiod throughout the experiment caused the thermal sensitivities of isopods in all groups to acclimate similarly. In other words, thermal acclimation of isopods might be triggered completely by photoperiod, a mechanism that could only be detected by comparing groups exposed to different photoperiods. Strong photoperiodic control of thermal acclimation has been observed in some ectotherms, such as fruit flies (*Drosophila* spp.) [5] and rainbow trout (*Oncorhynchus mykiss*) [47]. Interestingly, other studies have documented thermal acclimation under a constant photoperiod [48,49]. If photoperiod controlled thermal acclimation in our experiment, we should still wonder why the thermal optimum of locomotion was much higher than the temperatures experienced by the isopods. Moreover, isopods ran poorly at all temperatures included in our thermal treatments (see Figure 2), suggesting that acclimation of thermal breadth had not occurred either. Perhaps more will be learned by combining realistic thermal and photoperiodic cues when comparing the acclimatory responses of genotypes from different environments.

**Acknowledgments**

We thank Ben Williams for assistance during the experiment and Diana Hews for the use of equipment.

**Author Contributions**

Conceived and designed the experiments: MSS JJS MJA. Performed the experiments: MSS BSC JJS MJA. Analyzed the data: MSS BSC MWS MJA. Contributed reagents/materials/analysis tools: MSS MJA. Wrote the paper: MSS BSC JJS MWS MJA.

**References**

1.
1. Lagerspetz KYH (2006) What is thermal acclimation? Journal of Thermal Biology 31: 332–336.
2. Prosser CL (1991) Environmental and Metabolic Animal Physiology. New York: Wiley-Liss.
3. Hoffmann AA, Hallas RJ, Dean JA, Schiffer M (2003) Low potential for climatic stress adaptation in a rainforest *Drosophila* species. Science 301: 100–102.
4. Kalberer SR, Wisniewski M, Arora R (2006) Deacclimation and reacclimation of cold-hardy plants: Current understanding and emerging concepts. Plant Science 171: 3–16.
5. Sorensen JG, Loeschcke V (2002) Natural adaptation to environmental stress via physiological clock-regulation of stress resistance in *Drosophila*. Ecology Letters 5: 16–19.
6. Cunningham SC, Read J (2003) Do temperate rainforest trees have a greater ability to acclimate to changing temperatures than tropical rainforest trees? New Phytologist 157: 55–64.
7. Smith EM, Hadley EB (1974) Photosynthetic and respiratory acclimation to temperature in *Ledum groenlandicum* populations. Arctic and Alpine Research 6: 13–27.
8. Huey RB, Berrigan D, Gilchrist GW, Herron JC (1999) Testing the adaptive significance of acclimation: a strong inference approach. American Zoologist 39: 323–336.
9. DeWitt TJ, Sih A, Wilson DS (1998) Costs and limits of phenotypic plasticity. Trends in Ecology & Evolution 13: 77–81.
10. Angilletta MJ (2009) Thermal Adaptation: A Theoretical and Empirical Synthesis. Oxford: Oxford University Press.
11. Gabriel W (1999) Evolution of reversible plastic responses: inducible defenses and environmental tolerance. In: Harvell CD, Tollrian R, eds. The Ecology and Evolution of Inducible Defenses. Princeton: Princeton University Press. pp 286–305.
12. Gabriel W (2005) How stress selects for reversible phenotypic plasticity. Journal of Evolutionary Biology 18: 873–883.
13. Gabriel W (2006) Selective advantage of irreversible and reversible phenotypic plasticity. Archiv für Hydrobiologie 167: 1–20.
14. Botkin DB, Saxe H, Araujo MB, Betts R, Bradshaw RHW, et al. (2007) Forecasting the effects of global warming on biodiversity. Bioscience 57: 227–236.
15. Bradshaw WE, Quebodeaux MC, Holzapfel CM (2003) Circadian rhythmicity and photoperiodism in the pitcher-plant mosquito: Adaptive response to the photic environment or correlated response to the seasonal environment? American Naturalist 161: 735–748.
16. Cooper BS, Williams BH, Angilletta MJ (2008) Unifying indices of heat tolerance in ectotherms. Journal of Thermal Biology 33: 320–323.
17. Gibert P, Huey RB (2001) Chill-coma temperature in *Drosophila*: Effects of developmental temperature, latitude, and phylogeny. Physiological and Biochemical Zoology 74: 429–434.
18. Shih H-T, Mok H-K (2000) ETHOM: event-recording computer software for the study of animal behavior. Acta Zoologica Taiwanica 11: 47–61.
19. Angilletta MJ, Roth TC, Wilson RS, Niehaus AC, Ribeiro PL (2008) The fast and the frugal: speed and tortuosity trade off in running ants. Functional Ecology 22: 78–82.
20. Angilletta MJ (2006) Estimating and comparing thermal performance curves. Journal of Thermal Biology 31: 541–545.
21. Broyden CG (1970) The convergence of single-rank quasi-Newton methods. Mathematics of Computation 24: 365–382.
22. R Development Core Team (2008) R: A language and environment for statistical computing, 2.1.1 ed. Vienna, Austria: R Foundation for Statistical Computing.
23. Burnham KP, Anderson DR (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. New York: Springer.
24. Huey RB, Stevenson RD (1979) Integrating thermal physiology and ecology of ectotherms: a discussion of approaches. American Zoologist 19: 357–366.
25. Payton ME, Greenstone MH, Schenker N (2003) Overlapping confidence intervals or standard error intervals: what do they mean in terms of statistical significance? Journal of Insect Science 3: 34.
26. Crawley MJ (2007) The R Book. Chichester: Wiley. 950 p.
27. Folguera G, Bastias DA, Bozinovic F (2009) Impact of experimental thermal amplitude on ectotherm performance: Adaptation to climate change variability? Comparative Biochemistry and Physiology A: Molecular & Integrative Physiology 154: 389–393.
28. O'Steen S, Bennett AF (2003) Thermal acclimation effects differ between voluntary, maximum, and critical swimming velocities in two cyprinid fishes. Physiological and Biochemical Zoology 76: 484–496.
29. Johnson TP, Bennett AF (1995) The thermal acclimation of burst escape performance in fish: an integrated study of molecular and cellular physiology and organismal performance. Journal of Experimental Biology 198: 2165–2175.
30. Day N, Butler PJ (2005) The effects of acclimation to reversed seasonal temperatures on the swimming performance of adult brown trout *Salmo trutta*. Journal of Experimental Biology 208: 2683–2692.
31. Li X, Wang L (2005) Effect of temperature and thermal acclimation on locomotor performance of *Macrobiotus harmsworthi* Murray (Tardigrada, Macrobiotidae). Journal of Thermal Biology 30: 588–594.
32. Glanville EJ, Seebacher F (2006) Compensation for environmental change by complementary shifts of thermal sensitivity and thermoregulatory behaviour in an ectotherm. Journal of Experimental Biology 209: 4869–4877.
33. Seebacher F, Davison W, Lowe CJ, Franklin CE (2005) A falsification of the thermal specialization paradigm: compensation for elevated temperatures in Antarctic fishes. Biology Letters 1: 151–154.
34. Peck LS, Massey A, Thorne MAS, Clark MS (2009) Lack of acclimation in *Ophionotus victoriae*: brittle stars are not fish. Polar Biology 32: 399–402.
35. Peck LS, Webb KE, Miller A, Clark MS, Hill T (2008) Temperature limits to activity, feeding and metabolism in the Antarctic starfish *Odontaster validus*. Marine Ecology Progress Series 358: 181–189.
36. Peck LS, Webb KE, Bailey DM (2004) Extreme sensitivity of biological function to temperature in Antarctic marine species. Functional Ecology 18: 625–630.
37. Bailey DM, Johnston IA, Peck LS (2005) Invertebrate muscle performance at high latitude: swimming activity in the Antarctic scallop, *Adamussium colbecki*. Polar Biology 28: 464–469.
38. Wilson RS, Condon CHL, Johnston IA (2007) Consequences of thermal acclimation for the mating behaviour and swimming performance of female mosquitofish. Philosophical Transactions of the Royal Society, in press.
39. Wilson RS (2005) Temperature influences the coercive mating and swimming performance of male eastern mosquitofish. Animal Behaviour 70: 1387–1394.
40. Hoffmann AA, Scott M, Partridge L, Hallas R (2003) Overwintering in *Drosophila melanogaster*: outdoor field cage experiments on clinal and laboratory selected populations help to elucidate traits under selection. Journal of Evolutionary Biology 16: 614–623.
41. Matsukura K, Tsumuki H, Izumi Y, Wada T (2009) Temperature and water availability affect decrease of cold hardiness in the apple snail, *Pomacea canaliculata*. Malacologia 51: 263–269.
42. Kimura MT (2004) Cold and heat tolerance of drosophilid flies with reference to their latitudinal distributions. Oecologia 140: 442–449.
43. Ragland GJ, Kingsolver JG (2007) Influence of seasonal timing on thermal ecology and thermal reaction norm evolution in *Wyeomyia smithii*. Journal of Evolutionary Biology 20: 2144–2153.
44. Schaefer J, Ryan A (2006) Developmental plasticity in the thermal tolerance of zebrafish *Danio rerio*. Journal of Fish Biology 69: 722–734.
45. Angilletta MJ, Wilson RS, Navas CA, James RS (2003) Tradeoffs and the evolution of thermal reaction norms. Trends in Ecology & Evolution 18: 234–240.
46. Ruel JJ, Ayres MP (1999) Jensen's inequality predicts effects of environmental variation. Trends in Ecology & Evolution 14: 361–366.
47. Martin N, Kraffe E, Guderley H (2009) Effect of day length on oxidative capacities of mitochondria from red muscle of rainbow trout (*Oncorhynchus mykiss*). Comparative Biochemistry and Physiology A: Molecular & Integrative Physiology 152: 599–603.
48. Nunney L, Cheung W (1997) The effect of temperature on body size and fecundity in female *Drosophila melanogaster*: Evidence for adaptive plasticity. Evolution 51: 1529–1535.
49. Geister TL, Fischer K (2007) Testing the beneficial acclimation hypothesis: temperature effects on mating success in a butterfly. Behavioral Ecology 18: 658–664.
an opportunity has been missed. The Tribunal has taken a remarkable action by affirming its advisory jurisdiction on the basis of unpersuasive reasoning. Yet it could have demonstrated imagination and established a coherent system guaranteeing the rights of members of the international community in judicial proceedings.\(^1\)

The advisory jurisdiction of the Tribunal is a potential avenue that practitioners (including those working for the Authority and other international organizations) may wish to consider when advising clients. In doing so, they need to be able to understand the scope of the Tribunal’s jurisdiction to give advisory opinions, the limitations thereon, and applicable procedures. The Tribunal has two distinct advisory jurisdictions: first, the advisory jurisdiction of the Tribunal’s Seabed Disputes Chamber, which is expressly provided for in the Convention; and second, the advisory jurisdiction of the full Tribunal, which, it has been said, “was essentially created out of the blue by the Tribunal itself through the introduction of Article 138 of the Rules, 15 years after the signing ceremony in Montego Bay.”\(^2\) In considering the issues from the point of view of the practitioner, I shall focus on the latter, that is, on advisory opinions which may be given by the full Tribunal under article 21 of its Statute combined with other agreements.

The two Advisory Opinions given so far by the Tribunal relate in turn to each of these two distinct heads of jurisdiction: *Responsibilities and obligations of States sponsoring persons and entities with respect to activities in the Area* (*Request for Advisory Opinion submitted to the Seabed Disputes Chamber*)\(^3\) and *Request for an Advisory Opinion submitted by the Sub-Regional Fisheries Commission (SRFC)*.\(^4\) They offer only limited guidance as regards the advisory jurisdiction of the full Tribunal.

The Secretary-General of the United Nations has written, of advisory proceedings in general, that they “carry great weight and moral authority, often serving as an instrument of preventive diplomacy and contributing to the clarification of the state of international law.”\(^5\) Advisory opinions, as such, have no binding force, though they may be binding under a separate agreement. But in any event, they carry considerable authority; they most certainly have legal effects. Advisory opinions undoubtedly have the potential to contribute to the rule of law. Their role in the settlement of disputes may be indirect,\(^6\) yet by clarifying the law they promote legal certainty, an important aspect of the rule of law. There remain real concerns about how appropriate advisory proceedings may be in some circumstances. Distinguished authors have referred to the “current health”\(^7\) or “uses and abuses”\(^8\) of advisory opinions.

---

\(^1\) *Request for Advisory Opinion submitted by the Sub-Regional Fisheries Commission, Advisory Opinion, 2 April 2015, Declaration of Judge Cot, ITLOS Reports 2015*, p. 4, at p. 75, para. 13.

\(^2\) T. Ruys & A. Soete, “‘Creeping’ Advisory Jurisdiction of International Courts and Tribunals? The Case of the International Tribunal for the Law of the Sea”, 29 *Leiden Journal of International Law* 1 (2016), pp. 155–176, at p. 173.

\(^3\) *Responsibilities and obligations of States with respect to activities in the Area, Advisory Opinion, 1 February 2011, ITLOS Reports 2011*, p. 10 (hereafter “*Responsibilities and obligations of States Opinion*”).
The report of the 1943/44 Informal Inter-Allied Committee that considered the establishment of the ICJ (the London Committee) makes interesting reading.\(^9\) Some members of the Committee:

were inclined to think at first that the Court’s jurisdiction to give advisory opinions was anomalous and ought to be abolished, mainly on the ground that it was incompatible with the true function of a court of law, which was to hear and decide disputes. It was urged that the existence of this jurisdiction tended to encourage the use of the Court as an instrument for settling issues which were essentially of a political rather than of a legal character and that this was undesirable. Attention was drawn to instances of this which had occurred in the past. Subsidiary objections were that the existence of this jurisdiction might promote a tendency to avoid the final settlement of disputes by seeking opinions, and might lead to general pronouncements of law by the Court not (or not sufficiently) related to a particular issue or set of facts.\(^{10}\)

However, the Committee also saw “no objection to allowing two or more States, acting in concert, to apply direct to the Court for an advisory opinion”\(^{11}\) and stated:

[w]e are also agreed that, provided the necessary safeguards can be instituted, there would, for the reasons given in paragraph 68, be considerable advantage in permitting references on the part of two or more States acting in concert. Applications by an individual State ex parte could not be permitted, for, given the authoritative nature of the Court’s pronouncements, ex parte applications would afford a means whereby the State concerned could indirectly impose a species of compulsory jurisdiction on the rest of the world. In addition, the Court must have an agreed basis of fact on which to give its opinion.\(^{12}\)

The procedure for advisory proceedings raises serious questions.

---

\(^4\) *Request for Advisory Opinion submitted by the Sub-Regional Fisheries Commission, Advisory Opinion, 2 April 2015, ITLOS Reports 2015*, p. 4 (hereafter “*SRFC Opinion*”).

\(^5\) “Strengthening and coordinating United Nations rule of law activities”, Report of the Secretary-General, 20 August 2010, UN Doc. A/65/318, para. 25.

\(^6\) As the ICJ has said, “[t]he purpose of the advisory function is not to settle – at least directly – disputes between States, but to offer legal advice to the organs and institutions requesting the opinion”, *Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion, I.C.J. Reports 1996*, p. 226, at p. 236, para. 15.

\(^7\) R. Higgins, “A Comment on the Current Health of Advisory Proceedings”, in V. Lowe & M. Fitzmaurice (eds.), *Fifty Years of the International Court of Justice: Essays in Honour of Sir Robert Jennings* (Cambridge University Press, 1996), pp. 567–581, reprinted in R. Higgins, *Themes and Theories. Selected Essays, Speeches, and Writings in International Law* (Oxford University Press, 2009), pp. 1043–1055.

\(^8\) F. Berman, “The Uses and Abuses of Advisory Opinions”, in N. Ando et al. (eds.), *Liber Amicorum Judge Shigeru Oda* (2002), pp. 809–828. See also A. Aust, “Advisory Opinions”, 1 *Journal of International Dispute Settlement* (2010), pp. 123–151.

\(^9\) Report of the Informal Inter-Allied Committee on the Future of the Permanent Court of International Justice, February 10, 1944, 39 *AJIL* 1 (1945) Supplement, pp. 1–42.

\(^{10}\) Ibid., p. 20, para. 65.

\(^{11}\) Ibid., p. 22, para. 71.

\(^{12}\) Ibid., p. 23, para. 74. This suggestion was not accepted at San Francisco.
On one view, the fact that their purpose is to advise, not to decide a dispute, and the frequent absence of disputed facts, mean that the court or tribunal is able to be more ambitious in its abstract statements of the law, which might be less appropriate in a contentious case; yet that is not the real purpose of advisory opinions. Another view is that the absence of a dispute rooted in tested facts, and the absence of genuine adversarial pleadings, reduce the ability of the court or tribunal to give a well-considered and focused view of the law. This can to some extent be mitigated through procedural arrangements, as was done in *Accordance with International Law of the Unilateral Declaration of Independence in Respect of Kosovo*.\(^{13}\)

Requests for advisory opinions often seem abstract; the court or tribunal does not have the benefit of seeing a legal question through the prism of concrete facts. Or, conversely, in some cases where there are real facts before the Tribunal, the advisory procedure may not allow their proper consideration, or be appropriate for addressing any underlying dispute. Then there is the inappropriateness, to put it no higher, of addressing a dispute between States without each party’s consent. At the very least, these matters require careful attention to the procedure in advisory proceedings, and a readiness to tailor it to the particular circumstances of the case. For example, there is the almost random order of presentation at the oral hearing, dictated by the alphabet, often the French alphabet. In the *Responsibilities and obligations of States* case this meant that the main protagonists came way down the batting order.

There is, unfortunately, not much guidance on the Tribunal’s advisory jurisdiction. There are the provisions of the Convention: articles 159, paragraph 10, and 191 and, it now appears, Annex VI, article 21, which reads: “[t]he jurisdiction of the Tribunal comprises all disputes and all applications submitted to it in accordance with this Convention and all matters (“toutes les fois que cela” in French) specifically provided for in any other agreement which confers jurisdiction on the Tribunal.” Article 138 of the Rules of the Tribunal provides as follows:

1. The Tribunal may give an advisory opinion on a legal question if an international agreement related to the purposes of the Convention specifically provides for the submission to the Tribunal of a request for such an opinion.

2. A request for an advisory opinion shall be transmitted to the Tribunal by whatever body is authorized by or in accordance with the agreement to make the request to the Tribunal.

3. The Tribunal shall apply mutatis mutandis articles 130 to 137.\(^{14}\)

---

\(^{13}\) *Accordance with International Law of the Unilateral Declaration of Independence in Respect of Kosovo, Advisory Opinion, I.C.J. Reports 2010*, p. 403. In *Kosovo* there were two rounds of written pleadings, and the main protagonists – Serbia and Kosovo – addressed the Court at the outset of the oral hearing, and were given significantly more speaking time.

\(^{14}\) Judge Cot seemed to regard the adoption of this rule, and the absence of objection thereto by States, as conclusive for the jurisdiction of the Tribunal to give advisory opinions (*Declaration of Judge Cot*, *ITLOS Reports 2015*, p. 73, para. 4). That rather overlooks diplomatic and legal realities. States may well not react to such a rule in the abstract, on the assumption that they can question its validity if and when it is sought to be applied in practice.
The reasons for including article 138 in the Rules adopted in 1997 are unknown, and its legal basis was unspecified.\(^{15}\) It might simply have been that the judges wanted to expand the work of the Tribunal at a time when there seemed little immediate prospect of contentious cases, or even just to make the Tribunal more competitive with the ICJ, which had a relatively thriving advisory practice.\(^{16}\)

Regard may be had, within limits, to the practice of other international courts and tribunals. But caution is required. Each court and tribunal and its advisory jurisdiction (if any) is distinct, with its own context, characteristics and statutory provisions. Even the Tribunal’s Seabed Disputes Chamber and the full Tribunal are very different. It cannot simply be assumed that the case-law and experience of, say, the ICJ can be transposed to the Tribunal. There are also extensive writings on advisory proceedings, though again this mostly concerns courts other than the Tribunal and needs to be treated with prudence.

The full Tribunal’s power to give advisory opinions remains controversial.\(^{17}\) Faced with the Tribunal’s *SRFC Opinion*, “it is difficult to suppress a feeling of unease”\(^{18}\) and it is hard not to share the view that the Advisory Opinion, and particularly its paragraph 56, is “not fully convincing”.\(^{19}\) The issue eventually turned on the interpretation of article 21 of the Statute. Paragraph 56 of the Advisory Opinion reads:

The words all “matters” (“toutes les fois que cela” in French) should not be interpreted as covering only “disputes”, for, if that were to be the case, article 21 of the Statute would simply have used the word “disputes”. Consequently, it must mean something more than only “disputes”. That something more must include advisory opinions, if specifically provided for in “any other agreement which confers jurisdiction on the Tribunal”.

It is difficult not to agree with Judge Cot’s view of the weakness of the Tribunal’s “convoluted reasoning”.\(^{20}\) The Tribunal’s affirmation of its advisory jurisdiction has been described as “regrettably succinct”.\(^{21}\)

---

\(^{15}\) There is an almost complete lack of transparency in the adoption of the Rules of the Tribunal, which may be regretted. The same is true of the Rules of the ICJ (by contrast with the Rules of the PCIJ).

\(^{16}\) As was hinted by Judge Wolfrum at a conference in 2010: R. Wolfrum, “Final Remarks and Conclusions”, in R. Wolfrum & I. Gätzschmann (eds.), *International Dispute Settlement: Room for Innovations?* (Springer, 2013), p. 445.

\(^{17}\) Even within the EU: in Case C-73/14, the CJEU (Grand Chamber) noted that the neutral position expressed in the European Commission’s Written Statement to the Tribunal concerning the issue of the Tribunal’s jurisdiction to give the Advisory Opinion sought in Case No. 21 “was dictated by its concern to take into account, in the spirit of sincere cooperation, the divergent views on that issue expressed by the Member States within the Council.” Judgment of the Grand Chamber of 6 October 2015, *Council of the European Union v. European Commission*, para. 88.

\(^{18}\) T. Ruys & A. Soete, *supra* note 2, at p. 162.

\(^{19}\) M. Lando, “The Advisory Jurisdiction of the International Tribunal for the Law of the Sea: Comments on the Request for Advisory Opinion submitted by the Sub-Regional Fisheries Commission”, 29 *Leiden Journal of International Law* 2 (2016), pp. 441–461, at p. 442.
Nevertheless, on jurisdiction the Opinion was unanimous, and it would be a brave advocate who sought to persuade the Tribunal to change its mind.

The Tribunal offered little guidance on the circumstances in which it would be prepared to assert a jurisdiction to give advisory opinions, though it was encouraged to do so by many of those participating in the proceedings. It said:

In terms of article 21 of the Statute, it is the “other agreement” which confers such jurisdiction on the Tribunal. When the “other agreement” confers advisory jurisdiction on the Tribunal, the Tribunal then is rendered competent to exercise such jurisdiction with regard to “all matters” specifically provided for in the “other agreement”. Article 21 and the “other agreement” conferring jurisdiction on the Tribunal are interconnected and constitute the substantive legal basis of the advisory jurisdiction of the Tribunal.\(^{22}\)

Yet it is important for practitioners to be able to understand the scope of and “prerequisites” for the jurisdiction of the full Tribunal referred to in article 138. The Tribunal has only set out these prerequisites in the most formal terms, in a single paragraph (para. 60) of its 2015 Advisory Opinion:

These prerequisites are: an international agreement related to the purposes of the Convention specifically provides for the submission to the Tribunal of a request for an advisory opinion; the request must be transmitted to the Tribunal by a body authorized by or in accordance with the agreement mentioned above; and such an opinion may be given on “a legal question”.

It would have been helpful if in its Opinion the Tribunal had given more explanation of these prerequisites and the limits of its advisory jurisdiction. These matters were discussed extensively in many of the written and oral pleadings in the case. Judge Cot’s words of caution are well founded:

The dangers of abuse and manipulation, if the Tribunal does not provide a procedural framework by exercising its discretionary power, are evident.

---

\(^{20}\) *Declaration of Judge Cot*, *ITLOS Reports 2015*, p. 73, para. 2. As Judge Cot explained at para. 3: “The Tribunal considers its advisory jurisdiction to be founded on the combined provisions of an international agreement, the MCA Convention, and article 21 of its Statute. In my view this interpretation is misguided, as it is contrary to the rules codified in the 1969 Vienna Convention on the Law of Treaties. It presupposes that there is a plain meaning which can be ascribed to the article and that the term ‘matters’ is more precise than it actually is. Quite a number of States participating in the proceedings skilfully advocated an opposite and equally plausible interpretation. The ambiguity of the provision is blindingly obvious. Reference should have been made to the travaux préparatoires for the Convention, which in no way confirm the interpretation adopted by the Tribunal. I would add that that interpretation does not allow the different language versions to be reconciled. The French version does not refer to ‘matters’ and does not translate that term by ‘matières’, which would have been the case had the Convention drafters intended to confer upon the term the special meaning encompassing a reference to advisory jurisdiction.” However, Judge Cot’s own reasons for accepting the Tribunal’s jurisdiction are if anything even less convincing (*Declaration of Judge Cot*, *ITLOS Reports 2015*, p. 73, para. 4).

\(^{21}\) T. Ruys & A. Soete, *supra* note 2, p. 173.
States could, through bilateral or multilateral agreement, seek to gain an advantage over third States and thereby place the Tribunal in an awkward position.\(^{23}\)

And then there is the question of discretion. “The Tribunal *may* give an advisory opinion”, as article 138 of the Rules says. In the 2015 Advisory Opinion, the Tribunal “[took] refuge behind the jurisprudence of the International Court of Justice and [stated] that it is well settled that a request for an advisory opinion should not in principle be refused except for ‘compelling reasons’ (para. 71).”\(^{24}\) Yet it is not obvious that the approach of the ICJ would be appropriate, given the great differences between the ICJ and the Tribunal.\(^{25}\)

I have tried to highlight the potential importance of the Tribunal’s advisory jurisdiction, and some possible difficulties. As you may have gathered, and as I have indicated in the *Festschrift Wolfrum*,\(^{26}\) I am in two minds about the value of advisory opinions. At the very least, they need to be approached with “prudence and caution”.

---

\(^{22}\) *SRFC Opinion*, para. 58.

\(^{23}\) *Declaration of Judge Cot*, *ITLOS Reports 2015*, p. 74, para. 9.

\(^{24}\) *Ibid.*, para. 5.

\(^{25}\) *Ibid.*, para. 7: “The Tribunal’s position in advisory proceedings is very different from that of the Court. The advisory procedure in the International Court of Justice is governed by a tight framework. An opinion may be requested only by the General Assembly or the Security Council or with their authorization. The request is the subject of a preliminary discussion within a body in which all interested parties are represented. Each State concerned is thus involved in drafting the questions asked.”

\(^{26}\) M. Wood, “Advisory Jurisdiction: Lessons from Recent Practice”, in Hestermeyer et al. (eds.), *Coexistence, Cooperation and Solidarity. Liber Amicorum Rüdiger Wolfrum* (Brill, 2012), pp. 1833–1849.
May 17, 2016

Honorable Judith T. Won Pat, Ed.D.
Speaker
I Mina'Trentai Tres Na Liheslaturan Guåhan
155 Hessler Street
Hagåtña, Guam 96910

Dear Madame Speaker:

Attached is Bill No. 2-33 (COR), "An act to add a New Article 5 to Chapter 8, Title 4, Guam Code Annotated Relative to Creating a New 'Defined Benefit 1.75' Retirement System; and to creating a new Cash Balance Plan as alternatives to the defined contribution retirement system," which I have VETOED.

I vetoed this bill because we lack the information needed to ensure that the financial future of our government, government employees, and retirees is protected. In short, we do not know if we can afford it.

One thing is absolutely clear when you look at Bill 2-33: it does not provide the whole picture. And when you are talking about the possibility of bankrupting the Government of Guam by increasing the unfunded liabilities of the Retirement Fund by $140 million, half of a picture isn't good enough. We found ourselves in this situation in 1995, and we certainly cannot afford to make this mistake again — even though we closed the Defined Benefit plan, we are still paying for it today. But that's what we were given with this bill – a half picture.

Further, the actuarial report that is supposed to support the bill is no longer relevant because of the amendments the Legislature made to it. This is why the Department of Administration wants a second actuarial report. The Department of Administration has received a response to the request for proposals. Also, to address Vice Speaker Cruz's concern, it has expanded its search to professional publications. This study could provide the answers we need. In addition, its findings could help us create a retirement plan that addresses the needs of our retirees in a responsible manner, does not burden our taxpayers, and does not put our government's financial health at risk.

We owe it to our taxpayers, government employees, and retirees to ensure that a new retirement plan is deliberate, fiscally prudent, and takes the entire financial impact into account.

Respectfully,

EDDIE BAZA CALVO
Governor of Guam

CERTIFICATION OF PASSAGE OF AN ACT TO I MAGA'LÅHEN GUÅHAN

This is to certify that Substitute Bill No. 2-33 (LS), "AN ACT TO ADD A NEW ARTICLE 5 TO CHAPTER 8, TITLE 4, GUAM CODE ANNOTATED RELATIVE TO CREATING A NEW 'DEFINED BENEFIT 1.75' RETIREMENT SYSTEM; AND TO CREATING A NEW CASH BALANCE PLAN ('GUAM RETIREMENT SECURITY PLAN') AS ALTERNATIVES TO THE DEFINED CONTRIBUTION RETIREMENT SYSTEM UPON TIMELY ELECTION IN ACCORDANCE WITH REGULATIONS TO BE PROMULGATED; TO AMEND §§ 8208 AND 8209(a) OF ARTICLE 2, CHAPTER 8, TITLE 4, GUAM CODE ANNOTATED, RELATIVE TO INCREASING THE DEFINED CONTRIBUTION RETIREMENT SYSTEM MEMBER AND EMPLOYER CONTRIBUTIONS TO SIX AND TWO TENTHS PERCENT (6.2%); AND TO AMEND § 8137(b), ARTICLE 1, CHAPTER 8, TITLE 4, GUAM CODE ANNOTATED, RELATIVE TO EXTENDING THE AMORTIZATION PERIOD OF THE UNFUNDED LIABILITY FOR PRIOR SERVICE," was on the 3rd day of May 2016, duly and regularly passed.

Judith T. Won Pat, Ed.D.
Speaker

Attested:

Tina Rose Muña Barnes
Legislative Secretary

This Act was received by I Maga'låhen Guåhan this 5th day of May, 2016, at 4:46 o'clock P.M.

APPROVED:

EDWARD J.B. CALVO
I Maga'låhen Guåhan

Date: MAY 17 2016

Public Law No.

Bill No. 2-33 (LS)
As substituted by the Committee on Appropriations and Adjudication; amended in the Committee of the Whole; and further amended on the Floor.

Introduced by:
B. J. F. Cruz
Michael F.Q. San Nicolas
T. C. Ada
V. Anthony Ada
Frank B. Aguon, Jr.
Frank F. Blas, Jr.
James V. Espaldon
Brant T. McCreadie
Tommy Morrison
T. R. Muña Barnes
R. J. Respicio
Dennis G. Rodriguez, Jr.
Mary Camacho Torres
N. B. Underwood, Ph.D.
Judith T. Won Pat, Ed.D.

AN ACT TO ADD A NEW ARTICLE 5 TO CHAPTER 8, TITLE 4, GUAM CODE ANNOTATED RELATIVE TO CREATING A NEW "DEFINED BENEFIT 1.75" RETIREMENT SYSTEM; AND TO CREATING A NEW CASH BALANCE PLAN ("GUAM RETIREMENT SECURITY PLAN") AS ALTERNATIVES TO THE DEFINED CONTRIBUTION RETIREMENT SYSTEM UPON TIMELY ELECTION IN ACCORDANCE WITH REGULATIONS TO BE PROMULGATED; TO AMEND §§ 8208 AND 8209(a) OF ARTICLE 2, CHAPTER 8, TITLE 4, GUAM CODE ANNOTATED, RELATIVE TO INCREASING THE DEFINED CONTRIBUTION RETIREMENT SYSTEM MEMBER AND EMPLOYER CONTRIBUTIONS TO SIX AND TWO TENTHS PERCENT (6.2%); AND TO AMEND § 8137(b), ARTICLE 1, CHAPTER 8, TITLE 4, GUAM CODE ANNOTATED, RELATIVE TO EXTENDING THE AMORTIZATION PERIOD OF THE UNFUNDED LIABILITY FOR PRIOR SERVICE.

BE IT ENACTED BY THE PEOPLE OF GUAM:

Section 1. Legislative Findings and Intent.

I Mina'Trentai Tres Na Liheslaturan Guåhan finds that there are three (3) separate retirement plans generally available to government of Guam employees: (a) employees employed on or before September 30, 1995 were required to become members of the Retirement Fund (Defined Benefit Plan) under Article 1, Chapter 8, Title 4 of the Guam Code Annotated; (b) employees employed after September 30, 1995 were and remain required to become participants in the Defined Contribution Retirement System under Article 2, Chapter 8, Title 4 of the Guam Code Annotated; and (c) all employees, except those participating in a government of Guam sponsored plan under Section 403(b) of the Internal Revenue Code, may voluntarily participate in the Deferred Compensation Program under Article 3, Chapter 8, Title 4 of the Guam Code Annotated.

I Mina'Trentai Tres Na Liheslaturan Guåhan further finds that the Defined Contribution Retirement System was established in 1995 amid concerns and findings by the Twenty-Third Guam Legislature that:

(a) The Actuarial Valuation of the Retirement [Defined Benefit] Plan prepared by Deloitte & Touche as of September 30, 1993, expressed concern that benefit levels are rather excessive in comparison to most other government retirement systems.

(b) Benefit levels and retirement policy should be reviewed and benefit levels should be adjusted in order to address specific inequities, excessiveness, and desired policy objectives.

(c) In establishing benefits for a new plan, generally accepted retirement income level standards should be observed, the details of any new plan must be considered thoroughly, and a comprehensive education and implementation plan must be developed. Public Law 23-42:1.

I Mina'Trentai Tres Na Liheslaturan Guåhan has continued to review benefit levels and retirement policy in light of retirements of participants in the Defined Contribution Retirement System, as well as the Defined Benefit Plan. The review by I Mina'Trentai Tres Na Liheslaturan Guåhan involved consideration of a comprehensive and detailed study by Milliman, Inc. of alternative retirement plans and arrangements, based upon the Actuarial Valuation under the Defined Benefit Plan as of September 30, 2008, and updated through September 30, 2014.
The alternatives, which included cost comparisons between Social Security, the Defined Contribution Retirement System, Social Security plus the Defined Contribution Retirement System, and the Defined Benefit 1.75 Plan (formerly known as the Hybrid Plan) herein, were prepared as part of an analysis of funding requirements and retirement benefit levels of participants in the Defined Contribution Retirement System and future government of Guam employees.

I Mina'Trentai Tres Na Liheslaturan Guåhan finds that an alternative retirement program that combines a defined benefit "floor" of benefits with a mandatory salary-reduction deferred compensation program is necessary to provide a reasonable opportunity for current government employees to maintain their standards of living in retirement, while also balancing the government's budgetary needs and obligations to active and retired government employees and their survivors. I Mina'Trentai Tres Na Liheslaturan Guåhan finds that this combination of benefits under an alternative "Defined Benefit 1.75 Retirement System" is reasonable and prudent to balance the needs of government employees as well as the government as a whole.

I Mina'Trentai Tres Na Liheslaturan Guåhan intends to establish a new "Defined Benefit 1.75 Retirement System" to be comprised of participation in: (1) the preexisting Retirement Fund, which shall provide for an unreduced retirement defined benefit equal to one and seventy-five hundredths percent (1.75%) of an employee's average annual salary for each year of credited service at retirement age 62; and (2) the preexisting Deferred Compensation Program, providing for a mandatory pre-tax salary reduction contribution equal to one percent (1%) of a member's base salary.

I Mina'Trentai Tres Na Liheslaturan Guåhan further intends to create and establish an alternative new retirement plan to provide for the secure, fair, and orderly retirement of the personnel of the government of Guam. The new retirement plan is intended to be a tax-qualified cash balance plan, to be known as the government of Guam Retirement Security Plan (GRSP), which shall constitute a body corporate; all business of the GRSP shall be conducted in the name of the government of Guam Retirement Security Plan. The Board of Trustees created pursuant to Article 1, Chapter 8, Title 4 of the Guam Code Annotated shall administer the government of Guam Retirement Security Plan. The Board of Trustees may sue and be sued, contract and be contracted with, and conduct all the business of the GRSP in the name of the government of Guam Retirement Security Plan.

I Mina'Trentai Tres Na Liheslaturan Guåhan intends that beginning January 1, 2018, the GRSP and the Defined Contribution Retirement System shall be the retirement programs for all new employees whose employment commences on or after that date. After January 1, 2018, all new employees shall be automatically enrolled in the GRSP but will have sixty (60) days from the date of hire to elect to participate in the Defined Contribution Retirement System.

I Mina'Trentai Tres Na Liheslaturan Guåhan further intends to allow participants with interests in the Defined Contribution Retirement System to timely elect to participate in, and in certain circumstances transfer their account balances to, either the "Defined Benefit 1.75 Retirement System" or the GRSP, in accordance with regulations promulgated by the Board of Trustees of the Retirement Fund pursuant to the Administrative Adjudication Act.
I Mina'Trentai Tres Na Liheslaturan Guåhan further intends that, effective January 1, 2018, members' and employer contributions to members' accounts in the Defined Contribution Retirement System shall be increased from five percent (5%) to six and two tenths percent (6.2%).

Section 2. Summary of Key Provisions in New Defined Benefit 1.75 Retirement System.

A new Article 5 as described in Section 3 of this Act shall be added to Title 4 (Public Officers and Employees), Chapter 8 (Retirement of Public Employees), of the Guam Code Annotated, to create a "Defined Benefit 1.75 Retirement System" that is comprised of participation in the preexisting Retirement Fund and Deferred Compensation Program. Subsections A to E of this Section 2 are provided only as a convenient summary of the key provisions of the Defined Benefit 1.75 Retirement System, and are not meant to be codified in Chapter 8, Title 4 of the Guam Code Annotated.

A. Voluntary Participation in the Defined Benefit 1.75 Retirement System

(1) New Employees

With limited exceptions, new employees whose employment commences between April 1, 2017 and December 31, 2017, inclusive, may elect, during the "Election Window" commencing on April 1, 2017 and ending on December 31, 2017, to participate in the Defined Benefit 1.75 Retirement System effective as of January 1, 2018.

(2) Former Employees Who Are Reemployed

(a) Reemployed employees who have retired under government of Guam sponsored plans are prohibited from participating in the Defined Benefit 1.75 Retirement System. All reemployed employees who retired under the Defined Benefit Plan, the Defined Contribution Retirement System, or the Defined Benefit 1.75 Retirement System are required to participate in the Defined Contribution Retirement System.

(b) Reemployed employees who were members of the Defined Benefit Plan and did not refund (withdraw) their employee contributions upon separation from service shall resume membership in the Defined Benefit Plan.

(c) Reemployed employees (prior to January 1, 2018) with interests in the Defined Contribution Retirement System shall participate in the Defined Contribution Retirement System, unless such eligible employees timely elect to participate in the Defined Benefit 1.75 Retirement System (and in some cases, transfer their account balances) under the following circumstances:

i. Such eligible employees who are reemployed prior to September 30, 2017, may, during the "Election Window" commencing on April 1, 2017 and ending on September 30, 2017 (October 31, 2017 for reemployment commencing during the month of September 2017), elect to participate in the Defined Benefit 1.75 Retirement System, and transfer the required portion of their Defined Contribution Retirement System account balances to the Retirement Fund for credited service effective as of January 1, 2018.

ii. Such eligible employees who are reemployed between October 1, 2017 and December 31, 2017, inclusive, may, within thirty (30) days of their reemployment, elect to participate in the Defined Benefit 1.75 Retirement System effective as of January 1, 2018, but may not transfer their account balances in the Defined Contribution System to the Retirement Fund for credited service.
(3) Disabled Participants Receiving Ancillary Benefits Under Article 4

Disabled participants in the Defined Contribution Retirement System who are receiving pre-retirement disability benefits under Title 4, Chapter 8, Article 4 of the Guam Code Annotated prior to December 31, 2017, may, during the "Election Window" commencing on April 1, 2017 and ending on December 31, 2017, elect to participate in the Defined Benefit 1.75 Retirement System, and transfer their account balance in the Defined Contribution Retirement System to the Retirement Fund for credited service, to be effective upon the later of (A) January 1, 2018, or (B) termination of their disability benefits in connection with their retirement or their reemployment with the government of Guam.

(4) Current Employees

Employees participating in the Defined Contribution Retirement System on March 31, 2017 may, during the "Election Window" commencing on April 1, 2017 and ending on September 30, 2017, elect to participate in the Defined Benefit 1.75 Retirement System, and transfer the required portion of their Defined Contribution Retirement System account balances to the Retirement Fund thereunder, effective as of January 1, 2018. If the participant's account has been reduced by any withdrawal, the participant may repay the withdrawn amounts, plus interest, in order to reinstate full credited service under the Defined Benefit 1.75 Retirement System.

B. Employee Contributions

(1) Mandatory pre-tax employee contributions equal to nine and five tenths percent (9.5%) of the member's base salary shall be made to the Retirement Fund and subject to the management and administration of the Retirement Fund under Article 1, Chapter 8, Title 4, of the Guam Code Annotated.

(2) Mandatory pre-tax employee contributions equal to one percent (1%) of the member's base salary shall be made to the Deferred Compensation Program under Article 3, Chapter 8, Title 4 of the Guam Code Annotated.

C. Employer Contributions

Employer contributions on behalf of members under the Retirement Fund shall be in accordance with applicable contribution requirements described in § 8137, Article 1, Chapter 8, Title 4 of the Guam Code Annotated.

D. Member Retirement Benefits

(1) The Retirement Fund shall provide a retirement annuity in an amount equal to one and seventy-five hundredths percent (1.75%) of a member's average annual salary (the average of the highest three (3) annual base salaries, with non-base compensation excluded) for each year of credited service (subject to a minimum of One Thousand Two Hundred Dollars ($1,200) per year, and a maximum of eighty-five percent (85%) of average annual salary). The retirement annuity shall be subject to annual increase based on specified fixed dollar increments. A member shall be eligible to receive an unreduced retirement annuity at age sixty-two (62) (where the maximum benefit of eighty-five percent (85%) of average annual salary is achieved with forty-nine (49) years of credited service), and shall be eligible to receive a reduced retirement annuity at age fifty-five (55) with twenty-five (25) years of credited service (subject to a reduction of five tenths percent (.5%) per month for each month under age sixty-two (62)).

(2) The Deferred Compensation Program shall provide a retirement benefit equal to a member's account balance at the time of distribution, which account balance may be paid in the form of annuity, installment, or lump sum payments as may be elected by the member.
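The benefit formula summarized in subsection D(1) reduces to simple arithmetic. The following sketch is illustrative only and is not part of the bill; the function name and sample figures are hypothetical, eligibility rules (e.g., minimum years of service) are deliberately omitted, and the early-retirement reduction is applied in simplified form.

```python
def db175_annuity(avg_annual_salary: float, years_of_service: float,
                  retirement_age: int) -> float:
    """Annual annuity under the summarized Defined Benefit 1.75 formula:
    1.75% of average annual salary per year of credited service, capped
    at 85% of average annual salary, floored at $1,200 per year, with a
    0.5% reduction per month under age 62 for early retirement."""
    annuity = 0.0175 * avg_annual_salary * years_of_service
    annuity = min(annuity, 0.85 * avg_annual_salary)  # 85% cap
    annuity = max(annuity, 1200.0)                    # $1,200 annual floor
    if retirement_age < 62:
        months_early = (62 - retirement_age) * 12     # whole years assumed
        annuity *= 1 - 0.005 * months_early           # 0.5% per month
    return annuity

# Example: $50,000 average annual salary, 30 years of service, retiring at 62:
# 0.0175 * 50,000 * 30 = $26,250 per year (under the 85% cap of $42,500).
print(db175_annuity(50_000, 30, 62))
```

On the same figures, retiring at age fifty-five (55) would apply an 84-month reduction (42%), yielding $15,225 per year; under the Act's text the reduction runs from the annuity determined as of age sixty-two (62), which this simplified sketch only approximates.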
E. Survivor and Disability Benefits

(1) Under the Retirement Fund, surviving spouses shall be eligible for survivor benefits equal to sixty percent (60%) of a member's retirement annuity (minimum of One Thousand Two Hundred Dollars ($1,200) per year). Surviving minor children shall be eligible for surviving child benefits equal to Two Thousand Eight Hundred Eighty Dollars ($2,880) per child (up to Fourteen Thousand Four Hundred Dollars ($14,400) in the aggregate). An additional lump sum benefit of One Thousand Dollars ($1,000) also is available. The survivor annuity (but not the annuity for surviving minor children) shall be subject to annual increase based on specified fixed dollar increments.

(2) Under the Retirement Fund, a member shall be eligible for a disability retirement annuity equal to fifty percent (50%) of the member's average annual salary. The disability annuity shall be subject to annual increase based on specified fixed dollar increments.

Section 3. Statutory Provisions Establishing Defined Benefit 1.75 Retirement System.

A new Article 5 is hereby added to Chapter 8, Title 4 of the Guam Code Annotated, to read as follows:

"Article 5
Defined Benefit 1.75 Retirement System

§ 8501. Definitions.

As used in this Article, unless the context otherwise requires:

(a) *Actuarial Cost of Credited Service* means a percentage of historical base salary corresponding to the service for which a member's account is credited with employer contributions under the Defined Contribution Retirement System through the date preceding the member's transfer to the Defined Benefit 1.75 Retirement System. The applicable percentage shall be specified by the Board based on an actuarial review of the cost of credited service. The same percentage shall apply to all members.

(b) *Board of Trustees* or *Board* means the Board of Trustees of the government of Guam Retirement Fund, which is responsible for the direction and operation of the affairs and business of the Defined Benefit 1.75 Retirement System.

(c) *Code* means the United States Internal Revenue Code of 1986, as amended, and corresponding references to the Guam Territorial Income Tax Code, as may be appropriate.

(d) *Deferred Compensation Program* means the government of Guam Deferred Compensation Program established and operated in accordance with Article 3 of this Chapter and inclusive of modifications in the terms and conditions of the Deferred Compensation Program applicable to the members of the Defined Benefit 1.75 Retirement System under this Article 5.

(e) *Defined Contribution System* means the government of Guam Defined Contribution Retirement System established and operated in accordance with Article 2 of this Chapter and inclusive of modifications in the terms and conditions of the Defined Contribution Retirement System applicable to the members of the Defined Benefit 1.75 Retirement System under this Article 5.

(f) *Director* means the Director of the government of Guam Retirement Fund as appointed by the Board in accordance with § 8140 of Article 1, Chapter 8, Title 4 of the Guam Code Annotated.

(g) *Employer* means each and every line department or agency of the Executive Branch, every autonomous and semi-autonomous agency or instrumentality, every public corporation, every educational institution, whether secondary or post-secondary, the Legislative Branch, the Judicial Branch, the Public Defender Corporation, and every public entity hereafter to be created by law within Guam that has employed or employs a member.
(h) *Excess Account Balance* means the amount by which a member's account balances under § 8208 (Member's Contributions) and § 8209.1(a) (Rollover of Member's Contributions from § 8164(a)) of this Chapter exceed the member's Actuarial Cost of Credited Service.

(i) *Existing Retirement System* means the government of Guam Retirement Fund established and operated in accordance with Article 1 of this Chapter and exclusive of modifications in the terms and conditions of the Existing Retirement System applicable to the members of the Defined Benefit 1.75 Retirement System under this Article 5.

(j) *Defined Benefit 1.75 Retirement System* or *DB 1.75 Plan* means the government of Guam Defined Benefit 1.75 Retirement System established and operated under this Article 5. The Defined Benefit 1.75 Retirement System shall consist of the mandated and coordinated participation of members in two separate and preexisting retirement programs: (1) the Retirement Fund established and maintained under Article 1, Chapter 8, Title 4 of the Guam Code Annotated, inclusive of the modifications to the terms and conditions of the Retirement Fund for Defined Benefit 1.75 Plan members as set forth in this Article 5; and (2) the Deferred Compensation Program established and maintained under Article 3, Chapter 8, Title 4 of the Guam Code Annotated.

(k) *Member* or *Defined Benefit 1.75 Plan member* means any person who meets the eligibility requirements for membership in the Defined Benefit 1.75 Retirement System as described in § 8502 and participates in the Defined Benefit 1.75 Retirement System.

(l) *Retirement Fund* means the government of Guam Retirement Fund established and operated in accordance with Article 1 of this Chapter and inclusive of the modifications in the terms and conditions of the Existing Retirement System applicable to members of the Defined Benefit 1.75 Retirement System under this Article 5.

§ 8502. Establishment of the Government of Guam Defined Benefit 1.75 Retirement System; Membership in DB 1.75 Retirement System.

(a) Defined Benefit 1.75 Retirement System. Beginning January 1, 2018, the government of Guam Defined Benefit 1.75 Retirement System shall be established hereunder, and the System shall be comprised of membership under the government of Guam Retirement Fund established under Article 1 of this Title and the Deferred Compensation Program established under Article 3 of this Title. The Defined Benefit 1.75 Retirement System does not comprise a separate fund or trust for members thereunder, but is the coordinated participation on a mandatory basis at specified benefit levels in the Retirement Fund and on a voluntary basis at specified benefit levels in the Deferred Compensation Program. Beginning January 1, 2018, the Defined Benefit 1.75 Retirement System shall be the retirement program for employees who timely elect to participate in the Defined Benefit 1.75 Retirement System. Members of the Defined Contribution System whose employment continues beyond December 31, 2017, shall continue to contribute to and participate in the Defined Contribution System without change in provisions or benefits, except as provided from time to time under the Defined Contribution System.

(b) Membership in Retirement Fund

(1) Defined Benefit 1.75 Plan Election by New Employees in Defined Contribution System.
All new employees whose employment commences between April 1, 2017 and December 31, 2017, inclusive, and who satisfy the eligibility requirements for membership under §§ 8105 and 8106, may elect to participate in the Retirement Fund as “Defined Benefit 1.75 Plan members” in accordance with such eligibility requirements. No additional new employees shall be admitted to the Existing Retirement System on or after January 1, 2018, except as provided hereunder or provided from time to time under the Existing Retirement System. Members of the Existing Retirement System whose employment continues beyond December 31, 2017, shall continue to contribute and participate in the Existing Retirement System without change in provisions or benefits, except as provided from time to time under the Existing Retirement System. Except for those members who elect to participate in the Defined Benefit 1.75 Retirement System pursuant to § 8502(b)(2), any new employee hired after January 1, 2018 and who elects to participate in the Defined Contribution Retirement System and current members of the Defined Contribution System whose employment continues beyond December 31, 2017, shall continue to contribute and participate in the Defined Contribution System without change in provisions or benefits, except as provided from time to time under the Defined Contribution System. (2) Defined Benefit 1.75 Plan Election by Current Employees in Defined Contribution System. All employees who are members in the Defined Contribution System on March 31, 2017 shall be eligible to elect on a voluntary basis to become Defined Benefit 1.75 Plan members effective as of January 1, 2018, and to terminate active participation in the Defined Contribution System as of such date, by making the appropriate election with the Defined Benefit 1.75 Retirement System in the form and manner as determined by the Board during the election period commencing on April 1, 2017 and ending on September 30, 2017. After having made such election to become a Defined Benefit 1.75 Plan member, the member may not change such election or again become an active member of the Defined Contribution System. The failure to make such election shall be deemed to constitute an election by the member to remain as an active member under the Defined Contribution System. Such election shall not apply to members in the Defined Contribution System who have retired or otherwise terminated employment from government service and who are not employed by the government of Guam at the time of the election and as of the January 1, 2018, effective date of participation in the Defined Benefit 1.75 Plan. (3) Reemployment of Existing Retirement System Member. Any employee who is a member in the Existing Retirement System, who leaves government service and who is later reemployed after December 31, 2017 by the government of Guam, shall become an active member in the Existing Retirement System upon reemployment if such employee has not received a refund of contributions resulting in ineligibility for membership under § 8130(b), and if such employee otherwise meets the eligibility requirements under the Existing Retirement System. (4) Reemployment of Defined Contribution System Member. 
Any employee who is a member maintaining an interest in the Defined Contribution System, who leaves government service and who is later reemployed by the government of Guam prior to September 30, 2017, shall become an active member in the Defined Contribution System upon reemployment if such employee otherwise meets the eligibility requirements under the Defined Contribution System. (A) However, if such a member is reemployed during the period commencing on April 1, 2017, and ending on September 30, 2017, then: (i) the member shall be eligible to elect on a voluntary basis to become a member of the Defined Benefit 1.75 Plan if such member otherwise meets the eligibility requirements for membership under §§ 8105 and 8106; (ii) the election period for this election shall be the period commencing on April 1, 2017, and ending on September 30, 2017 (or October 31, 2017 for members reemployed during the month of September 2017), and the effective date of the member’s membership in the Defined Benefit 1.75 Plan shall be January 1, 2018; and (iii) the member’s account under the Defined Contribution System shall be subject to transfer to the Defined Benefit 1.75 Retirement System in accordance with §§ 8503(d)(2) and 8504. (B) Further, if such a member is reemployed between October 1, 2017 and December 31, 2017, inclusive: (i) the member shall be eligible to elect on a voluntary basis to become a member of the Defined Benefit 1.75 Plan if such member otherwise meets the eligibility requirements for membership under §§ 8105 and 8106; (ii) the election period for such election shall be the thirty (30) day period beginning on the date of reemployment, and the effective date of the member’s membership in the Defined Benefit 1.75 Plan shall be the date of reemployment; and (iii) the member’s account under the Defined Contribution System shall not be subject to transfer to the Retirement Fund. (5) Reemployment of Defined Contribution System Member on Disability. Notwithstanding § 8502(b)(4), a member of the Defined Contribution System who had incurred a disability and at any time been eligible to receive any benefits provided under any long-term disability insurance policy issued pursuant to § 8213 or Article 4 of this Title shall not be eligible for membership under the Defined Benefit 1.75 Retirement System upon reemployment, but such member who satisfies the eligibility requirements for membership under §§ 8206 and 8207 at such time shall participate in the Defined Contribution System in accordance with such eligibility requirements. However, in the case of a member of the Defined Contribution System who is receiving disability benefits under § 8213 or Article 4 of this Title on or before September 30, 2017, such member shall be eligible to elect on a voluntary basis to become a member of the Defined Benefit 1.75 Plan in the event of the member’s reemployment or retirement on or after January 1, 2018. For this purpose, the election period for this election shall be the period commencing on April 1, 2017, and ending on September 30, 2017 (or October 31, 2017, for employees who commence receiving disability benefits during the month of September 2017), and the effective date of the member’s membership in the Defined Benefit 1.75 Plan shall be the later of: (A) January 1, 2018, or (B) the date of the member’s reemployment or retirement. (6) Reemployment of government of Guam Retiree. 
Any employee who retired under the Existing Retirement System, the Defined Contribution System, or the Defined Benefit 1.75 Retirement System, shall participate in the Defined Contribution Plan upon reemployment. (c) Membership in Deferred Compensation Program. Defined Benefit 1.75 Plan members shall participate in the Deferred Compensation Program effective as of the date on which they commence participation in the Retirement Fund. (d) Membership in Welfare Benefit Plans. Defined Benefit 1.75 Plan members shall not be eligible to participate in the welfare benefit plans established and maintained under Article 4 of this Title. As such, members of the Defined Contribution System who elect to become Defined Benefit 1.75 Plan members pursuant to § 8502(b) shall terminate participation in such welfare benefit plans effective as of the date on which they commence participation in the Defined Benefit 1.75 Retirement System. (e) Applicability of Articles 1 through 3, Chapter 8. Except as otherwise provided hereunder, with respect to Defined Benefit 1.75 Plan members who participate in the Retirement Fund, Defined Contribution System, and Deferred Compensation Program in accordance with the Defined Benefit 1.75 Retirement System provisions under this Article 5, the provisions of Articles 1 through 3 of this Chapter 8, respectively, shall be applicable to Defined Benefit 1.75 Plan members in a manner no different than the application to members who are not Defined Benefit 1.75 Plan members. § 8503. Defined Benefit 1.75 Plan Member Basic Retirement Annuity (a) Amount of Basic Retirement Annuity. Notwithstanding the otherwise applicable formula under § 8122 or other successor provision, the basic retirement annuity payable to a Defined Benefit 1.75 Plan member under the Retirement Fund shall be the following: an amount equal to one and seventy-five hundredths percent (1.75%) of average annual salary for each year of credited service; no basic retirement annuity shall exceed eighty-five percent (85%) of average annual salary; and the basic retirement annuity shall not, in any case, be less than One Thousand Two Hundred Dollars ($1,200) per year per member. For purposes of defining “salary” and “average annual salary” under § 8104(i) and (j), respectively, with respect to the determination of the basic retirement annuity payable to a Defined Benefit 1.75 Plan member, the term “salary” shall mean the member’s base salary excluding all non-base compensation. (b) Automatic Increases in Annuity for Basic Retirement Annuity. Any Defined Benefit 1.75 Plan member receiving a basic retirement annuity under the Retirement Fund shall receive each year on the anniversary date of the member’s retirement or entitlement, an automatic “sliding scale” increase in the member’s annual annuity as applicable under the Retirement Fund pursuant to § 8122 or other successor provision. (c) Retirement. Notwithstanding the otherwise applicable retirement requirements under §§ 8119 through 8120.1 or other successor provisions, a Defined Benefit 1.75 Plan member may retire on a service retirement annuity under the Retirement Fund, upon written application to and approval by the Board; provided that such member shall have attained at least sixty-two (62) years of age and has completed five (5) years of service. 
However, at the option of the Defined Benefit 1.75 Plan member, whether active or inactive, such member may retire after (1) attaining at least fifty-five (55) years of age and (2) completing twenty-five (25) years of service, in which case the retirement annuity for such member shall be reduced by one half (1/2) of one percent (1%) for each month such member is under the age of sixty-two (62) years at such time of retirement, from the amount of the retirement annuity determined for such member as of his attainment of age sixty-two (62). (d) Credited Service for Transfers from Defined Contribution System (1) Transfer of Account to Defined Benefit 1.75 Retirement System. With respect to a member in the Defined Contribution System on March 31, 2017 who timely elects to be a member in the Defined Benefit 1.75 Retirement System effective as of January 1, 2018, in accordance with the election procedures under § 8502(b)(2), the member’s account balance under the Defined Contribution System shall be transferred to the Defined Benefit 1.75 Retirement System, in accordance with § 8504, effective as of January 1, 2018. Further, with respect to a member in the Defined Contribution System who is reemployed by the government of Guam during the period between April 1, 2017 and September 30, 2017, inclusive, and who becomes a member in the Defined Benefit 1.75 Retirement System effective as of January 1, 2018, in accordance with the election procedures under § 8502(b)(4), the member’s account balance under the Defined Contribution System shall be transferred to the Defined Benefit 1.75 Retirement System, in accordance with § 8504, effective as of January 1, 2018. Finally, with respect to a member in the Defined Contribution System who is receiving disability benefits and who becomes a member in the Defined Benefit 1.75 Retirement System upon reemployment or retirement in accordance with § 8502(b)(5), the member’s account balance under the Defined Contribution System shall be transferred to the Defined Benefit 1.75 Retirement System, in accordance with § 8504, effective as of the effective date of the member’s membership in the Defined Benefit 1.75 Plan as described in § 8502(b)(5). In these cases, as of the effective date of the transfer of a member’s account from the Defined Contribution System to the Defined Benefit 1.75 Retirement System, such member’s membership in the Defined Contribution System shall terminate. The transfer of a member’s account from the Defined Contribution System to the Defined Benefit 1.75 Retirement System attributable to the transfer of Member Contributions pursuant to § 8208, and Member’s Contribution Reserve and Transfer Incentive Reserve pursuant to § 8209.1(a) and (b), shall be made in accordance with § 8504. The transfer of a member’s Employer Account to the Defined Benefit 1.75 Retirement System attributable to the transfer of Employer’s Contributions pursuant to § 8209(a) (whether the accounts reflecting such employer contributions are vested or unvested, and inclusive of unvested suspense accounts) shall be pursuant to § 8503(d)(2). Any Ancillary Benefit Account maintained under the Defined Contribution System on behalf of the member as described in § 8201(n) shall not be subject to transfer. (2) Defined Contribution System Credited Service.
Effective as of the effective date of the transfer of the member’s account from the Defined Contribution System to the Retirement Fund under this § 8503(d), the service for which the member’s account is credited with employer contributions under the Defined Contribution System (including the service under the Retirement Fund attributable to the employee contributions previously transferred from the Retirement Fund to the Defined Contribution System pursuant to the member’s election under § 8207), shall be credited to the member for purposes of determining the member’s years of credited service and basic retirement annuity under the Retirement Fund in accordance with § 8503. In connection with credited service transferred from the Defined Contribution System to the Retirement Fund under § 8503(d)(1), a member’s § 8209(a) Employer’s Contribution account (whether the accounts reflecting such employer contributions are vested or unvested, and inclusive of unvested suspense accounts) shall be transferred to the member’s § 8164(b) account (Employer’s Contribution Reserve) under the Retirement Fund. (3) Credited Service for Repayment of Defined Contribution System Contributions. In the event that the Defined Benefit 1.75 Plan member’s account under the Defined Contribution System was previously reduced by the member’s withdrawal of an amount from the member’s account that is attributable to contributions during the member’s active participation in the Defined Contribution System, the member shall be allowed to repay to the Retirement Fund the amount of the withdrawal, adjusted for interest during the period commencing on the date of the withdrawal and ending on the date of the repayment, which repayment must be made in any combination of the following: a single payment, transfer of Excess Account Balance, or installments to the Retirement Fund in accordance with Article 1 of Chapter 3, Division 1, Title 2 of the Guam Administrative Rules, as amended. If such withdrawn portion of the member’s account is not timely repaid in full to the Retirement Fund, then the service that otherwise would be credited under the Retirement Fund for service during the member’s active participation in the Defined Contribution System shall be reduced to account for the service to which the withdrawal relates, in accordance with rules, regulations, and procedures as promulgated or approved by the Board. (4) Credited Service for Repayment of Prior Retirement Fund Contributions. In the event that the member’s account under the Defined Contribution System was previously reduced by the member’s withdrawal of an amount from the member’s account that is attributable to the prior transfer of employee contributions from the Retirement Fund to the Defined Contribution System (specifically, considering only the portion of the account derived from the transferred Member’s Contribution Reserve, and not the Employer’s Contribution Reserve) pursuant to the member’s election under § 8207, the member shall be allowed to repay to the Retirement Fund the amount of the withdrawal, adjusted for interest during the period commencing on the date of the withdrawal and ending on the date of the repayment, which repayment must be made in any combination of the following: a single payment, transfer of Excess Account Balance, or installments to the Retirement Fund in accordance with Article 1 of Chapter 3, Division 1, Title 2 of the Guam Administrative Rules, as amended.
If such withdrawn portion of the member’s account is not timely repaid in full to the Retirement Fund, then the service that otherwise would be credited under the Retirement Fund for service during the member’s prior participation in the Retirement Fund shall be reduced to account for the service to which the withdrawal relates in accordance with rules, regulations, and procedures as may be promulgated or approved by the Board. § 8504. Transfer of Member Accounts from Defined Contribution System; Transfer of Excess Account Balance, If Any. (a) In connection with credited service transferred from the Defined Contribution Retirement System to the Retirement Fund under § 8503(d)(1), a member’s Member Account balances in the Defined Contribution System shall be transferred to the Defined Benefit 1.75 Retirement System in accordance with this Section. (b) An amount equal to the lesser of a member’s: (1) § 8208 and § 8209.1(a) account balances, if any; or (2) actuarial cost of credited service, shall be transferred to the Defined Benefit 1.75 Retirement System as set forth in this § 8504(b). The actuarial cost of credited service for a member transferring to the Defined Benefit 1.75 Plan shall be funded first from the member’s § 8209.1(a) account, if any, and applied to the member’s § 8164(a) Member’s Contribution Reserve; any further amounts needed to fund up to the actuarial cost of credited service shall be funded next from the member’s § 8208 pre-tax account and applied to a pre-tax subaccount in the member’s § 8164(a) Member’s Contribution Reserve. Excess Account Balance, if any, attributable to a member’s § 8209.1(a) account shall be transferred to the member’s post-tax account in the Deferred Compensation Program, except for amounts designated by the member to be applied to repay prior partial withdrawals from the member’s account in accordance with § 8503(d)(4). Excess Account Balance, if any, attributable to a member’s § 8208 Member Contribution Account shall be transferred to the member’s pre-tax account in the Deferred Compensation Program, except for amounts designated by the member to be applied to repay prior partial withdrawals from the member’s account in accordance with § 8503(d)(3). (c) An amount equal to the member’s § 8209.1(b) account (Rollover Employer’s Contributions from § 8164(b), also referred to as the Transfer Incentive Reserve) shall be transferred to the member’s pre-tax account in the Deferred Compensation Program, except for amounts designated by the member to be applied to repay prior partial withdrawals from the member’s account in accordance with § 8503(d)(3). § 8505. Defined Benefit 1.75 Plan Member Disability Retirement Annuity. (a) Amount of Disability Retirement Annuity. Notwithstanding the otherwise applicable formula under § 8125 or other successor provision, the amount of basic disability retirement annuity for a Defined Benefit 1.75 Plan member under the Retirement Fund shall be fifty percent (50%) of average annual salary based on the average three (3) highest annual salaries received by the member during that member’s years of credited service. For purposes of defining “salary” and “average annual salary” under § 8104(i) and (j), respectively, with respect to the determination of the basic disability retirement annuity payable to a Defined Benefit 1.75 Plan member, the term “salary” shall mean the member’s base salary excluding all non-base compensation. (b) Automatic Increases in Annuity for Disability Retirement Annuity.
Any Defined Benefit 1.75 Plan member receiving a recomputed disability retirement annuity under the Retirement Fund shall receive each year on the anniversary date of the member’s retirement or entitlement, an automatic sliding scale increase in the member’s annual basic disability retirement annuity as applicable under the Retirement Fund pursuant to § 8129 or other successor provision. § 8506. Defined Benefit 1.75 Plan Member Death and Survivors Benefits. The death benefit and survivor annuity provisions in connection with a member’s death under §§ 8131 through 8135 or other successor provisions shall be applicable to Defined Benefit 1.75 Plan members in a manner no different than the application to members who are not Defined Benefit 1.75 Plan members. § 8507. Defined Benefit 1.75 Plan Member Contributions to Fund. The member contribution provisions under § 8136 or other successor provision shall be applicable to Defined Benefit 1.75 Plan members in a manner no different than the application to members who are not Defined Benefit 1.75 Plan members. However, notwithstanding that the contributions by Defined Benefit 1.75 Plan members are designated as member contributions and shall be administered as member contributions under § 8136, such contributions shall be on a mandatory basis deducted from the member’s base salary and paid by the employer in lieu of contributions by the member, and shall constitute pre-tax “pick-up” employer contributions for purposes of determining the income tax treatment of such contributions under Section 414(h) of the United States Internal Revenue Code. § 8508. Deferred Compensation Program. In accordance with § 8308, the employer shall automatically enroll members and may deduct and credit Defined Benefit 1.75 Plan member contributions under the Deferred Compensation Program in an amount equal to one percent (1%) of the member’s base salary. However, notwithstanding that the contributions by Defined Benefit 1.75 Plan members are designated and shall be administered as member contributions under § 8308, such contributions shall be on a voluntary basis deducted from the member’s base salary and paid by the employer in lieu of contributions by the member, and shall constitute pre-tax “pick-up” employer contributions for purposes of determining the income tax treatment of such contributions under Section 414(h) of the United States Internal Revenue Code.” Section 4. Rules and Regulations. No later than March 31, 2017, the Board of Trustees of the Retirement Fund shall approve such plan documents, rules, regulations, administrative procedures and forms that it may deem necessary and appropriate to implement the Defined Benefit 1.75 Retirement System established by this Section. Section 5. Framework for the Creation, Approval, and Adoption of a Cash Balance Plan to be known as the Guam Retirement Security Plan (GRSP). No later than March 31, 2017, the Board of Trustees of the Retirement Fund shall create, approve, and adopt a Cash Balance Plan to be known as the Guam Retirement Security Plan (GRSP), plan documents, rules, regulations, administrative procedures, and forms that it may deem necessary and appropriate to implement the GRSP pursuant to the Administrative Adjudication Act in accordance with the following provisions: (1) Membership in Guam Retirement Security Plan. (a) Guam Retirement Security Plan. 
Upon creation, approval, and adoption of a GRSP by the Board of Trustees of the Retirement Fund and beginning on or after April 1, 2017, the government of Guam GRSP shall be established in accordance with the regulations created, adopted, and approved by the Board of Trustees of the Retirement Fund and shall be the single retirement program for all new employees whose employment commences on or after December 31, 2017, unless such employee elects to participate in the Defined Contribution Retirement System within sixty (60) days of the employee’s hire date. Members of the Defined Contribution System whose employment continues beyond June 30, 2017 shall continue to contribute to and participate in the Defined Contribution System without change in provisions or benefits, except for members who elect to become GRSP members or as provided from time to time under the Defined Contribution System. (b) Membership in Guam Retirement Security Plan. (i) New Employees. All new employees whose employment commences between April 1, 2017 and December 31, 2017, and who satisfy the eligibility requirements for membership in accordance with the GRSP regulations as created, approved, and adopted by the Board of Trustees of the Retirement Fund, may participate in the Retirement Fund as GRSP members in accordance with such eligibility requirements. Beginning January 1, 2018, all new employees whose employment commences on or after January 1, 2018 are automatically enrolled in the GRSP retirement program unless the employee elects to participate in the Defined Contribution System within sixty (60) days from the employee’s date of hire. New employees electing to participate in the Defined Contribution Retirement System shall contribute to and participate in the Defined Contribution Retirement System as provided in Article 2 of Title 4, Guam Code Annotated. No additional new employees shall be admitted to the Existing Retirement System on or after December 31, 2017, except as provided from time to time under the Existing Retirement System. Members of the Existing Retirement System whose employment continues beyond December 31, 2017, shall continue to contribute and participate in the Existing Retirement System without change in provision or benefits, except as provided from time to time under the Existing Retirement System. Members of the Defined Contribution System whose employment continues beyond December 31, 2017, shall continue to contribute and participate in the Defined Contribution System without change in provisions or benefits, except as provided from time to time under the Defined Contribution System. (ii) Guam Retirement Security Plan Election by Current Employees in Defined Contribution System. All employees who are members in the Defined Contribution System on March 31, 2017, shall be eligible to elect on a voluntary basis to become GRSP members effective as of January 1, 2018, and to terminate active participation in the Defined Contribution System as of such date, by making the appropriate election with the GRSP in the form and manner as determined by the Board during the election period commencing on April 1, 2017 and ending on September 30, 2017. After having made such election to become a GRSP member, the member may not change such election or again become an active member of the Defined Contribution System. 
The failure to make such election shall be deemed to constitute an election by the member to remain as an active member under the Defined Contribution System or the Defined Benefit 1.75 Retirement System. Such election shall not apply to members in the Defined Contribution System who have retired or otherwise terminated employment from government service and who are not employed by the government of Guam at the time of the election and as of the January 1, 2018, effective date of participation in the GRSP. (iii) Reemployment of Existing Retirement System Member. Any employee who is a member in the Existing Retirement System, who leaves government service and who is later reemployed prior to December 31, 2017 by the government of Guam, shall become an active member in the Existing Retirement System upon reemployment if such employee has not received a refund of contributions resulting in ineligibility for membership under § 8130(b), and if such employee otherwise meets the eligibility requirements under the Existing Retirement System. However, if such employee has received a refund of contributions under § 8130, and if such employee otherwise meets the eligibility requirements for membership, then such employee shall become an active member in the GRSP upon reemployment. (iv) Reemployment of Defined Contribution System Member. Any employee who is a member maintaining an interest in the Defined Contribution System, who leaves government service and who is later reemployed prior to September 30, 2017, by the government of Guam, shall become an active member in the Defined Contribution System upon reemployment if such employee otherwise meets the eligibility requirements under the Defined Contribution System. (A) However, if such a member is reemployed during the period commencing on April 1, 2017, and ending on September 30, 2017, then: (aa) the member shall be eligible to elect on a voluntary basis to become a member of the GRSP if such member otherwise meets the eligibility requirements for membership; (bb) the election period for this election shall be the period commencing on April 1, 2017, and ending on September 30, 2017 (or October 31, 2017, for members reemployed during the month of September 2017), and the effective date of the member’s membership in the Defined Benefit 1.75 Retirement System shall be January 1, 2018; and (cc) the member’s account under the Defined Contribution System shall be subject to transfer to the GRSP in accordance with the regulations created, approved, and adopted by the Board of Trustees of the Retirement Fund. (B) Further, if such a member is reemployed after September 30, 2017: (aa) the member shall be eligible to elect on a voluntary basis to become a member of the GRSP if such member otherwise meets the eligibility requirements for membership; (bb) the election period for such election shall be the thirty (30) day period beginning on the date of reemployment, and the effective date of the member’s membership in the GRSP shall be the later of January 1, 2018 or the date of reemployment; and (cc) the member’s account under the Defined Contribution System shall not be subject to transfer to the Retirement Fund. (v) Reemployment of Defined Contribution System Member on Disability. 
Notwithstanding the above Section 1(b)(ii), a member of the Defined Contribution System who had incurred a disability and at any time been eligible to receive any benefits provided under any long-term disability insurance policy issued pursuant to § 8213 or Article 4 of this Title shall not be eligible for membership under the GRSP upon reemployment, but such member who satisfies the eligibility requirements for membership under §§ 8206 and 8207 at such time shall participate in the Defined Contribution System in accordance with such eligibility requirements. However, in the case of a member of the Defined Contribution System who is receiving disability benefits under § 8213 or Article 4 of Title 4 of the Guam Code Annotated during the period commencing April 1, 2017, and ending on September 30, 2017, such member shall be eligible to elect on a voluntary basis to become a member of the GRSP in the event of the member’s reemployment or retirement on or after January 1, 2018. For this purpose, the election period for this election shall be the period commencing on April 1, 2017, and ending on September 30, 2017 (or October 31, 2017, for members who commence receiving disability benefits during the month of September 2017), and the effective date of the member’s membership in the GRSP shall be the later of: (A) January 1, 2018, or (B) the date of the member’s reemployment or retirement. (vi) Reemployment of government of Guam Retiree. Any employee who retired under the Existing Retirement System, the Defined Contribution System, the Defined Benefit 1.75 Retirement System, or the GRSP shall participate in the Defined Contribution Plan upon reemployment. (2) Guam Retirement Security Plan Member Framework. (a) GRSP Member Contributions to Fund. All contributions by GRSP members shall be mandatory and equal to six and two tenths percent (6.2%) of base pay. Such reductions from base pay, although designated as member contributions, shall be deducted by the employer at the normal payroll intervals, shall be paid by the employer in lieu of contributions by the member, and shall be remitted within five (5) working days to the Retirement Fund. The employer shall deduct the member’s mandatory contributions required by this Section from member’s base pay on or after the first payroll interval following the latest of (i) the enactment of this Act, (ii) January 1, 2017, or (iii) a GRSP member’s transfer to the GRSP pursuant to the created, approved, and adopted regulations by the Board of Trustees of the Retirement Fund and contributions so deducted shall be treated as employer contributions in determining federal tax treatment under Section 414(h) of the United States Internal Revenue Code. The employer shall contribute or pay these member deducted contributions from the same source of funds that is used in paying base pay to the member. Member contributions deducted shall be treated for all purposes of the government of Guam Retirement Fund GRSP in the same manner and to the same extent as member contributions made prior to the date of deduction. All member contributions shall be immediately credited to member GRSP accounts pursuant to the created, adopted, and approved GRSP regulations by the Board of Trustees of the Retirement Fund. (b) Guam Retirement Security Plan Employer Contribution and Pay Credits. 
Each employer shall, pursuant to Section 5(2)(a), make a contribution to each GRSP member’s account pursuant to the created, adopted, and approved GRSP regulations by the Board of Trustees of the Retirement Fund that is equal to six and two tenths percent (6.2%) of such member’s base pay. In addition, each participating employer shall match the first six and two tenths percent (6.2%) of each member’s base pay, which shall be known as a “pay credit,” and shall be paid to the Fund and credited to such member’s GRSP account. Each participating employer shall ensure that its employer or member contributions are made within five (5) working days. In the case of an officer or an employee of the government of Guam, any unpaid employer contribution shall be a government debt, contracted as a result of a casual deficit in the government’s revenues, to be accorded preferred status over other expenditures. (c) Interest Credit. (i) The GRSP shall include a fixed “interest credit” of four percent (4%) annually toward GRSP member accounts, and such interest credit requirements shall be in accordance with the Internal Revenue Code requirements for a Cash Balance Plan to be a qualified retirement plan. (ii) The GRSP shall permit gains in excess of the “interest credit” of four percent (4%) to offset losses, in accordance with the Internal Revenue Code for requirements for a Cash Balance Plan to be a qualified retirement plan. (d) Rollover Authorization. The Board of Trustees of the Retirement Fund shall include a roll over authorization for GRSP member and employer contributions to either the GRSP or the Deferred Compensation account in the creation, adoption, and approval of such regulations. Such rollover authorization shall be in accordance with the Internal Revenue Code requirements for a Cash Balance Plan to be a qualified retirement plan. (e) Vesting Schedule. The Board of Trustees of the Retirement Fund shall include a vesting schedule that details vesting for contributions, to include but not be limited to members and employers contributions and interest credits. Such vesting schedule shall be in accordance with the Internal Revenue Code requirements for a Cash Balance Plan to be a qualified retirement plan. (3) The Board of Trustees of the Retirement Fund shall be authorized to ensure that any GRSP membership and framework requirements identified in this Section shall be subject to change at the Board’s discretion, only if such membership and framework requirements do not conform to Internal Revenue Service regulations for Cash Balance Plan qualifications. Section 6. Social Security Option. If the government of Guam is authorized to extend Social Security coverage to government of Guam employees on a prospective basis, whether through one (1) or several voluntary agreements or through a specific statutory provision authorizing such extension, then all employees hired on or after the effective date or dates from which such coverage is extended shall be enrolled into Social Security and shall not be eligible for the Defined Benefit 1.75 Retirement System or the Guam Retirement Security Plan. Section 7. § 8208 of Article 2, Chapter 8, Title 4, Guam Code Annotated is hereby amended to read: “§ 8208. Members’ Contributions. All contributions by the members shall be mandatory. From the operative date through December 31, 2017, contributions shall be equal to five percent (5%) of base pay. On and after January 1, 2018, contributions shall be equal to six and two tenths percent (6.2%) of base pay. 
Such reductions from base pay, although designated as member contributions, shall be deducted by the employer at the normal payroll intervals, shall be paid by the employer in lieu of contributions by the member, and shall be remitted within five working days to the insurance, annuity, mutual fund, or other qualified company or companies designated by the board to administer the operations of the Defined Contribution Retirement System. The employer shall deduct the member’s mandatory contributions required by this Section from member’s base pay on or after the first payroll interval following the latest of (i) the enactment of this Article, (ii) October 1, 1995, or (iii) a member’s transfer to the Defined Contribution Retirement System pursuant to § 8207, and the contributions so deducted shall be treated as employer contributions in determining federal tax treatment under Section 414(h) of the United States Internal Revenue Code. The employer shall contribute or pay these member deducted contributions from the same source of funds which is used in paying base pay to the member. Member contributions deducted shall be treated for all purposes of the government of Guam Retirement Fund Defined Contribution Retirement System in the same manner and to the same extent as member contributions made prior to the date of deduction. All member contributions shall be immediately credited to an account or accounts established for the benefit of the member under a trust agreement. A summary plan description shall be issued to each member setting forth the terms and conditions under which contributions are received, and the investment and retirement options available to the member. The board shall promulgate within ninety (90) days after enactment of the law, pursuant to § 8205 of this Article, rules defining the minimum requirements for the investment and retirement options, including but not limited to: 1. Lump sum distributions of members’ accounts which do not exceed an amount established by the board; 2. Joint and Survivor annuities; 3. Other annuity forms; 4. Variable annuities which gradually increase monthly retirement payments; provided, that said increased payments are funded solely by the existing current value of the member’s account at the time the member’s retirement payments commence and not, to any extent, in a manner which would require additional employer or member contributions to any member’s account after retirement or after the cessation of employment; and 5. The instances in which, if any, distributions or loans can be made from account balances prior to the member having attained the age of fifty-five.” Section 8. § 8209(a) of Article 2, Chapter 8, Title 4, Guam Code Annotated is hereby amended to read: “§ 8209. Employer Contributions. (a) Each employer shall, pursuant to § 8208, make a contribution to each member’s account with respect to each member whose employment commenced on or after October 1, 1995, or who transfers to the Defined Contribution Retirement System pursuant to § 8207, which is equal to five percent (5%) of such member’s base pay. In addition, each participating employer shall match the first five percent (5%) of each member’s base pay. On and after January 1, 2018, these contributions herein shall be increased to six and two tenths percent (6.2%) of such member’s base pay. The amounts contributed herein shall vest in accordance with the vesting schedule set forth in § 8210(c).” Section 9.
Extension of Amortization Period (a) The first sentence of § 8137(b) of Article 1, Chapter 8, Title 4, Guam Code Annotated is hereby amended to read: “(b) Government Unfunded Liability Amortization Cost. An amount resulting from the application of a rate percent of total salaries of all members which will amortize the remaining liability for prior service over a period of eighty-two (82) years following May 1, 1951.” (b) This Section 9 shall be effective January 1, 2018. Section 10. Effective Date. Except as otherwise provided herein, this Act shall take effect upon enactment. Section 11. Severability. If any provision of this Act or its application to any person or circumstance is found to be invalid or contrary to law, such invalidity shall not affect other provisions or applications of this Act that can be given effect without the invalid provisions or applications, and to this end the provisions of this Act are severable.
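Purely as an editorial illustration of the annuity arithmetic in §§ 8503(a) and 8503(c) above, the following Python sketch computes a Defined Benefit 1.75 basic retirement annuity. The function name, the example figures, and the ordering of the cap, floor, and early-retirement reduction are assumptions for illustration only; they are not statutory text, and the Act does not state how the floor interacts with the early-retirement reduction.

```python
def db_175_annuity(avg_annual_salary, years_of_service, months_under_62=0):
    """Sketch of the §§ 8503(a)/(c) arithmetic (illustrative only).

    1.75% of average annual salary per year of credited service,
    capped at 85% of average annual salary, with a $1,200/year floor;
    an early retiree (age 55-61 with 25+ years of service) is reduced
    by one half of one percent per month under age 62 at retirement.
    """
    annuity = 0.0175 * avg_annual_salary * years_of_service
    annuity = min(annuity, 0.85 * avg_annual_salary)  # 85% cap
    annuity = max(annuity, 1200.0)                    # $1,200/year floor
    return annuity * (1 - 0.005 * months_under_62)    # early-retirement reduction

# e.g. a $50,000 average salary, 30 years of service, retiring at age 60:
# 0.0175 * 50,000 * 30 = $26,250, reduced 12% for 24 months -> $23,100/year.
print(db_175_annuity(50_000, 30, months_under_62=24))
```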
Structure and mechanical properties of reactive sputtering CrSiN films Guangan Zhang\textsuperscript{a,b,*}, Liping Wang\textsuperscript{a}, S.C. Wang\textsuperscript{c}, Pengxun Yan\textsuperscript{b}, Qunji Xue\textsuperscript{a} \textsuperscript{a} State Key Laboratory of Solid Lubrication, Lanzhou Institute of Chemical Physics, Chinese Academy of Sciences, Lanzhou 730000, PR China \textsuperscript{b} School of Physics Science \& Technology, Lanzhou University, Lanzhou 730000, PR China \textsuperscript{c} National Centre for Advanced Tribology at Southampton (nCATS), School of Engineering Sciences, University of Southampton, SO17 1BJ, UK \textbf{Article history:} Received 27 May 2008 Received in revised form 16 November 2008 Accepted 17 November 2008 Available online 21 November 2008 \textbf{PACS:} 62.20.–x 62.20.Op 68.35.Gy 81.40.Pq 87.15.La \textbf{Keywords:} CrSiN films Magnetron sputtering Microstructure Mechanical properties \textbf{Abstract} CrSiN films with various Si contents were deposited by reactive magnetron sputtering using the co-deposition of Cr and Si targets in the presence of a reactive gas mixture. Comparative studies on the microstructure and mechanical properties of CrN and CrSiN films with various Si contents were carried out. The structure of the CrSiN films was found to change from crystalline to amorphous as the Si content increases. An amorphous Si$_3$N$_4$ phase was suggested to exist in the CrSiN films. The film fracture morphology was observed to change from a continuous columnar structure, through a granular structure, to a glassy-like morphology with increasing silicon content. Two hardness peaks of the films as a function of Si content are discussed. © 2008 Published by Elsevier B.V. \section*{1. Introduction} Although transitional metal nitride films, such as CrN and TiN, have attracted much interest due to their high hardness, high melting point and high chemical stability [1–3], their potential properties have not been fully achieved and their applications are limited. To explore these potential mechanical and especially high-temperature properties, many efforts have focused on the development of complex hard film materials. Recently, it was found that depositing multilayer or nanocomposite films improves the mechanical properties [4–8]. Veprek [6] first reported TiSiN films with hardness exceeding 70 GPa prepared by chemical vapor deposition (CVD). The addition of Si to TiN films has been shown to refine the grains through the formation of a nanocomposite structure of nanocrystalline (nc) TiN grains in an amorphous (a) matrix of Si$_3$N$_4$ (nc-TiN/a-Si$_3$N$_4$) [6,8–10]. Subsequent investigations followed on nc-MeN/a-Si$_3$N$_4$ (Me = Cr, Zr, Nb, Mo) nanocomposites [11,7,12–14]. This composite structure of a thin amorphous matrix surrounding the crystallites was suggested to hinder crack formation and propagation. However, there is still dispute about the effects of Si addition to MeN on the bonding structure, crystalline structure and texture of the nanocomposite MeSiN films. Previous investigations of CrSiN films [11,7,15–19] lack a systematic study of the effect of Si addition on their microstructural and mechanical properties.
The knowledge and characterization of these parameters are important for understanding both the process involved in the preparation and the future behavior of such films. In this paper, CrSiN films were synthesized using medium frequency reactive magnetron sputtering. The main objective was to investigate the effect of incorporated Si on the structure modification and mechanical properties of CrN films. \section*{2. Experimental process} The CrN films with various Si contents were deposited on p-type silicon (1 1 1) wafers using medium frequency magnetron sputtering. The frequency of the power supply was fixed at 20 kHz throughout the deposition process. The experimental equipment has been described in Ref. [20], so only a brief description of the experiment is given here. A pair of planar magnetron Cr (99.8 wt.% purity) and Si (99.8 wt.% purity) targets with a size of 280 mm × 80 mm × 8 mm was set in the cylindrical vacuum chamber wall. The sputtering chamber was evacuated to a pressure of $4.0 \times 10^{-3}$ Pa by a turbomolecular pump and then the sputtering gas was introduced. The Si and mirror-polished copper substrates were cleaned ultrasonically in acetone followed by de-ionized water. Then the Si substrates were glow-discharge cleaned for 10 min at 1 Pa argon pressure at a substrate bias of −700 V. The film deposition process was carried out for 2 h at a 40 sccm Ar flow rate and a 160 sccm N$_2$ flow rate, with the substrate bias at −100 V and the target power at 1.1 kW (465–470 V × 2.4 A). To obtain different Si/(Cr + Si) ratios of the CrSiN films, the specimens were placed at varying intervals between the Cr and Si targets. CrN films without Si were also deposited for reference. Film crystallinity and phase structure were characterized using grazing incidence X-ray diffraction (GIXRD). A Philips X’Pert X-ray diffractometer with Cu Kα radiation was employed to test the thin films. The scanning was performed from 20° to 90° at an incident angle of 1°. The Si/(Cr + Si) ratios of the CrSiN films deposited on the copper wafer surface were determined by energy dispersive X-ray spectroscopy (EDS) analysis in a JSM-5600 LV scanning electron microscope (SEM). X-ray photoelectron spectroscopy (XPS) analysis was carried out on a PerkinElmer PHI-5702 multifunctional photoelectron spectrometer with Al Kα radiation (1486.6 eV). The XPS spectra were collected in a constant analyzer energy mode, at a chamber pressure of $10^{-8}$ Pa and a pass energy of 29.4 eV, with 0.125 eV/step. Fourier transform infrared (FTIR) spectra of the films were recorded on a Bruker IFS66V spectrometer. In transmission mode, each spectrum was collected over 500 scans at a resolution of 4 cm$^{-1}$. Field emission SEM (Hitachi, S-4800) was utilized to observe the cross-sectional microstructure. The hardness of the films was determined by a nano-indenter (MTS Systems Corporation) using a Berkovich diamond tip and the continuous stiffness option, with the maximum indentation depth within 100 nm (less than 10% of the total film thickness, to minimize the substrate contribution). Five replicate indentations were made for each film sample and the hardness was calculated from the load–unload curves. The Oliver and Pharr analysis method [21] was employed to calculate the hardness values. \section*{3. Results and discussion} \subsection*{3.1. Synthesis and characterization of CrSiN films} CrSiN films with six different Si/(Cr + Si) ratios from 8.4% to 47.0% were successfully deposited, in addition to the reference CrN film.
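For reference, the Oliver–Pharr reduction [21] used in Section 2 can be sketched as follows. The ideal Berkovich area function $A_c = 24.56\,h_c^2$ and the geometry constant $\varepsilon = 0.75$ are the standard textbook values, assumed here for illustration rather than taken from this work; the function name and the example numbers are likewise hypothetical.

```python
import math

def oliver_pharr(h_max, P_max, S, eps=0.75):
    """Hardness and reduced modulus from one unloading curve (a sketch).

    h_max : maximum indentation depth (m)
    P_max : peak load (N)
    S     : unloading stiffness dP/dh evaluated at P_max (N/m)
    eps   : indenter geometry constant (0.75 for a Berkovich tip)
    """
    h_c = h_max - eps * P_max / S                 # contact depth
    A_c = 24.56 * h_c ** 2                        # ideal Berkovich area function
    hardness = P_max / A_c                        # hardness, Pa
    E_r = 0.5 * math.sqrt(math.pi) * S / math.sqrt(A_c)  # reduced modulus, Pa
    return hardness, E_r

# Illustrative numbers only: a 2.4 mN peak load at 100 nm depth with
# S = 50 kN/m gives roughly 24 GPa hardness and 140 GPa reduced modulus.
print(oliver_pharr(h_max=100e-9, P_max=2.4e-3, S=50e3))
```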
The relative compositions of the CrSiN films determined by EDS are shown in Table 1. Fig. 1 shows the XRD patterns of the CrSiN films deposited on silicon wafers with various Si contents, as well as the XRD pattern of the CrN film for comparison. The XRD peaks of the CrN film are consistent with the diffraction peaks of the cubic NaCl-type structure. For the CrSiN films up to 36.8 at.% Si, the XRD peaks are similar to those of the CrN film and can be well indexed using the cubic NaCl-type structure. It is interesting to note that there are no diffraction peaks corresponding to silicon-containing compounds (e.g. CrSi$_2$, Si and Si$_3$N$_4$), which indicates that the Si$_3$N$_4$ may exist in an amorphous form. As the Si content in the CrSiN films increased, the diffraction peak intensities of the CrN phase gradually decreased. The CrN (1 1 1) diffraction peaks shifted to lower angles as the Si contents increased up to 36.8 at.% Si/(Cr + Si) (Table 1). This could be because the added Si atoms were dissolved into the CrN lattice [7]. Although the solubility limit of Si in the CrN phase was supposed to be very small, a certain amount of Si atoms could be dissolved into the CrN crystal because the physical vapor deposition process was under non-equilibrium status. This might also increase the intrinsic stress of the films. In addition, a peak broadening phenomenon was also observed with the increase of Si content in the CrN coatings. Such XRD peak broadening was believed to originate from the diminution of the grain size and/or the residual stress induced in the crystal lattice. The decrease of the crystallite size with increasing silicon content was attributed to the formation of a stable nanostructure, composed of CrN crystals and an amorphous Si$_3$N$_4$ matrix [6,10]. For the CrSiN film of 47.0 at.% Si, the XRD peaks disappear because the CrN crystals become too tiny to detect or the whole film becomes amorphous (Fig. 1). It seems that the existence of a large content of amorphous Si$_3$N$_4$ suppresses the formation of crystalline CrN, and thus the peaks of the CrN crystals disappear. The presence of the Si$_3$N$_4$ phase has been confirmed by infrared spectroscopy. Fig. 2 shows the FTIR spectra of CrSiN films with various Si contents. The CrN film without Si has a broad absorption band located at 400–600 cm$^{-1}$ only, which is due to the vibration mode of the Cr−N bond [22]. In contrast, the FTIR spectra of the CrSiN films show not only a broad absorption band located at 400–600 cm$^{-1}$, but also a broad absorption band at 700–1100 cm$^{-1}$, which can be identified with different stretching vibration modes of the Si−N bonds. The deconvolution of this absorption band gave three Gaussian peaks centered at 800, 840 and 970 cm$^{-1}$ (Fig. 3), which were attributed to various vibrational modes of a-Si$_3$N$_4$ [23]. The intensity of the Si–N absorption peaks becomes much stronger compared to the intensity of the Cr–N absorption peaks with the increase of the Si content in the films.
These results further confirm the formation of a large content of a-Si$_3$N$_4$ matrix in the films with the increase of Si content. In order to further clarify the bonding status of the amorphous phase in the CrSiN films, XPS analyses were performed. The C 1s peak of adventitious carbon (binding energy = 284.8 eV) was taken as the reference in calibration of the binding energy. Fig. 4a shows the Si 2p XPS spectra of the CrSiN films with various Si contents. A peak at 101.8 eV, which was in good agreement with that of the Si$_3$N$_4$ compound [7], was observed. The peak intensities increased with the increase of Si content. No other peaks corresponding to Si–Si bonds (99.28 eV) or Cr–Si bonds (99.56 eV) were observed. However, Si$_3$N$_4$ peaks were not found in the XRD analysis, as shown in Fig. 1, which indicates that the Si$_3$N$_4$ in the films is amorphous. This is in good agreement with the reported results in Refs. [6,7,15], where Si$_3$N$_4$ compounds are in an amorphous form in CrSiN or TiSiN films. On the other hand, the peak at 575.8 eV was found to be that of stoichiometric CrN [24], and its intensity decreased with the increase of Si content (Fig. 4b). It is reasonable to propose that the CrSiN films have a nanocomposite structure consisting of nanocrystalline CrN in an amorphous Si$_3$N$_4$ matrix. In order to investigate the influence of Si doping on the microstructure of the CrSiN films, the cross-sectional profiles of the films were observed by FESEM. The cross-sectional micrographs of the CrSiN films are shown in Fig. 5. The CrN film exhibited a continuous columnar structure, which consisted of columnar grains parallel to the growth direction, on the order of about 100 nm (Fig. 5a). The CrSiN film with a low Si/(Cr + Si) ratio (8.4%) (Fig. 5b) also exhibited a columnar structure. However, it is clear that column propagation was interrupted by the further increase of Si, and a granular structure can be observed in the CrSiN film with a Si/(Cr + Si) ratio of 25.3% (Fig. 5c). As the Si/(Cr + Si) ratio increased to 47.0%, neither a columnar structure nor a granular structure could be seen, and the coating exhibited a glassy-like morphology. \subsection*{3.2. Mechanical evaluation of CrSiN films} The hardness of the CrSiN films was measured by the nanoindentation test. The hardness of the CrN film produced under the deposition conditions described in this paper was about 13 GPa. Fig. 6 shows the curves of the hardness and Young’s modulus against the Si/(Cr + Si) ratio of the CrSiN films. As the Si content increased, the hardness of the CrSiN films increased from ~13 GPa for CrN to a first peak with a maximum value of approximately 25 GPa at a Si/(Cr + Si) ratio of 12.6%, and then decreased up to 18 at.% [Si/(Cr + Si)], which was attributed to the increase of the fraction of the amorphous SiN$_x$ phase in the coating. However, with the further increase of the Si content, the hardness increased again and a second peak appeared at a Si/(Cr + Si) ratio of 36.8%. The modulus values of the CrSiN films showed a profile similar to that of the hardness as the Si content increased. The two hardness peaks suggest that different hardening mechanisms might be involved. The first hardness peak of the CrSiN films, at a Si/(Cr + Si) ratio of about 10%, must be related to the refinement of the CrN crystallites.
According to the generic design concept [6,10], this high hardness is based on the combination of the absence of dislocation activity in the small CrN nanocrystals and the blocking of grain boundary sliding by the formation of a strong interface between the two phases. In addition, as seen in Table 1, the CrN (1 1 1) diffraction peaks shifted to lower angles monotonically with the increase of Si content. This illustrated that the films possessed relatively high stresses and that the stress increased monotonically with the Si content [25]. The relatively high stress in the films may also contribute to the high hardness. The second peak may involve the existence of a Si phase and an amorphous CrN phase. At a higher silicon content (Si/(Cr + Si) ratio of 47.0%), the amorphous films usually have a disordered network with low density and could not sustain high loading during indentation, and thus the hardness was low. The hardness of the CrSiN films being lower than the theoretical value resulted from impurities such as oxygen and carbon. In addition, defects such as voids and cracks around or within the crystals, arising from this non-equilibrium PVD method, could propagate along the weak boundaries under the indentation force; thus, a hardness as high as that of the bulk material could not be obtained in the thin films. \section*{4. Conclusions} CrSiN films with various Si contents were deposited by medium frequency reactive magnetron sputtering. The CrN crystallite size decreases and the film structure changes from crystalline to amorphous as the Si content increases. No XRD peaks corresponding to Si$_3$N$_4$ or other silicide compounds were observed. The FTIR and XPS results suggest that the Si$_3$N$_4$ phase exists as an amorphous phase in the films. Cross-sectional images showed that the film morphology changed from a continuous columnar structure, through a granular structure, to a glassy-like morphology with the increase of silicon content. The two hardness peaks of the films were attributed to the nanocomposite structure and the relatively high stress. \section*{Acknowledgments} The authors are grateful to the National Natural Science Foundation of China (No. 50772115 and No. 50823008) for financial support of this research work. \section*{References} [1] C. Rebholz, H. Ziegele, A. Leyland, A. Matthews, Surf. Coat. Technol. 115 (1999) 222. [2] G.A. Zhang, P.X. Yan, P. Wang, Y.M. Chen, J.Y. Zhang, Mater. Sci. Eng. A 460–461 (2007) 301. [3] Y.J. Zhang, P.X. Yan, Z.G. Wu, J.W. Xu, W.W. Zhang, X. Li, W.M. Liu, Q.J. Xue, J. Vac. Sci. Technol. A 22 (6) (2004) 2419–2423. [4] H.C. Batra, S.K. Jain, K.S. Rajam, Vacuum 72 (2004) 241–248. [5] G.A. Zhang, Z.G. Wu, M.X. Wang, X.Y. Fan, J. Wang, P.X. Yan, Appl. Surf. Sci. 253 (2007) 8835–8840. [6] S. Veprek, S. Reiprich, Thin Solid Films 268 (1995) 64. [7] J.H. Park, W.S. Chung, Y.-R. Cho, K.H. Kim, Surf. Coat. Technol. 188–189 (2004) 425. [8] G.-S. Kim, B.-S. Kim, S.-Y. Lee, Surf. Coat. Technol. 200 (2005) 1814–1818. [9] M. Diserens, J. Patscheider, F. Lévy, Surf. Coat. Technol. 108–109 (1998) 241. [10] S. Veprek, J. Vac. Sci. Technol. A 17 (1999) 2401. [11] E. Martinez, R. Sanjines, A. Karimi, J. Esteve, F. Levy, Surf. Coat. Technol. 180–181 (2004) 570–574. [12] D. Pilloud, J.F. Pierson, J. Takadoum, Thin Solid Films 496 (2006) 445–449. [13] Y.S. Dong, Y. Liu, J.W. Dai, G.Y. Li, Appl. Surf. Sci. 252 (2006) 5215–5219. [14] Q. Liu, Q.F. Fang, F.J. Liang, J.X. Wang, J.F. Yang, C. Li, Surf. Coat. Technol. 201 (2006) 1859–1863. [15] H.Y. Lee, W.S. Jung, J.G. Han, S.M. Seo, J.H. Kim, Y.H. Bae, Surf. Coat. Technol. 200 (2005) 1026. [16] J.W. Kim, K.H. Kim, D.B.
Lee, J.J. Moore, Surf. Coat. Technol. 200 (2006) 6702. [17] K. Yamamoto, T. Sato, M. Takeda, Surf. Coat. Technol. 193 (2005) 167. [18] D. Mercs, N. Bonasso, S. Naamane, J.-M. Bordes, C. Coddet, Surf. Coat. Technol. 200 (2005) 403. [19] S.Y. Lee, B.S. Kim, S.D. Kim, G.S. Kim, Y.S. Hong, Thin Solid Films 506–507 (2006) 192. [20] P. Wang, X. Wang, T. Xu, W. Liu, J. Zhang, Thin Solid Films 515 (2007) 6899. [21] W.C. Oliver, G.M. Pharr, J. Mater. Res. 7 (1992) 1564. [22] O. Banakh, P.E. Schmid, R. Sanjines, F. Levy, Surf. Coat. Technol. 163–164 (2003) 57. [23] Y.C. Liu, K. Furukawa, D.W. Gao, H. Nakashima, K. Uchino, K. Muraoka, Appl. Surf. Sci. 121–122 (1997) 233. [24] Q.G. Zhou, X.D. Bai, X.W. Chen, D.Q. Peng, Y.H. Ling, D.R. Wang, Appl. Surf. Sci. 211 (2003) 293. [25] D. Mercs, P. Briois, V. Demange, S. Lamy, C. Coddet, Surf. Coat. Technol. 201 (2007) 6970.
1 Introduction and Summary Recently Zuhair Abdul Ghafoor Al-Johar [12] has directed our attention to a syntactic constraint that is—on the face of it—tighter than NF’s device of stratification\footnote{Tho’ recent work of Nathan Bowler seems to establish that every stratifiable formula is equivalent (modulo some very minor set-theoretic assumptions) to an acyclic formula.}; in this little essay I consider a weakening, namely the generalisation of stratification to stratification mod \(n\). So far the coterie of NFistes has considered neither the possibility that the class of unstratified formulae in the language of set theory might admit any structure or gradation, nor the possibility that failure-of-stratification (which perhaps we can call dysstratification) might come in degrees, nor the possibility that recognition of such degrees might allow one to gain understanding and prove useful facts. So stratification-mod-\(n\) opens a new vein, but not one I’ve been able to get anything really substantial out of. Not so far, anyway . . . mostly just simple-minded generalisations of the standard stratified case—not that those are without merit, since they prepare the ground for subsequent work. It has to be admitted that stratification-mod-$n$ comes across as a highly artificial notion, of interest only to those whose critical faculties have been weakened by prior exposure to the idea of stratification. However there is a nontrivial result that makes essential use of this notion, and we will see it in section 6 where I show (theorem 1) that—for NF—duality for formulae that are stratifiable-mod-2 is consistent relative to AC$_2$. Although I do not believe that this result is best possible it is nevertheless worth mentioning because it is a significant improvement on what has so far been known about duality. I still believe that duality for all formulae is consistent relative to NF—and that we do not need AC$_2$. If we achieve that, stratification-mod-$n$ can perhaps go back to the shades whence it came. But perhaps by then the idea will have thrown useful light on other ideas: we shall see. 2 Stratification Even readers who are familiar with the idea of stratification should probably read this section, since the treatment here is slightly more abstract than the usual one, and is tailored to the developments that follow. Let $\mathcal{L} = \mathcal{L}(\in, =)$ be the language of set theory. We associate to every formula $\phi \in \mathcal{L}$ a digraph as follows. First we identify two variables ‘$v$’ and ‘$v'$’ if $\phi$ contains either of the atomic subformulae ‘$v = v'$’ or ‘$v' = v$’, and so on, recursively. The vertices of the digraph are the equivalence classes of variables in $\phi$, and we place a directed edge from one vertex $v$ to another vertex $v'$ if the atomic formula ‘$v \in v'$’ is a subformula of $\phi$. We call this graph the *derived graph* of $\phi$, and write it $G_\phi$. Our digraphs are allowed to have loops at vertices, and may have multiple edges in the restricted sense that there could be a directed edge from $v$ to $v'$ as well as a directed edge from $v'$ to $v$—but only one in each direction. In a digraph we can have a special notion of a path from $v_1$ to $v_2$ which allows us to “go the wrong way”. The **length** of such a path is computed by adding 1 every time you follow an arrow the right way, and subtracting 1 every time you go the wrong way.
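To make the bookkeeping concrete, here is a minimal Python sketch (an editorial illustration, not part of the original development) of the derived graph and of path-length levelling: it adds 1 when an edge is followed the right way and subtracts 1 when it is followed the wrong way, and succeeds exactly when all paths between two vertices agree in length, either exactly (plain stratification) or mod $n$ (the circular stratifications defined below). Variables linked by ‘=’ are assumed to have been merged into a single vertex already, and the sketch checks only the existence of a homomorphism, ignoring surjectivity onto the $n$-gon.

```python
from collections import deque

def stratify(edges, n=0):
    """Level the derived graph of a formula, or return None.

    `edges` lists one pair (u, v) for each atomic subformula 'u in v'.
    With n = 0 we look for an integer levelling (a homomorphism to the
    Z-gon); with n > 0 we level in the integers mod n (the n-gon).
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append((v, +1))   # follow the arrow: +1
        adj.setdefault(v, []).append((u, -1))   # go the wrong way: -1
    level = {}
    for start in adj:                           # one walk per component
        if start in level:
            continue
        level[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, step in adj[u]:
                x = (level[u] + step) % n if n else level[u] + step
                if v not in level:
                    level[v] = x
                    queue.append(v)
                elif level[v] != x:             # two paths disagree (mod n)
                    return None
    return level

# 'x in y', 'y in z', 'z in x' is unstratifiable, but stratifiable mod 3:
cycle = [("x", "y"), ("y", "z"), ("z", "x")]
assert stratify(cycle) is None and stratify(cycle, n=2) is None
assert stratify(cycle, n=3) == {"x": 0, "y": 1, "z": 2}
```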
For $n \leq \aleph_0$ the $n$-gon $G_n$ is the unique connected digraph with precisely $n$ vertices where every vertex has indegree 1 and outdegree 1. It is a reduct of the integers mod $n$, in that it has successor-mod-$n$ but does not have addition or multiplication. If we are to sensibly describe the circular stratification that is of interest to us here then it is the $n$-gon $G_n$ that we need rather than $\mathbb{Z}/n\mathbb{Z}$, because the additive and multiplicative structures of $\mathbb{Z}/n\mathbb{Z}$ do nothing for us when computing stratifications; they are merely distractions. Unlike the integers-mod-$n$ the $n$-gon $G_n$ is not rigid: its automorphism group is the cyclic group $C_n$. This matters because the set of stratifications-mod-$n$ of a formula $\phi$ is “closed under rotation”, so that if there is one there are $n$. There is a slight problem when $n = 2$, since digraphs cannot normally have multiple edges, but we will tough this one out. And I still entertain hopes that the $\aleph_0$-gon will turn out to have a name already. For the moment let’s call it the $\mathbb{Z}$-gon. The theory of $n$-gons is Horn so the class of $n$-gons is closed under products and homomorphisms. In particular there is a homomorphism $G_m \rightarrow G_n$ whenever $n$ divides $m$, and we will exploit this fact, for example in the proof of remark 1. **Definition 1** A **stratification graph** is one where $$\forall v_1 \forall v_2 (\text{all paths from } v_1 \text{ to } v_2 \text{ are the same length}).$$ A **stratification-mod-$n$ graph** is one with a homomorphism onto the $n$-gon. If we don’t want to mention the ‘$n$’ we will say that a graph that is stratified-mod-$n$ is **circularly stratified**. A formula is **(Crabbé)-elementary** iff all its variables are related by the ancestral of the relation “$v$ and $v'$ occur in an atomic subformula together”. We will tacitly assume in what follows that all our formulae are Crabbé-elementary. Classically (though not constructively) every first-order formula is equivalent to a boolean combination of elementary formulae (and every *closed* first-order formula is equivalent to a boolean combination of *closed* elementary formulae) so there is little cost in making this simplifying assumption. Without it, some of the proofs below would become snarled up in annoying minor details, so I plead for the reader’s indulgence. **Definition 2** A formula is **stratifiable** iff its derived digraph is a stratification graph. A **stratification** of a formula $\phi$ is a homomorphism from the derived graph $G_\phi$ of $\phi$ to the $\mathbb{Z}$-gon; a **stratification-mod-$n$** of a formula $\phi$ is a homomorphism from the derived graph $G_\phi$ of $\phi$ onto the $n$-gon. A formula is **stratifiable mod $n$** iff its derived digraph is a stratification-mod-$n$ graph. Again, if we do not want to mention the ‘$n$’ we will say of a formula that is stratifiable-mod-$n$ that it is **circularly stratifiable**. Equivalently a stratification graph is one where, for all vertices $v$, all paths from $v$ to $v$ are of length 0; a stratification-mod-$n$ graph is one where, for all vertices $v$ and $v'$, all paths from $v$ to $v'$ are of the same length mod $n$, or—equivalently—for all vertices $v$, all paths from $v$ to $v$ are of length 0 mod $n$. **Remark 1** (i) A formula that can be stratified both mod-$n$ and mod-$m$ can be stratified mod-$\text{LCM}(m,n)$, and conversely.
(ii) A formula that is stratifiable-mod-$n$ for arbitrarily large $n$ is just plain stratifiable, and a stratifiable formula is stratifiable-mod-$n$ for all $n$.

*Proof:* (i) Let $\phi$ be such a formula, and $G_\phi$ its derived graph. $\phi$ is both stratifiable-mod-$n$ and stratifiable-mod-$m$, which is to say that there are homomorphisms $f : G_\phi \rightarrow G_n$ and $g : G_\phi \rightarrow G_m$. Consider now the graph $G = \{ \langle f(v), g(v) \rangle : v \in G_\phi \}$ with the obvious edge relation. We want to show that $G$ is the LCM$(m, n)$-gon. It is a graph of size at most $n \cdot m$. There is a homomorphism $\lambda v . \langle f(v), g(v) \rangle : G_\phi \rightarrow G$. Clearly every vertex in $G$ has indegree 1 and outdegree 1, so it is either a gon (if it is connected) or a union of gons (otherwise). It is also clear that if we apply the edge operation of the graph $G$ $n$ times to an ordered pair we reach an ordered pair with the same first component, and if we apply the edge operation $m$ times to an ordered pair we reach an ordered pair with the same second component, so if we apply the edge operation LCM$(m, n)$ times to an ordered pair we get back to that same ordered pair. And LCM$(m, n)$ is the smallest number of times we can apply the edge operation of $G$ to secure this effect. Therefore one of the connected components of $G$ is the LCM$(m, n)$-gon, so $G$ is the LCM$(m, n)$-gon as long as it is connected. To establish that it is, indeed, connected we show that, for all vertices $v, v'$ in $G_\phi$, there is a path from $\langle f(v), g(v) \rangle$ to $\langle f(v'), g(v') \rangle$. Recall that $G_\phi$ is connected ($\phi$ being Crabbé-elementary), so there is a path from $v$ to $v'$, of signed length $d$ say. Its image under $\lambda v . \langle f(v), g(v) \rangle$ is a path of signed length $d$ from $\langle f(v), g(v) \rangle$ to $\langle f(v'), g(v') \rangle$, so $G$ is connected.

For the converse, if $\phi$ is stratifiable-mod-LCM$(m, n)$ then there is a homomorphism $f : G_\phi \rightarrow G_{\text{LCM}(m, n)}$. We compose $f$ with the homomorphism from $G_{\text{LCM}(m, n)}$ onto $G_n$, thereby showing that $\phi$ is stratifiable-mod-$n$; similarly $\phi$ is also stratifiable-mod-$m$.

(ii) If $n > \text{length}(\phi)$, then any stratification-mod-$n$ of $\phi$ is (or, more correctly, can easily be modified into) a stratification. For the other direction, observe that, for every $n$, the $\mathbb{Z}$-gon maps onto the $n$-gon $G_n$.

So the picture is: we only have to worry about stratifiability-mod-$p$ for $p$ prime, and the various stratifiabilities-mod-$p$ are the weakest conditions; stratifiability-mod-$mn$ is stronger than stratifiability-mod-$n$, and all these are weaker than stratifiability *tout court*, which is their conjunction. The various stratifiabilities-mod-$p$ with $p$ prime all seem to be equally weak, and they are all of minimal strength.

It may be worth noting that we cannot strengthen remark 1 by modifying the assumption on the formula to being merely *equivalent* both to a formula that is stratifiable-mod-$n$ and to a formula that is stratifiable-mod-$m$, because of the axiom of counting. For every $n$, the axiom of counting is equivalent (modulo NF) to a formula that is stratifiable mod $n^2$, so the analogue of remark 1 (ii) would tell us that it is equivalent to a stratifiable formula. However it is known that it is not equivalent (modulo NF) to any stratifiable formula.\footnote{We will see a proof of this on p 9.}
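Remark 1 can be test-driven with the toy checker from section 2 (again my code, not part of the essay): a pure 6-cycle is stratifiable mod 2, mod 3 and mod 6 = LCM$(2,3)$, but not mod 4 and not outright.

```python
# reusing stratification_levels from the sketch in section 2
cycle6 = [(f"v{i}", f"v{(i + 1) % 6}") for i in range(6)]
for n in (2, 3, 6, 4, None):
    ok = stratification_levels(cycle6, n) is not None
    print(f"mod {n}: {ok}")   # 2, 3 and 6 succeed; 4 and plain fail
```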
3 Preservation Results for Stratification-mod-$n$

We start with a definition from [4].

**Definition 3** $H(0, \tau) := 1_V$; $H(n + 1, \tau) := (j^n\tau) \cdot H(n, \tau)$.

This $H$ notation will only ever be used with concrete naturals in first argument place.\footnote{So strictly we shouldn’t use these purely concrete chaps as arguments; they should be hidden in the syntax? The trouble with this policy is that we don’t want footnote-sized things like ‘LCM$(n,m)$’.}

The effect of this notation is that, for any $\tau$ and any concrete $n$,
$$(\forall xy)(x \in \tau(y) \longleftrightarrow H(n, \tau)(x) \in H(n + 1, \tau)(y)).$$

The intention behind the design of this family of permutations derived from a single $\tau$ is to prove that, when $\phi$ is stratifiable, $\phi^\tau$ is equivalent to the result of replacing every occurrence of each free variable ‘$v$’ with ‘$H(n_v, \tau)(v)$’, where $n_v$ is the concrete natural number associated to the variable ‘$v$’ in a fixed stratification of $\phi$. In the treatment here, our stratifications are functions from $vbls(\phi)$ to the $\mathbb{Z}$-gon or the $n$-gon and do not take numbers as values. This can be remedied by composing a stratification with a decoration-by-numbers (satisfying the obvious adjacency condition) of the gon in question.

It might be worth minuting other facts about the family of permutations engendered in this way from a permutation $\sigma$. For example $H(n + m, \sigma) = j^m(H(n, \sigma)) \cdot H(m, \sigma)$. I don’t think there is a nice formula for $H(n \cdot m, \sigma)$. This is another manifestation of the fact that there is no natural arithmetic structure on the set of type indices.

We have a theorem of Scott that stratifiable formulae are preserved under the Rieger-Bernays permutation construction. This is an assertion of the form
$$(\forall \pi)(F(\pi) \rightarrow (\forall \phi)(\phi \in \Gamma \rightarrow (\phi^\pi \longleftrightarrow \phi))) \tag{A}$$
or equivalently
$$(\forall \phi)(\phi \in \Gamma \rightarrow (\forall \pi)(F(\pi) \rightarrow (\phi^\pi \longleftrightarrow \phi))).$$
Assertions like (A) have converses of the form
$$(\forall \pi)[(\forall \phi)(\phi \in \Gamma \rightarrow (\phi^\pi \longleftrightarrow \phi)) \rightarrow F(\pi)] \tag{B}$$
and
$$(\forall \phi)[(\forall \pi)(F(\pi) \rightarrow (\phi^\pi \longleftrightarrow \phi)) \rightarrow \phi \in \Gamma]. \tag{C}$$

In this section we consider the project of proving assertions like these where $\Gamma$ is the set of formulae that are stratifiable-mod-$n$. This will involve us in identifying interesting properties of permutations to serve as the ‘$F$’ in the statement of the results.

3.1 Instances of (A): \((\forall \pi)(F(\pi) \rightarrow (\forall \phi)(\phi \in \Gamma \rightarrow (\phi^\pi \longleftrightarrow \phi)))\)

**Proposition 1** If \(\phi\) is stratifiable-mod-\(n\) then it is preserved under all Rieger-Bernays constructions using setlike permutations \(\pi\) s.t. \(H(n, \pi) = \mathbf{1}\).

*Proof:* The proof is a straightforward adaptation of the proof given by Henson. In Henson’s treatment of the stratified case we fix a stratification \(s\) for \(\phi\). [In that treatment stratifications take values in \(\mathbb{Z}\), not in the \(\mathbb{Z}\)-gon.] Then, whenever we look at a subformula ‘\(x \in \sigma(y)\)’ in \(\phi^\sigma\) we replace it by ‘\(H(n, \sigma)(x) \in H(n+1, \sigma)(y)\)’ where \(n\) is the type given to ‘\(x\)’ by the stratification \(s\).
We then observe that, for every variable, all occurrences of that variable in the rewritten version of \(\phi^\sigma\) are prefixed by an ‘\(H(n, \sigma)\)’ where \(n\) is the type given to that variable by the stratification \(s\). Then we appeal to the fact that \(H(n, \sigma)\) is a permutation, so we can reletter ‘\(H(n, \sigma)(x)\)’ as ‘\(x\)’, and this manipulation turns \(\phi^\sigma\) back into \(\phi\).

The difference in the present case is that our subscripts are no longer integers but integers-mod-\(n\), so that if \(i \equiv j \pmod{n}\) we must have \(H(i, \sigma) = H(j, \sigma)\). This is equivalent to requiring that \(H(n, \sigma)\) be the identity. ■

3.2 Instances of (C): \((\forall \phi)[(\forall \pi)(F(\pi) \rightarrow (\phi^\pi \longleftrightarrow \phi)) \rightarrow \phi \in \Gamma]\)

There is a theorem, proved by Pétry and the author ([6], [10], [11]), to the effect that: if a formula is preserved under all Rieger-Bernays constructions using setlike permutations then it is equivalent to a stratified formula. Is there an analogous result to the effect that if a formula is preserved under all Rieger-Bernays constructions using setlike permutations \(\sigma\) with \(H(n, \sigma) = \mathbf{1}\) then it is equivalent to a formula that is stratifiable-mod-\(n\)? Something like that ought to be true, and it’s probably worth proving.

3.3 Instances of (B): \((\forall \pi)[(\forall \phi)(\phi \in \Gamma \rightarrow (\phi^\pi \longleftrightarrow \phi)) \rightarrow F(\pi)]\)

We start with a very easy example:

**Remark 2** If \(f : V \rightarrow V\) (possibly a proper class) satisfies \(\phi \longleftrightarrow \phi^f\) for all stratified expressions then \(f\) must be a setlike permutation.

*Proof:* The axiom of extensionality is stratified, and any \(f\) that preserves it must be onto. If \(f\) preserves an \((n+1)\)-stratified formula then \(H(n, f)\) has to be defined, so \(f\) has to be \(n\)-setlike. ■

One might expect that if \(\pi\) is a permutation that preserves all formulae that are stratifiable-mod-\(n\) then \(H(n, \pi) = \mathbf{1}\). Something with that sort of flavour should be true. The following is a straw in the wind.

**Remark 3** If \(H(n, \sigma) = \mathbf{1}\) and \(H(k, \sigma) = \mathbf{1}\) then \(H(\text{HCF}(n, k), \sigma) = \mathbf{1}\).

*Proof:* This is because, for every $\sigma$, the class of naturals $n$ s.t. $H(n, \sigma) = \mathbf{1}$ is closed under subtraction,\footnote{And it is \textit{prima facie} a class not a set, since it is defined by an unstratified expression.} so we can, as it were, perform Euclid’s algorithm. If $H(n, \sigma) = \mathbf{1}$ and $H(k, \sigma) = \mathbf{1}$, with $n > k$, then reflect that $H(n, \sigma)$ is $(j^k H(n - k, \sigma)) \cdot H(k, \sigma)$. So $j^k H(n - k, \sigma) = H(n, \sigma) \cdot H(k, \sigma)^{-1} = \mathbf{1} \cdot \mathbf{1} = \mathbf{1}$. But then $H(n - k, \sigma) = \mathbf{1}$ as well, $j$ being injective.

This doesn’t actually say that if $\sigma$ both preserves formulae that are stratifiable-mod-$n$ and preserves formulae that are stratifiable-mod-$k$ then it preserves formulae that are stratifiable-mod-HCF$(n, k)$, but it has that flavour. (A finite sanity check of the algebra behind this remark appears just below.)

One wants to say that a permutation that preserves all closed formulae must be an $\in$-automorphism, but that doesn’t seem to be strictly true. At any rate I don’t know how to prove it! Perhaps we can prove it by reasoning about Ehrenfeucht games. What I do know how to prove is that, if $V \simeq V^\sigma$, then $\sigma$ is skew-conjugate to the identity. The only permutation that preserves all expressions (i.e., including open formulae) is $\mathbf{1}$.
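Here is that sanity check. The algebra in the proof of remark 3 (and the composition law $H(n + m, \sigma) = j^m(H(n, \sigma)) \cdot H(m, \sigma)$) uses only the fact that $j$ is an injective endomorphism of the monoid of setlike permutations under composition, so its skeleton can be exercised in any convenient stand-in. Below is a throwaway check in the additive group $\mathbb{Z}_m$ with $\phi(x) = cx \bmod m$ playing the role of $j$; everything here (the group, the endomorphism, the sample numbers) is my choice and models nothing set-theoretic:

```python
from math import gcd

def H(k, sigma, c, m):
    """H(0) = 0 and H(k+1) = phi^k(sigma) + H(k) in Z_m, where the
    injective endomorphism phi(x) = c*x mod m stands in for j."""
    total, power = 0, 1              # power runs through c^i mod m
    for _ in range(k):
        total = (total + power * sigma) % m
        power = (power * c) % m
    return total

m, c, sigma = 63, 4, 9               # gcd(c, m) = 1, so phi is injective

# the composition law H(n+m) = phi^m(H(n)) + H(m):
assert H(5, sigma, c, m) == (pow(c, 2, m) * H(3, sigma, c, m)
                             + H(2, sigma, c, m)) % m

# the set {k : H(k, sigma) = identity} is closed under subtraction,
# hence under HCF -- the content of remark 3 (the identity here is 0):
kernel = [k for k in range(1, 200) if H(k, sigma, c, m) == 0]
g = gcd(kernel[0], kernel[1])
assert g in kernel and all(k % g == 0 for k in kernel)
print(kernel[:5], "HCF:", g)         # multiples of 3 for these numbers
```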
And, once we have identified predicates $F$ that appear in theorems of flavour (B), one wants to find a structure for the set of all permutations on $V$ such that, for each $F$, the class of permutations that are $F$ is a substructure, not a mere subclass.

One thing one might have hoped to prove is that if $\phi$ is stratifiable-mod-$n$ and is logically equivalent to a formula that is stratifiable-mod-$m$ then it is logically equivalent to a formula that is stratifiable-mod-$nm$, but this possibility is denied us by the axiom of counting, as noted above (p 2).

Definitely work to be done in section 3!

4 Cylindrical Types

We should note that stratification-mod-$n$ is not a useful notion from the point of view of comprehension principles, since there are paradoxical objects that are the extensions of formulae that are stratifiable-mod-$n$: one thinks of the $n$-fold Russell class $\{x : x \not\in^n x\}$ (the extension of the formula ‘$x \not\in^n x$’, which is stratifiable-mod-$n$), a paradoxical object even in mere first-order logic. This is discussed in section 4 of [3].

So that’s a dead end, but there is an obvious link from formulae that are stratifiable-mod-$n$ to the theory TZT + Amb$^n$. The usual Specker equiconsistency analysis leads one thence to type theories whose levels are indexed by the $n$-gon. One could perhaps call these theories “type theory mod $n$”, and that is what I shall do here; the proper name will be “TC$_n$T” (“theory of $n$ cylindrical types”). Let’s be formal about it.

**Definition 4** The language $L(TC_n T)$, where $n$ is a concrete natural number, has two binary relation symbols: ‘=’ and ‘∈’. Its variables each have a type index as an integral part, and those type indices are precisely the elements of the $n$-gon.

The axioms of $TC_n T$ are extensionality at each type, as with TZT, but there is a subtlety with the set comprehension axioms. One cannot allow $(\exists x)(\forall y)(y \in x \longleftrightarrow y \not\in^n y)$ to be an axiom (for obvious reasons) even tho’ this formula is a wff of $L(TC_n T)$ and has the syntactic form of a comprehension axiom. One allows set comprehension only for the old TZT axioms. To be formal about it, a wff that looks like a comprehension axiom is adopted as an axiom only if it is possible to rejig the type indices in it so that the resulting formula is an axiom of TZT. Thus the axioms of $TC_n T$ are “closed under rotation”, or *ambiguous* in traditional parlance.

The fact that the existence of $\{x : x \not\in^n x\}$ is not a comprehension axiom does not mean that $\{x : x \not\in^n x\}$ cannot exist at any of the $n$ levels; it might exist at some. However it cannot exist at all of them, and that’s why we cannot have $(\exists x)(\forall y)(y \in x \longleftrightarrow y \not\in^n y)$ as an axiom (scheme).

Now that we know that NF is consistent (see Holmes [9], in preparation) we also know that $TC_n T$ is consistent for every concrete $n$: any model $M$ of NF straightforwardly gives rise to a model $M^{(n)}$ of $TC_n T$, and all such models are typically ambiguous. Altho’ no model of $TC_2 T$ can contain the double Russell class $\{x : (\forall y)(x \in y \rightarrow y \not\in x)\}$ at both levels, we don’t know whether or not there can be a model of $TC_2 T$ that contains this object at one of its two levels . . . and there are of course more complicated analogues of this question for larger values of 2. It’s an old result (it was in my Ph.D.
thesis, with a much improved proof by Crabbé [1] subsequently) that $TZT + \text{Amb}^n$ refutes AC, and by essentially the same mechanism as does $TZT + \text{Amb}$.

5 Modulo-$n$ analogues of strongly cantorian

In this section we consider the property “$\iota^n \upharpoonright x$ exists”, which is stratifiable-mod-$n$. It’s an analogue of *strongly cantorian*. Lots of things to be said about it. Is this generalisation of strong-cantorian-ness a good notion of small set? In the categorial sense, that is?

I noticed years ago the fact that altho’ the existence of $\iota \upharpoonright x$ clearly implies the existence of $\iota^n \upharpoonright x$, the converse does not seem to hold. If $\iota^2 \upharpoonright x$ exists then certainly $x \sqcup \iota``x$ is cantorian, but that (and its analogues for $n > 2$) seems to be as far as one can go. It would appear that, in principle, there might be sets $x$ s.t. $\iota^n \upharpoonright x$ exists for some $n$ but which are nevertheless not strongly cantorian. [I’m guessing that the assertion that such sets exist is invariant; it might be an idea to write out a proof.] The property “$\iota^n \upharpoonright x$ exists” is inherited by subsets in the same way that strong-cantorian-ness is, so it is an analogue of ‘strongly cantorian’ rather than a mere weakening of it, like ‘cantorian’.

The possible existence of such sets is worth noting in the present context, since for them one can prove an analogue of subversion of stratification for formulae that are stratifiable-mod-$n$. Subversion of stratification says that, if $M$ is a strongly cantorian set, and $\phi$ an arbitrary formula, then $\{x \in M : \phi^M(x)\}$ exists. ($\phi^M$ is the result of restricting all quantifiers in $\phi$ to $M$.) The analogue here would say that, if $\iota^n \upharpoonright M$ exists and $\phi$ is stratifiable-mod-$n$, then $\{x \in M : \phi^M(x)\}$ exists.

Just as subversion for strongly cantorian sets gives us interpretations into (extensions of) NF of fully unstratified set theories, subversion for sets $x$ for which $\iota^n \upharpoonright x$ exists will give us interpretations into (extensions of) NF of set theories satisfying syntactic constraints correspondingly less onerous than full stratification. Does this open up a vein of novel, more delicate, relative consistency proofs? Possibly, but not if we are adopting an axiom of infinity: the assumption that there is an (infinite) $x$ s.t. $\iota^n \upharpoonright x$ exists is as strong as the assumption that there is an infinite strongly cantorian set. This triviality is worth minuting because we will make use of it elsewhere (see p. 4).

**Remark 4** (i) If $x$ is a wellorderable set s.t. $\iota^n \upharpoonright x$ exists then $x$ is strongly cantorian. (ii) If there is an infinite $x$ and a concrete $n$ such that $\iota^n \upharpoonright x$ exists then the axiom of counting holds.

**Proof:** (i) If $x$ is a wellorderable set s.t. $\iota^n \upharpoonright x$ exists then the order type of any wellordering of $x$ is certainly going to be less than $\Omega$, $\Omega_1, \ldots$,\footnote{$\Omega$ is the order type of the set of ordinals; $\Omega_1 = T\Omega$, and so on.} so we can assume without loss of generality that $x$ is an initial segment $X$ of the ordinals. This means that $\iota^n \upharpoonright X$ exists, and that in turn means that $T^n \upharpoonright X$ exists, and that in turn means that we can prove by induction on the ordinals that $T^n \upharpoonright X$ is the identity. So, for every $\alpha \in X$, $T^n\alpha = \alpha$.
For every ordinal $\alpha$ (and so in particular for every $\alpha \in X$) we have $\alpha = T\alpha \lor \alpha < T\alpha \lor \alpha > T\alpha$. The second disjunct implies (apply $T$ to both sides) $T\alpha < T^2\alpha$, giving $\alpha < T\alpha < T^2\alpha < \cdots < T^n\alpha$, contradicting $T^n\alpha = \alpha$; the third disjunct is refuted similarly. So $T \upharpoonright X$ exists beco’s it is the identity, so $\iota \upharpoonright X$ exists as well.

(ii) The property “$\iota^n \upharpoonright x$ exists” is preserved by power set as well as by subset, so if there is even one infinite set which has it then $\mathbb{N}$ will have it as well. (Just as: $\mathbb{N}$ is strongly cantorian if there is even one infinite strongly cantorian set.) But $\mathbb{N}$ is wellordered, so we can apply part (i). \hfill \Box

The other direction (inferring “$\iota^n \upharpoonright \mathbb{N}$ exists” for any concrete $n$ from the axiom of counting) is easy. Thus, for every (concrete) $n$, the axiom of counting is equivalent modulo NF to a formula that is stratifiable-mod-$n$.

However if the axiom of infinity is not assumed we do get some play. Let Mac$_n$ be Mac with separation restricted to formulae that are $\Delta_0$ and stratifiable-mod-$n$. Analogues of the result in [8], to the effect that Mac + TCI can be interpreted in KF, can be obtained, saying that Mac$_n$ + TCI can be interpreted into KF, but these results are weaker than the result in [8]. However these refined constructions could turn out to be useful should there turn out to be theories of the form Mac$_n \cup \{A\}$ (where $A$ is some formula not a theorem of Mac . . . ), but no such examples leap to mind. Not to the author’s mind anyway: $\exists$NO might have sounded like a starter, but it is inconsistent with the existence of $\iota^n \upharpoonright x$ for all $x$; this last follows from remark 4 part (i). The upshot is that $\exists$NO is incompatible with Mac$_n$, the point being that $\iota^n \upharpoonright$ the representative set of wellorderings would exist and the quotient would be strongly cantorian.

Reflect that if $\iota^n \upharpoonright x$ exists then $\iota^{nk} \upharpoonright x$ exists for all concrete $k$, for the following reason. RUSC$(R)$ always exists, so RUSC$^k(R)$ exists for all $R$ and all concrete $k$; in particular RUSC$^n(\iota^n \upharpoonright x)$ exists, so $\iota^n \upharpoonright x$ composed with RUSC$^n(\iota^n \upharpoonright x)$ exists, and that is $\iota^{2n} \upharpoonright x$. And so on for all the other multiples of $n$.

5.0.1 Finitising the restriction of the scheme of $\Delta_0$ separation to formulae that are stratifiable-mod-$n$

We know how to finitely axiomatise stratified $\Delta_0$ separation, and we can get full $\Delta_0$ separation from that axiomatisation simply by adding the existence of $\iota \upharpoonright x$ for all $x$. It seems fairly clear that the way to modify the collection of rudimentary functions to obtain separation for $\Delta_0$ formulae that are stratifiable-mod-$n$ is to replace the function giving $\{\langle \iota(x), y \rangle : x \in y \in A\}$ by the function giving $\{\langle \iota^{n+1}(x), y \rangle : x \in y \in A\}$. It seems clear, but it might be an idea to write out the details; all it would involve is a simple modification of the proof in the second edition of the monograph [5].

6 Applications to Duality

The special case of stratification-mod-$n$ which will concern us here is $n = 2$.
The context throughout this section is NF.

**Definition 5** The dual $\hat{\phi}$ of a formula $\phi$ is the formula obtained from $\phi$ by replacing all occurrences of ‘$\in$’ in $\phi$ by ‘$\notin$’.

It is known that $\phi \longleftrightarrow \hat{\phi}$ is a theorem of NF whenever $\phi$ is a closed stratified formula. Permutation models can be found in which $\phi \longleftrightarrow \hat{\phi}$ fails for some unstratified $\phi$, but it remains an open question whether or not there are models in which $\phi \longleftrightarrow \hat{\phi}$ holds for all $\phi$. It turns out that if we have AC$_2$ then we can prove the relative consistency of the scheme $\phi \longleftrightarrow \hat{\phi}$ for all $\phi$ that are stratifiable-mod-2. This will be theorem 1 below, and it is the principal aim of this section to prove it.

We consider the sequence of permutations $\mathbf{1}$, $c$, $jc \cdot c$, $j^2c \cdot jc \cdot c$, \ldots, where $c$ is the complementation permutation. The subscripts are all small (are all numerals, in fact), so we will be using the (original) notation of Henson, in which these permutations are written ‘$c_i$’, thus: $c_1 := c$; $c_{i+1} := j(c_i) \cdot c$ (rather than the $H(i, c)$ notation used above).

We will need some lemmas:

**Lemma 1** AC$_2$ implies that, for all permutations $\tau$, $j\tau \cdot c$ has fixed points iff $\tau$ has no odd cycles.

*Proof:*

L $\rightarrow$ R. Suppose $X$ is a fixed point for $j\tau \cdot c$. Then, for each $\tau$-cycle $C$, we must have $\tau^{-1}(X \cap C) = C \setminus X$, and that means that $|C|$ must be even (or infinite). This direction does not need AC$_2$.

R $\rightarrow$ L. This direction needs AC$_2$. Suppose $\tau$ has no odd cycles. Each $\tau$-cycle splits into precisely two $\tau^2$-cycles. Use AC$_2$ to pick, for each $\tau$-cycle, one of the two $\tau^2$-cycles into which it splits. The union of the set of chosen $\tau^2$-cycles is a fixed point for $j\tau \cdot c$.

**Lemma 2** (i) All the $c_i$ are involutions; (ii) all the $c_i$ commute with each other; (iii) assuming AC$_2$ the $c_{2i}$ have fixed points and the $c_{2i+1}$ have none.

*Proof:* We start by noting a key triviality: $c$ commutes with $j\tau$ for all $\tau$.

(i) We prove this by induction on $i$. Suppose $c_i$ is an involution. $c_{i+1} = jc_i \cdot c$. So $(c_{i+1})^2 = (jc_i \cdot c)^2 = jc_i \cdot c \cdot jc_i \cdot c$. Now by the key triviality we can rearrange this to $jc_i \cdot jc_i \cdot c \cdot c = \mathbf{1}$. In fact this even shows that all products of the $c_i$ are involutions.

(ii) The key triviality implies that $jc$ commutes with $j^{n+1}c$ for all $n$, and so on by induction on ‘$n$’. This means that the various permutations that we multiply together to obtain $c_i$ can be multiplied together *in any order* and we still get $c_i$.

(iii) We prove this by induction, using lemma 1. Some of the cases of (iii) we can establish without any use of AC$_2$. Clearly $c_1 = c$ has no fixed points. Also $c_2$ does have fixed points, since any ultrafilter on $V$ is a fixed point, and—although we need choice to create nonprincipal ultrafilters—there are always principal ultrafilters around. $c_3$ now cannot have fixed points, because the proof that if $\tau$ has fixed points then $j\tau \cdot c$ has none (this was the first part, the L $\rightarrow$ R direction, of lemma 1) does not need AC$_2$.

We will need the concept of a transversal for a disjoint family; it is a set that meets every member of the family on a singleton.
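Lemma 1 is easy to machine-check in miniature before we go on: let $\tau$ be a permutation of a finite set $U$ and let the finite avatar of $j\tau \cdot c$ act on subsets by $X \mapsto \tau``(U \setminus X)$. A throwaway script (mine, not part of the essay; only the two final assertions matter):

```python
from itertools import chain, combinations

def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a dict on a finite set."""
    seen, lengths = set(), []
    for start in perm:
        if start in seen:
            continue
        x, n = start, 0
        while x not in seen:
            seen.add(x)
            x, n = perm[x], n + 1
        lengths.append(n)
    return lengths

def has_fixed_subset(perm):
    """Does X |-> perm"(U \\ X), the finite avatar of j(tau).c, fix
    some subset X of U?"""
    U = set(perm)
    all_X = chain.from_iterable(combinations(U, k)
                                for k in range(len(U) + 1))
    return any({perm[u] for u in U - set(X)} == set(X) for X in all_X)

tau_even = {1: 2, 2: 1, 3: 4, 4: 3}    # two 2-cycles: no odd cycles
tau_odd  = {1: 2, 2: 3, 3: 1, 4: 4}    # a 3-cycle and a fixed point
assert all(n % 2 == 0 for n in cycle_lengths(tau_even))
assert has_fixed_subset(tau_even)      # no odd cycles: fixed point exists
assert not has_fixed_subset(tau_odd)   # odd cycles: no fixed point
```

On a finite set no choice is needed, of course; AC$_2$ enters only when one has to split a proper class of cycles uniformly.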
We will make much use of the fact that an involution without fixed points can be thought of as a partition of $V$ into pairs.

**Lemma 3** Any two involutions-without-fixed-points whose corresponding partitions-of-$V$-into-pairs have transversals are conjugate.

*Proof:* We do not need AC$_2$ for this. First we establish that if $P$ is a transversal for a partition $\Pi$ of $V$ into pairs then its cardinality is $|V|$. Clearly $|\Pi| = T|P|$, since we can send each piece of $\Pi$ to the unique singleton $\subseteq P$ that meets it. Observe that there is a bijection between $\iota``V$ and $\Pi \times \{0, 1\}$, as follows. For each $x$ there is a unique $p_x \in \Pi$ with $x \in p_x$. If $x \in P$ we send $\{x\}$ to $\langle p_x, 0 \rangle$; if $x \notin P$ we send $\{x\}$ to $\langle p_x, 1 \rangle$.

Finally, if $\pi_1$ and $\pi_2$ are two involutions-without-fixed-points then not only are their transversals both of size $|V|$ but the two involutions are conjugate, as follows. Let the transversals be $P_1$ and $P_2$. These two transversals are in 1–1 correspondence, by a map $\pi^*$, say. Any such $\pi^*$ can be extended to a permutation $\pi$ of the universe by adding all the ordered pairs $\langle \pi_1(x), \pi_2(\pi^*(x)) \rangle$ for $x \in P_1$; one checks easily that $\pi \cdot \pi_1 = \pi_2 \cdot \pi$, so that $\pi$ conjugates $\pi_1$ to $\pi_2$.

This proof tells us nothing about the cycle type of permutations that conjugate $\pi_1$ and $\pi_2$. Fortunately we do not need any such information in what follows.

**Lemma 4** (AC$_2$) $c$ and $c_3$ are conjugate.

*Proof:* We established in the rider to part (iii) of lemma 2 that both $c$ and $c_3$ are involutions without fixed points, so they can be thought of as partitions of $V$ into pairs. The partition-of-$V$-into-pairs that corresponds to the permutation $c$ has a definable transversal: simply pick from each pair $\{x, V \setminus x\}$ that element that contains the empty set. There does not appear to be a definable transversal for the partition-of-$V$-into-pairs that corresponds to the permutation $c_3$, but AC$_2$ will provide one. But now we can apply lemma 3 to conclude that $c$ and $c_3$ are conjugate.

As remarked above, there is a definable transversal for the partition-into-pairs corresponding to the permutation $c$. However I see no way of producing a definable transversal for the set of pairs corresponding to $c_3$, and we do seem to need AC$_2$ at this point. However it is clear that AC$_2$ is not needed anywhere else, so it may be a worthwhile exercise to see if a definable transversal can be found for [the partition-of-$V$-into-pairs corresponding to] $c_3$, and thereby eliminate all use of AC.

**Lemma 5** If AC$_2$ then there is a permutation model containing two permutations $\sigma$ and $\tau$ satisfying
\[ (\forall xy)(x \in y \longleftrightarrow \sigma(x) \not\in \tau(y)) \quad \text{and} \quad (\forall xy)(x \in y \longleftrightarrow \tau(x) \not\in \sigma(y)), \]
which is to say: $\sigma = j\tau \cdot c$ and $\tau = j\sigma \cdot c$.

*Proof:* Consider what happens in the model $V^\pi$, where $\pi$ is the permutation whose existence is promised in lemma 4. $\pi$ conjugates $c$ to $c_3$, which is to say
\[ \pi \cdot c \cdot \pi^{-1} = j^2 c \cdot jc \cdot c. \]
Lift by $j$:
\[ j\pi \cdot jc \cdot j\pi^{-1} = j^3 c \cdot j^2 c \cdot jc. \]
Compose both sides with $c$ on the right:
\[ j\pi \cdot jc \cdot j\pi^{-1} \cdot c = j^3 c \cdot j^2 c \cdot jc \cdot c. \]
But $c$ commutes with $j\pi^{-1}$, giving
\[ j\pi \cdot jc \cdot c \cdot j\pi^{-1} = j^3 c \cdot j^2 c \cdot jc \cdot c, \]
which says that $j\pi$ conjugates $c_2$ with $c_4$.
We now have, in $V^\pi$, two permutations of the universe, namely $\sigma$ (which was $c$) and $\tau$ (which was $c_2$), with $\sigma = j\tau \cdot c$ and $\tau = j\sigma \cdot c$.

**Theorem 1** Con(NF + AC$_2$) $\rightarrow$ Con(NF + AC$_2$ + Duality for formulae that are stratifiable-mod-2)

*Proof:* It will suffice to establish that the existence of the two permutations $\sigma$ and $\tau$ provided by lemma 5 implies that duality holds in $V^\pi$ for formulae that are stratifiable-mod-2.

If a formula $\phi$ is stratifiable-mod-2 then its variables can be assigned to two types $\text{yin}$ and $\text{yang}$ in such a way that in subformulae like ‘$x = y$’ the two variables receive the same type and in subformulae like ‘$x \in y$’ the two variables receive different types. Let us associate $\sigma$ to variables given type $\text{yin}$ in the assignment and associate $\tau$ to variables given type $\text{yang}$ in the assignment. ‘$x \in y$’ is equivalent to ‘$\sigma(x) \not\in \tau(y)$’, and if $x$ is of type $\text{yin}$ we make this replacement. ‘$x \in y$’ is also equivalent to ‘$\tau(x) \not\in \sigma(y)$’, and if $x$ is of type yang we make this replacement. We deal with equations analogously.

In the rewritten version of $\phi$ we find that every variable ‘$x$’ of type yin now appears only as ‘$\sigma(x)$’ and that every variable ‘$y$’ of type yang now appears only as ‘$\tau(y)$’. So we can reletter ‘$\sigma(x)$’ as ‘$x$’, and ‘$\tau(y)$’ as ‘$y$’, and the result is $\hat{\phi}$.

It’s worth bearing in mind that $\sigma$ and $\tau$ retain in $V^\pi$ all the stratified properties they had in their previous life in $V$, where they were $c$ and $c_2$. Thus they commute, and $\sigma^2 = \tau^2 = \mathbf{1}$. Observe also that $j(\sigma\tau) = j\sigma \cdot j\tau = \tau \cdot c \cdot c \cdot \sigma = \tau\sigma = \sigma\tau$, so $\sigma\tau$ is actually an $\in$-automorphism of $V^\pi$. It is a nontrivial automorphism because $\sigma$ and $\tau$ are not inverse to each other: $\tau$ has fixed points and $\sigma$ does not. By the remark in the proof of part (i) of lemma 2 it’s an involution.

Can we use this technique to obtain models in which duality holds for formulae that are stratifiable-mod-$p$ for other primes? No. If we were to attempt to rejig the above development to obtain a proof for formulae that are stratifiable-mod-3 then we would be looking for an $i$ such that $c_i$ and $c_{i+3}$ are conjugate. To show that two involutions are conjugate we are likely to need AC$_2$, but unfortunately AC$_2$ will ensure that if we have two $c_i$ whose subscripts are of different parity then precisely one of them will have fixed points, so they cannot be conjugate.

We see this most starkly in the case of formulae which are stratifiable-mod-1, which is to say all formulae. To find—by this method—a permutation model in which duality held for all formulae we would want the model to contain an antimorphism: a permutation $\tau$ such that $\tau = j\tau \cdot c$. This would involve finding a permutation $\tau$ in our home model such that $\tau$ and $j\tau \cdot c$ were conjugate. Unfortunately, as lemma 1 tells us, AC$_2$ implies, for all permutations $\tau$, that $j\tau \cdot c$ has fixed points iff $\tau$ has no odd cycles. So, in particular, $\tau$ and $j\tau \cdot c$ cannot be conjugate. Very well, so we drop AC$_2$, in the hope that this might open up the possibility of an involution $\tau$ such that $\tau$ and $j\tau \cdot c$ have the same cycle type.
Such a $\tau$ would not be definable. But then we would need AC$_2$, after all, to show that $\tau$ and $j\tau \cdot c$ are conjugate. Clearly if we are to prove the relative consistency of the scheme $\phi \longleftrightarrow \hat{\phi}$ for all $\phi$ we need a new idea.

I mentioned earlier that duality for sentences that are stratifiable-mod-2 is much weaker than the conjectured duality for all sentences. In one respect, however, the result we have just shown does more: the existence of $\tau$ and $\sigma$ combining as above would appear to be more than is needed to establish duality for sentences that are stratifiable-mod-2. The existence of the $\tau$ and $\sigma$ stands to duality for sentences that are stratifiable-mod-2 in the same way that the existence of an antimorphism stands to full duality: in both cases the first party to the relation seems, on the face of it, much stronger than the second. The existence of an antimorphism certainly implies duality, but the converse looks most unlikely, since the existence of an antimorphism strongly contradicts AC. It ought to be possible to obtain models of duality for sentences that are stratifiable-mod-2 without actually exhibiting functions that witness it.

6.1 Full Duality?

It may be that the set of things fixed by \( \sigma\tau \) is a model of NF + full Duality. Something to check!

First we check that \( \sigma\tau \) (which is the same as \( \tau\sigma \)) is an \( \in \)-automorphism. For all \( x \) and \( y \) we have \( x \in y \iff \sigma(x) \not\in \tau(y) \), and \( \sigma(x) \not\in \tau(y) \iff \tau\sigma(x) \in \sigma\tau(y) = \tau\sigma(y) \), so \( \tau\sigma \) is an \( \in \)-automorphism as desired.

Next we check that if \( \pi \) is an \( \in \)-automorphism the set of fixed points is a model of NF. The big gap here is extensionality: we would have to show that every nonempty fixed set has a fixed member.

Finally we check that the set of fixed points of \( \sigma\tau \) is additionally a model of duality. Observe that, for all such fixed \( x \), we have \( x = \sigma(\tau(x)) \), whence \( \sigma^{-1}(x) = \tau(x) \). But \( \sigma^2 = \mathbf{1} \), so \( \sigma(x) = \tau(x) \). Now suppose \( x \) and \( y \) are both fixed. Then \( x \in y \iff \sigma(x) \not\in \tau(y) = \sigma(y) \). So \( \sigma \) is an antimorphism of the fixed points. But this relies on the set of fixed points being extensional. It may be that we can ensure this by a judicious choice of the permutation in lemma 5. The current proof of that lemma just appeals to AC\(_2\), and it may be that a more refined analysis is possible. We seek a \( \pi \) that conjugates \( c \) to \( j^2c \cdot jc \cdot c \) and moreover has the extra feature that in \( V^\pi \) the set \( \{ x : \sigma(x) = \tau(x) \} \) is extensional. Must turn this into a condition on \( \pi \ldots \) I think
\[ V^\pi \models (\forall x)(x \neq \emptyset \land \sigma\tau(x) = x \rightarrow (\exists y \in x)(\sigma\tau(y) = y)) \]
is
\[ (\forall x)(\pi(x) \neq \emptyset \land \sigma\tau(x) = x \rightarrow (\exists y \in \pi(x))(\sigma\tau(y) = y)), \]
which becomes
\[ (\forall x)(x \neq \emptyset \land j^2c \cdot jc(x) = x \rightarrow (\exists y \in \pi(x))(j^2c \cdot jc(y) = y)), \]
where \( \pi \) conjugates \( c \) and \( j^2c \cdot jc \cdot c \). Let us write ‘\( F \)’ for \( \{ x : x = jc \cdot j^2c(x) \} \) to keep things readable. The \( \pi \) we seek has got to inject \( F \) into \( \{ y : y \cap F \neq \emptyset \} \)—a set I elsewhere notate “\( \mathcal{D}(F) \)”.
‘\( \mathcal{D} \)’ is an upside-down ‘\( \mathcal{P} \)’, since \( \mathcal{D}(x) \) is \( V \setminus \mathcal{P}(V \setminus x) \) and is thus dual to \( \mathcal{P} \). Observe that \( \mathcal{D}(x) \) is always a moiety: it is \( V \setminus \mathcal{P}(V \setminus x) \), and the complement of a power set (of anything other than \( V \)) is always the same size as \( V \). This is beco’s every set (other than \( V \) itself) is included in the complement of a singleton, and the power set of a complement of a singleton is a principal prime ideal and therefore a moiety. So there’s no problem on that score. It’s not blindingly obvious to me that it cannot be done.

7 Work still to do

Are there anywhere in the world embeddings that are elementary for formulae that are stratifiable-mod-\( n \)? Between iterated CO models perhaps . . . ?

There remains of course the challenge of proving consistency of duality for all sentences, not merely those that are stratifiable-mod-2. But more to the point are the possibilities of extending to formulae that are stratifiable-mod-$n$ things known about the rather more restricted class of stratified formulae—and these I haven’t started thinking about. Here are some, in no particular order.

Is there any interest in versions of Forti-Honsell Antifoundation along the lines “Every set picture that is a stratification-mod-$n$ graph is a picture of a set”?

The axiom of counting is unstratified and not equivalent modulo NF to any stratified formula but is, for each concrete $n$, equivalent modulo NF to a formula that is stratifiable-mod-$n$. It’s also invariant. The same goes for AxCount$_{\leq}$ (with a bit more work) since—for any concrete $k$—AxCount$_{\leq}$ can be written as $(\forall n \in \mathbb{N})(n \leq T^k n)$.

André Pétry suggests a generalisation of a result of his-and-mine alluded to earlier ([6], [10], and [11]) to the effect that if two structures are elementarily equivalent for formulae that are stratifiable-mod-$n$ then they have stratimorphic (as it were) ultrapowers.

One could investigate whether the construction of [7] could be modified to encompass expressions that are stratifiable-mod-$n$. That looks messy.

There are natural settings where one encounters embeddings that are elementary for stratifiable formulae, and where one might hope to get embeddings that are elementary for some of these larger classes of formulae. One such setting is that of CO models: the embedding from the ground model into the hereditarily low sets is elementary for stratifiable formulae. (That particular example is probably not a good one, because if the inclusion embedding is elementary for formulae that are stratifiable-mod-$n$ for even one $n$ then the hereditarily low sets cannot contain any Quine atoms.) For another, let $\mathcal{M}$ be a structure for $\mathcal{L}$. Consider the class of those $m \in M$ s.t. $m$ is fixed by all permutations of $M$ that, for all $n$, are $j^n$ of something. It’s an elementary substructure as long as it’s extensional. Now use instead those permutations $\pi$ of $M$ s.t. $H(m, \pi) = \mathbf{1}$. Now the class of fixed things is a substructure elementary for expressions that are stratifiable-mod-$m$ (again, assuming extensionality).

str(ZF) is the theory axiomatised by the stratifiable axioms of ZF; by analogy str$_n$(ZF) will be the theory axiomatised by those axioms of ZF that are stratifiable-mod-$n$. ZF can be interpreted in str(ZF) + IO. (IO is the axiom “every set is the same size as a set of singletons”.)
Observe that IO is a theorem of str$_n$(ZF), since str$_n$(ZF) proves that $\iota^n \upharpoonright x$ exists for all $x$; so ZF can be interpreted in str$_n$(ZF). At this stage I cannot see how to prove that str$_n$(ZF) = ZF. There are parallel questions about the fragments of Mac.

Stratified parameter-free induction seems to prove no more than the nonexistence of a universal set. How about stratifiable-mod-$n$ parameter-free induction . . . what does that do?

Every weakly stratifiable theorem of first-order logic has a cut-free weakly stratifiable proof; every stratifiable theorem of first-order logic has a stratifiable proof (Crabbé, [2]); are there analogues for stratification-mod-$n$?

Stratifiable parameter-free $\in$-induction implies the nonexistence of the universal set. (If none of your members are the universal set, you can’t be either.) It’s not known if the converse holds. However the strengthening of the converse one would consider in this context, namely “the non-existence of the universal set implies $\in$-induction for parameter-free formulae that are stratifiable-mod-$n$”, clearly does not go through: $\in$-induction for parameter-free formulae that are stratifiable-mod-$n$ implies $(\forall x)(x \not\in^+ x)$, and that clearly doesn’t follow from the nonexistence of $V$.

In stratifiable set theories one has to have for one’s pairing function something that gives ‘$x$’ and ‘$y$’ the same type in ‘$z = \langle x, y \rangle$’. This is to ensure that the composition of two relations always exists. Of course if we have separation for formulae that are stratifiable mod $n$ then we can allow the types of ‘$x$’ and ‘$y$’ in ‘$z = \langle x, y \rangle$’ to differ by any integer multiple of $n$.

There is a connection here with the universal-existential conjecture for TCZT. The corresponding conjecture for TC$_2$T does not hold: the expression $(\forall x)(\exists y)(x \in y \longleftrightarrow y \not\in x)$ is universal-existential and is a wff of $L(TC_2T)$, but its truth-value is not constant on all models of TC$_2$T. It says that there is no $x = \overline{Bx}$. Now this last is a theorem of NF, so it must be true at at least one of the two types—if both types contain an $x = \overline{Bx}$ we obtain a contradiction by enquiring about membership between them. It can [apparently\footnote{To be honest, this is guesswork on my part. To find a model of TC$_2$T in which this holds at one type and not at the other will need at least as much work as finding a model of NF.}] be true at both; so it’s universal-existential but its truth-value is not constant on all models of TC$_2$T. This formula also crops up in attempts to prove the consistency of duality by means of the Barwise approximants, in my notes in universal3.tex . . . but that may be mere coincidence.

There is an old question about whether the atoms of a model of NFU can be indiscernible. We know that they are indiscernible wrt stratifiable formulae; now that we’ve started looking into stratification-mod-$n$ it is natural to wonder whether one might be able to show that the atoms of a model of NFU must be indiscernible wrt expressions that are stratifiable-mod-2. At this stage it’s not looking hopeful.

Consider “$\Box$(Duality for sentences that are stratifiable-mod-2)”. Is this consistent? Does it imply AC$_2$?

ZF + Foundation and ZF + antifoundation are alike extensions of ZF + Coret’s axiom “every set is the same size as a wellfounded set” conservative for stratifiable sentences. Does this hold also for sentences that are stratifiable-mod-$n$?
References

[1] Crabbé, M. “Typical ambiguity and the Axiom of Choice”. Journal of Symbolic Logic 49 (1984), pp. 1074–1078.

[2] Crabbé, M. “Stratification and cut-elimination”. Journal of Symbolic Logic 56 (1991), pp. 213–226.

[3] Esser, O. and Forster, T. “Relaxing stratification”. Bull. Belg. Math. Soc. Simon Stevin 14 (2007), pp. 247–258. Also available from www.dpmms.cam.ac.uk/~tf/relaxing.pdf

[4] Esser, O. and Forster, T. “Seeking structure for the collection of Rieger-Bernays permutation models”. Unpublished, but available from www.dpmms.cam.ac.uk/~tf/permstalk.pdf

[5] Forster, T. “Set Theory with a Universal Set”. Oxford Logic Guides 20, Oxford University Press, 1992.

[6] Forster, T. “Permutation models and stratified formulae, a preservation theorem”. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik 36 (1990), pp. 385–388. Also available from www.dpmms.cam.ac.uk/~tf/strZF.pdf

[7] Forster, T. “AC fails in the natural analogues of $L$ and the cumulative hierarchy that model the stratified fragment of ZF”. Contemporary Mathematics 36 (2004). Also available from www.dpmms.cam.ac.uk/~tf/zmlmany.pdf

[8] Forster, T. “A new datatype of scansets and some applications: interpreting Mac in KF”. Unpublished, but available from www.dpmms.cam.ac.uk/~tf/scansets.pdf

[9] Holmes, M. R. “The consistency of NF”. In preparation.

[10] Pétry, A. “Une caractérisation algébrique des structures satisfaisant les mêmes sentences stratifiées”. Cahiers du Centre de Logique (Louvain-la-Neuve) 4 (1982), pp. 7–16.

[11] Pétry, A. “Stratified languages”. Journal of Symbolic Logic 57 (1992), pp. 1366–1376.

[12] Al-Johar, Z., Holmes, M. R. and Bowler, N. “The axiom scheme of acyclic comprehension”. Notre Dame Journal of Formal Logic 55, no. 1 (2014). http://projecteuclid.org/euclid.ndjfl/1390246432
A NOTE ON TT-GMRES FOR THE SOLUTION OF PARAMETRIC LINEAR SYSTEMS*

OLIVIER COULAUD†, LUC GIRAUD‡, AND MARTINA IANNACITO§

Abstract. We study the solution of linear systems with tensor product structure using the Generalized Minimal RESidual (GMRES) algorithm. To manage the computational complexity of high-dimensional problems, our approach relies on low-rank tensor representation, focusing specifically on the Tensor Train format. We implement and experimentally study the TT-GMRES algorithm. Our analysis bridges the heuristic methods proposed for TT-GMRES by Dolgov [Russian J. Numer. Anal. Math. Modelling, 28 (2013), pp. 149–172] and the theoretical framework of inexact GMRES by Simoncini and Szyld [SIAM J. Sci. Comput., 25 (2003), pp. 454–477]. This approach is particularly relevant in a scenario where a \((d+1)\)-dimensional problem arises from concatenating a sequence of \(d\)-dimensional problems, as in the case of a parametric linear operator or parametric right-hand-side formulation. Thus, we provide backward error bounds that link the accuracy of the computed \((d+1)\)-dimensional solution to the numerical quality of the extracted \(d\)-dimensional solutions. This facilitates the prescription of a convergence threshold ensuring that the \(d\)-dimensional solutions extracted from the \((d+1)\)-dimensional result have the desired accuracy once the solver converges. We illustrate these results with academic examples across varying dimensions and sizes. Our experiments indicate that TT-GMRES retains the theoretical rounding-error properties observed in matrix-based GMRES.

Key words. GMRES, inexact GMRES, backward stability, Tensor Train format

AMS subject classifications. 65F10, 15A69, 65G50

1. Introduction. In numerous scientific and engineering domains, mathematical models often involve solving \(d\)-dimensional linear systems with a tensor product structure. Such systems can be represented as
\[ Ax = b, \]
where \(A\) is a multilinear operator acting on \(\mathbb{R}^{n_1 \times \cdots \times n_d}\), the tensor \(b\) represents the right-hand side, and the tensor \(x\) is the sought solution. As the number of dimensions \(d\) increases, the storage and computational costs grow exponentially — this phenomenon is commonly referred to as the “curse of dimensionality”. Addressing these challenges requires algorithms that balance accuracy with tractable computational and memory demands.

Two main strategies have emerged for solving high-dimensional linear systems, one arising from optimization and one from the numerical linear algebra domain. The first approach is based on optimization methods. It includes the Alternating Linear Scheme, the Modified Alternating Linear Scheme [16], the Alternating Minimal Energy method [8], and the Density Matrix Renormalization Group approach [21]. These techniques break down the high-dimensional system into a sequence of lower-dimensional minimization subproblems, iteratively updating the solution. The second strategy extends iterative solvers from classical matrix computations, such as the conjugate gradient, the Generalized Minimal RESidual (GMRES), and the biconjugate gradient method [31], to high-dimensional spaces, introducing tensors and multilinear operators. This second class of methods uses well-established techniques within a tensor context, generalizing key properties and heuristics to high-dimensional linear systems, cf. [2, 7, 19].

*Received January 31, 2024. Accepted December 31, 2024. Published online on January 27, 2025.
Recommended by Lars Grasedyck.

†Concace, Inria Center at the University of Bordeaux, France. ORCID: 0000-0003-2924-284X.

‡Concace, Inria Center at the University of Bordeaux, France. ORCID: 0000-0002-7062-7672.

§Corresponding author. Dipartimento di Matematica and (AM)$^2$, Alma Mater Studiorum Università di Bologna, Piazza di Porta San Donato 5, I-40127 Bologna, Italy (firstname.lastname@example.org). ORCID: 0000-0003-3354-2538.

High-dimensional problems pose challenges due to their computational demands and the prohibitive storage requirements of dense tensors, even in moderate dimensions. To address these issues, compression techniques such as the High Order Singular Value Decomposition [6], Hierarchical Tucker [11], Tensor Train (TT) [22] and Tensor Network [20] decompositions are used. Among these, the TT format has gained particular attention due to its flexibility and efficiency in handling high-dimensional tensors. While tensor compression effectively reduces storage requirements, it introduces rounding errors that affect numerical computations, particularly for iterative solvers that heavily rely on compression. Balancing the trade-off between maintaining low ranks and achieving the desired level of accuracy is fundamental when developing an iterative solver. Assessing and controlling the propagation of rounding errors in iterative solvers has thus become a critical component of numerical analysis for high-dimensional problems.

This work focuses on the analysis of GMRES with the Modified Gram–Schmidt orthogonalization kernel (MGS-GMRES) adapted to the Tensor Train format (TT-GMRES) for high-dimensional linear systems. Our TT-GMRES algorithm incorporates tensor compression at various steps of the iterative process, raising important questions about the stability and accuracy of the computed solutions.

The first theoretical demonstration that MGS-GMRES is backward stable dates back to 2006. In [24], the authors analyse MGS-GMRES in standard IEEE arithmetic. The fundamental assumptions are that the unit round-off $u$ bounds both the data representation error and the rounding error of all the elementary floating-point operations. In [1], the authors consider a variable-accuracy framework for studying MGS-GMRES experimentally. In this context, the data storage precision is decoupled from the unit round-off that controls the rounding of floating-point operations. Additionally, it is assumed that the data storage precision is independent of the hardware and that the perturbation on the data is norm-wise bounded. Under these working hypotheses, they experimentally show that the backward stability of MGS-GMRES holds.

Building on the theoretical backward stability results of [24] for MGS-GMRES in classical matrix computation, we examine our TT-GMRES within the variable-accuracy framework. Specifically, this study experimentally investigates the interplay between tensor compression, inexact arithmetic and backward error analysis, linking these aspects to ensure robust performance in the tensor setting. Our TT-GMRES approach is compared with the heuristic TT-GMRES variant proposed in [7]. Additionally, we theoretically justify the heuristic proposed in [7] for TT-GMRES and link that variant with the theory of inexact GMRES presented in [30]. Our experimental examples emphasize that TT-GMRES from [7] inherits the numerical features of the inexact GMRES variant. Furthermore, we investigate the relationship between TT-GMRES and the block-GMRES variant.
Additionally, we provide backward error bounds that relate the quality of the computed $(d+1)$-dimensional solutions to the accuracy of the $d$-dimensional solutions extracted from them. This analysis is particularly relevant for parametric problems, which involve efficiently solving a sequence of $d$-dimensional problems by utilizing the tensor structure in a $(d+1)$-dimensional space. The theoretical findings are supported by numerical experiments showcasing the effectiveness of TT-GMRES on academic problems of varying dimensions and sizes. These experiments highlight that TT-GMRES can achieve accurate solutions while preserving the memory-efficient benefits of tensor compression. Furthermore, they confirm that TT-GMRES inherits desirable numerical stability properties similar to those of its matrix-based counterpart.

The remaining sections of this paper are organized as follows. Section 2 provides the necessary background on tensors and their representation in TT format, along with the formulation of parametric problems. Section 3 introduces GMRES and its tensor variants, detailing the algorithmic structure of TT-GMRES. Section 4 establishes theoretical connections between inexact GMRES and the heuristics used for TT-GMRES from [7], and presents the backward error bounds for parametric systems. Section 5 includes numerical experiments illustrating the algorithm’s performance and its application to high-dimensional test cases. Finally, Section 6 offers concluding remarks and directions for future research.

2. Preliminaries on tensors and parametric problems. To enhance readability, we utilize the following notation for the various mathematical objects described. Small Latin letters represent scalars and vectors (e.g., \(a\)), with the context clarifying the objects’ nature. Matrices are represented by capital Latin letters (e.g., \(A\)), tensors by bold small Latin letters (e.g., \(\mathbf{a}\)), multilinear operators between two tensor spaces by bold calligraphic capital letters (e.g., \(\mathcal{A}\)), and the tensor representation of linear operators by bold capital Latin letters (e.g., \(\mathbf{A}\)). We use the “MATLAB notation”, that is, we denote all the indices along a mode with a colon (“:”). For example, if we are given a matrix \(A \in \mathbb{R}^{n \times n}\), then \(A(:, i)\) represents the \(i\)th column of \(A\). The tensor product is denoted by \(\otimes\) and the Kronecker product by \(\otimes_k\). The Euclidean inner product is denoted by \(\langle \cdot, \cdot \rangle\) for both vectors and tensors. We use \(\|\cdot\|\) to denote the Euclidean norm for vectors and the Frobenius norm for matrices and tensors.

A linear operator \(\mathcal{A}: \mathbb{R}^{n_1 \times \cdots \times n_d} \to \mathbb{R}^{n_1 \times \cdots \times n_d}\) between tensor spaces is represented by a tensor \(\mathbf{A} \in \mathbb{R}^{(n_1 \times n_1) \times \cdots \times (n_d \times n_d)}\) with respect to the canonical basis. The L2 norm of the linear operator \(\mathcal{A}\) is denoted by \(\|\mathbf{A}\|_2\). If \(d = 2\), then the L2 norm of the matrix associated with a simpler linear operator between two linear vector spaces is considered.

Section 2.1 describes the key elements of the Tensor Train (TT) notation for tensors and linear operators between tensor products of spaces. The advantages of using this formalism to solve linear systems naturally defined in high-dimensional vector spaces are also presented.
Section 2.2 examines the scenario where one of the linear operator modes is associated with a parameter. When dealing with parametric linear operators, our focus is on solving a single linear system for all discrete parameter values in TT format. Section 2.3 presents the scenario in which the right-hand sides depend on a parameter. We describe the construction of a unique linear system in TT format when there are multiple right-hand sides depending on a parameter.

2.1. The Tensor Train format. Let \(\mathbf{x}\) be a \(d\)-order tensor in \(\mathbb{R}^{n_1 \times \cdots \times n_d}\) and \(n_k\) the dimension of mode \(k\) for every \(k \in \{1, \ldots, d\}\). Storing the full tensor \(\mathbf{x} \in \mathbb{R}^{n_1 \times \cdots \times n_d}\) has a memory cost of \(\mathcal{O}(n^d)\) with \(n = \max_{i \in \{1, \ldots, d\}} \{n_i\}\), so several compression techniques have been proposed over the years to reduce the memory consumption [6, 11, 22]. For this work, the most suitable tensor representation is the Tensor Train (TT) format [22]. The main concept of TT is to represent a \(d\)-order tensor as the contraction of \(d\) 3-order tensors. This contraction is a generalization of the matrix–vector product to tensors. The Tensor Train representation of \(\mathbf{x} \in \mathbb{R}^{n_1 \times \cdots \times n_d}\) is
\[ \mathbf{x} = \underline{\mathbf{x}}_1 \underline{\mathbf{x}}_2 \cdots \underline{\mathbf{x}}_d, \]
where \(\underline{\mathbf{x}}_k \in \mathbb{R}^{r_{k-1} \times n_k \times r_k}\) is the \(k\)th TT core for \(k \in \{1, \ldots, d\}\), with \(r_0 = r_d = 1\). Note that \(\underline{\mathbf{x}}_1 \in \mathbb{R}^{r_0 \times n_1 \times r_1}\) and \(\underline{\mathbf{x}}_d \in \mathbb{R}^{r_{d-1} \times n_d \times r_d}\) reduce essentially to matrices, but for consistency in notation we represent them as tensors. The \(k\)th TT core of a tensor is denoted by the same bold letter underlined, with a subscript \(k\). The value \(r_k\) is called the \(k\)th TT rank. Given an index \(i_k\), we denote the \(i_k\)th matrix slice of the \(k\)th TT core \(\underline{\mathbf{x}}_k\) with respect to mode 2 by \(\underline{X}_k(i_k)\), i.e., \(\underline{X}_k(i_k) = \underline{\mathbf{x}}_k(:, i_k, :)\). Each element of the TT tensor \(\mathbf{x}\) can be expressed as the product of \(d\) matrices, that is,
\[ \mathbf{x}(i_1, \ldots, i_d) = \underline{X}_1(i_1) \cdots \underline{X}_d(i_d), \]
with \(\underline{X}_k(i_k) \in \mathbb{R}^{r_{k-1} \times r_k}\) for every \(i_k \in \{1, \ldots, n_k\}\) and \(k \in \{2, \ldots, d - 1\}\), while \(\underline{X}_1(i_1) \in \mathbb{R}^{1 \times r_1}\) and \(\underline{X}_d(i_d) \in \mathbb{R}^{r_{d-1} \times 1}\). It is important to note that \(\underline{X}_1(i_1)\) and \(\underline{X}_d(i_d)\) are actually vectors, but for the sake of consistency they are written as matrices with a single row or column. TT-format tensors are called TT vectors.
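For readers who like to see the index contraction spelled out, the following sketch (a toy illustration of ours, not taken from any TT library; sizes and ranks are arbitrary) builds random TT cores, evaluates one entry as the product of matrix slices \(\underline{X}_1(i_1) \cdots \underline{X}_d(i_d)\), and compares against the dense reconstruction; the final line anticipates the compression ratio defined next.

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate x(i_1, ..., i_d) = X_1(i_1) @ ... @ X_d(i_d) from the
    TT cores, each of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1."""
    out = np.ones((1, 1))
    for core, i in zip(cores, idx):
        out = out @ core[:, i, :]           # multiply by the i-th slice
    return out.item()

rng = np.random.default_rng(1)
shape, ranks = (4, 5, 6, 3), (1, 2, 3, 2, 1)   # toy sizes and TT ranks
cores = [rng.standard_normal((ranks[k], shape[k], ranks[k + 1]))
         for k in range(len(shape))]

# dense reconstruction by contracting the cores one mode at a time
x = np.ones((1, 1))
for core in cores:
    x = np.tensordot(x, core, axes=(x.ndim - 1, 0))
x = x.reshape(shape)

assert np.isclose(x[1, 4, 2, 0], tt_entry(cores, (1, 4, 2, 0)))
ratio = sum(c.size for c in cores) / x.size
print(f"storage ratio TT/dense: {ratio:.2f}")
```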
If $\mathbf{x} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ is a tensor in TT format, the compression ratio is the storage cost of $\mathbf{x}$ in TT format divided by the storage cost in dense format, i.e., $$\frac{\sum_{i=1}^{d} r_{i-1} n_i r_i}{\prod_{j=1}^{d} n_j},$$ where $r_i$ is the $i$th TT rank of $\mathbf{x}$. The TT ranks $r_i$ must remain small to achieve a significant benefit from this formalism. One drawback of the TT format is that it may become less efficient when adding two TT vectors. Given two TT vectors $\mathbf{x}$ and $\mathbf{y}$ with $k$th TT ranks $r_k$ and $s_k$, respectively, the $k$th TT rank of $\mathbf{x} + \mathbf{y}$ is less than or equal to $r_k + s_k$ (see [9]). The TT formalism also allows for the compressed expression of linear operators between tensor product spaces. Given a linear operator $\mathcal{A} : \mathbb{R}^{n_1 \times \cdots \times n_d} \rightarrow \mathbb{R}^{n_1 \times \cdots \times n_d}$, with the canonical basis fixed for $\mathbb{R}^{n_1 \times \cdots \times n_d}$, we associate with $\mathcal{A}$ the tensor $\mathbf{A} \in \mathbb{R}^{(n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$ in the standard way. A tensor associated with a linear operator between tensor product spaces will therefore be referred to as a tensor operator. The TT representation of the tensor operator $\mathbf{A} \in \mathbb{R}^{(n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$, commonly referred to as a TT matrix, is expressed as $$\mathbf{A} = \underline{\mathbf{a}}_1 \cdots \underline{\mathbf{a}}_d,$$ where $\underline{\mathbf{a}}_k \in \mathbb{R}^{r_{k-1} \times n_k \times n_k \times r_k}$ is the $k$th TT core, with $r_0 = r_d = 1$. For every $i_k, j_k \in \{1, \ldots, n_k\}$ and $k \in \{1, \ldots, d\}$, let $\underline{A}_k(i_k, j_k) \in \mathbb{R}^{r_{k-1} \times r_k}$ be the $(i_k, j_k)$th slice with respect to modes $(2, 3)$ of $\underline{\mathbf{a}}_k$. Therefore, the $(i_1, j_1, \ldots, i_d, j_d)$th entry of $\mathbf{A}$ can be expressed as $$\mathbf{A}(i_1, j_1, \ldots, i_d, j_d) = \underline{A}_1(i_1, j_1) \cdots \underline{A}_d(i_d, j_d).$$ The storage cost has the same structure as before, namely $O(dnmr^2)$, where $n = \max_{i \in \{1, \ldots, d\}} \{n_i\}$, $m = \max_{i \in \{1, \ldots, d\}} \{m_i\}$ for a general operator with mode sizes $n_i \times m_i$ (here $m_i = n_i$), and $r = \max_{i \in \{1, \ldots, d\}} \{r_i\}$. It is worth noting that the $k$th TT rank of the contraction of a TT matrix and a TT vector is less than or equal to the product of the $k$th TT ranks of the two contracted objects, as explained in [9]. For example, given the TT matrix $\mathbf{A} \in \mathbb{R}^{(n_1 \times m_1) \times \cdots \times (n_d \times m_d)}$ and the TT vector $\mathbf{x} \in \mathbb{R}^{m_1 \times \cdots \times m_d}$ with $k$th TT ranks $r_k$ and $s_k$, respectively, their contraction $\mathbf{b} = \mathbf{A}\mathbf{x}$ is a TT vector with $k$th TT rank less than or equal to $r_k s_k$. The potential growth of TT ranks is a crucial point in the implementation of algorithms using the TT formalism, as it may lead to a shortage of memory and prevent the computation from being completed. To address this issue, a rounding algorithm was proposed in [22] to reduce the TT ranks. The TT rounding algorithm takes a TT vector $\mathbf{x}$ and a relative accuracy $\delta$ as inputs and provides a TT vector $\tilde{\mathbf{x}}$ as output, at relative distance at most $\delta$ from the input, i.e., $\|\mathbf{x} - \tilde{\mathbf{x}}\| \leq \delta \|\mathbf{x}\|$.
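To make these objects concrete, the following minimal numpy sketch (our own illustration with hypothetical helper names, not the ttpy implementation used in Section 5) evaluates an entry of a TT vector as a product of core slices and computes the compression ratio defined above.

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate x(i_1, ..., i_d) as the matrix product
    X_1(i_1) ... X_d(i_d); cores[k] has shape (r_{k-1}, n_k, r_k)."""
    out = np.eye(1)                      # 1 x 1 seed, since r_0 = 1
    for core, i in zip(cores, idx):
        out = out @ core[:, i, :]        # multiply by the i-th slice
    return out.item()                    # r_d = 1, so the result is 1 x 1

def compression_ratio(cores, shape):
    """Storage of the TT representation divided by dense storage."""
    tt_cost = sum(c.size for c in cores)  # sum_k r_{k-1} n_k r_k
    return tt_cost / np.prod(shape, dtype=float)

# A random 4-order TT vector with mode size 5 and TT ranks (1, 3, 3, 3, 1).
rng = np.random.default_rng(0)
shape, ranks = (5, 5, 5, 5), (1, 3, 3, 3, 1)
cores = [rng.standard_normal((ranks[k], shape[k], ranks[k + 1]))
         for k in range(4)]
print(tt_entry(cores, (0, 1, 2, 3)), compression_ratio(cores, shape))
```

With these ranks the ratio is $(15 + 45 + 45 + 15)/625 \approx 0.19$; the benefit grows rapidly with the order $d$.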
The computational cost, in terms of floating-point operations, of a TT rounding of $\mathbf{x}$ is $O(dnr^3)$, as stated in [22], if $\mathbf{x} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ is a $d$-order TT vector with $r = \max_{i \in \{1, \ldots, d\}} \{r_i\}$ and $n = \max_{i \in \{1, \ldots, d\}} \{n_i\}$. 2.2. Parameter-dependent linear operators. In this section and the following one, tensor slices play a central role, so we introduce some specific notation. Given a TT vector $\mathbf{a} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ with TT cores $\underline{\mathbf{a}}_k \in \mathbb{R}^{r_{k-1} \times n_k \times r_k}$, $\mathbf{a}^{[k,i_k]}$ denotes the $i_k$th slice with respect to mode $k$. Henceforth, we will only take slices with respect to the first mode, so instead of writing $\mathbf{a}^{[1,i_1]}$ for the $i_1$th slice on the first mode, we will simply write $\mathbf{a}^{[i_1]}$. Similarly, $\mathbf{A}^{[i_1]}$ represents the $(i_1,i_1)$th slice in modes $(1,2)$ of a tensor operator $\mathbf{A} \in \mathbb{R}^{(n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$. This section focuses on a specific type of parametric tensor operator expressed as $\mathbf{A}_\alpha = \mathbf{B}_0 + \alpha \mathbf{B}_1$, where $\alpha \in \mathbb{R}$ and $\mathbf{B}_0$ and $\mathbf{B}_1$ are two tensor operators of $\mathbb{R}^{(n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$. Assuming that $\alpha$ takes $p$ different real values in the interval $[a,b]$, we define $p$ linear systems of the form $$\mathbf{A}_\ell \mathbf{y}_\ell = \mathbf{b}_\ell, \tag{2.1}$$ where $\mathbf{A}_\ell = \mathbf{B}_0 + \alpha_\ell \mathbf{B}_1$, $\mathbf{b}_\ell \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ and $\alpha_\ell \in [a,b]$ for every $\ell \in \{1,\ldots,p\}$. At this level, one can choose between either solving each system independently or solving them simultaneously in a higher-dimensional space. The latter choice will be referred to as the “all-in-one” approach. The “all-in-one” linear system can be expressed as $$\mathbf{A} \mathbf{x} = \mathbf{b}, \tag{2.2}$$ where $\mathbf{A} \in \mathbb{R}^{(p \times p) \times (n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$ is a tensor operator such that $$\mathbf{A}(h,\ell,i_1,j_1,\ldots,i_d,j_d) = \begin{cases} \mathbf{A}_\ell(i_1,j_1,\ldots,i_d,j_d) & \text{if } h = \ell, \\ 0 & \text{if } h \neq \ell, \end{cases} \tag{2.3}$$ and the right-hand side is $\mathbf{b} \in \mathbb{R}^{p \times n_1 \times \cdots \times n_d}$ defined as $$\mathbf{b}(\ell,i_1,\ldots,i_d) = \mathbf{b}_\ell(i_1,\ldots,i_d) \tag{2.4}$$ for $i_k,j_k \in \{1,\ldots,n_k\}$, $k \in \{1,\ldots,d\}$ and $\ell,h \in \{1,\ldots,p\}$. The tensor operator $\mathbf{A}$ is written in a compact form as $$\mathbf{A} = \mathbb{I}_p \otimes \mathbf{B}_0 + \text{diag}(\alpha_1,\ldots,\alpha_p) \otimes \mathbf{B}_1.$$ The $(\ell,\ell)$th slice of $\mathbf{A}$ with respect to modes $(1,2)$ is $$\mathbf{A}^{[\ell]} = \mathbf{B}_0 + \alpha_\ell \mathbf{B}_1 = \mathbf{A}_\ell, \tag{2.5}$$ and, similarly, the $\ell$th slice of $\mathbf{b}$ with respect to the first mode is $\mathbf{b}^{[\ell]} = \mathbf{b}_\ell$ by construction. Consequently, equation (2.1) can also be written as $$\mathbf{A}^{[\ell]} \mathbf{x}^{[\ell]} = \mathbf{b}^{[\ell]},$$ with $\mathbf{x}^{[\ell]} = \mathbf{y}_\ell$. This implies that, after solving the “all-in-one” system defined by equation (2.2), a specific parameter’s solution can be obtained by selecting a slice from the “all-in-one” solution along the parameter mode (first mode).
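To illustrate this construction in the simplest possible setting, the sketch below assembles and solves the “all-in-one” system for $d = 1$, where the tensor products reduce to Kronecker products of matrices; the data are randomly generated toy inputs of our own choosing, not the TT-format construction used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 4, 3
B0 = rng.standard_normal((n, n))
B1 = rng.standard_normal((n, n))
alphas = np.linspace(1.0, 10.0, p)
bs = [rng.standard_normal(n) for _ in range(p)]   # one RHS per parameter

# "All-in-one" operator I_p (x) B0 + diag(alpha) (x) B1, with stacked RHS.
A = np.kron(np.eye(p), B0) + np.kron(np.diag(alphas), B1)
b = np.concatenate(bs)

x = np.linalg.solve(A, b)

# Extracting the slice for parameter l recovers the individual solution.
l = 2
x_l = x[l * n:(l + 1) * n]
y_l = np.linalg.solve(B0 + alphas[l] * B1, bs[l])
print(np.allclose(x_l, y_l))  # True
```

The block-diagonal structure makes the extraction trivial; in TT format the same selection is simply a slice along the parameter mode.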
The selected slice of the “all-in-one” solution is called the extracted solution. In other words, if an iterative solution is computed at iteration $k$, the extracted solution for the $\ell$th problem, $\mathbf{x}_k^{[\ell]}$, is the $\ell$th slice with respect to mode 1 of the $k$th iterate of the “all-in-one” system, $\mathbf{x}_k$, i.e., $$\mathbf{x}_k^{[\ell]}(i_1,\ldots,i_d) = \mathbf{x}_k(\ell,i_1,\ldots,i_d).$$ Section 4.3 examines the connection between the numerical quality of the extracted solution and that of the individual solution. 2.3. Parameter-dependent right-hand sides. This section considers a specific case of the “all-in-one” approach. The goal is to solve $p$ linear systems that share the same linear operator but have different right-hand sides. If $\mathbf{A}_0 \in \mathbb{R}^{(n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$ is a linear tensor operator, the $\ell$th linear system is defined as $$\mathbf{A}_0 \mathbf{y}_\ell = \mathbf{b}_\ell, \tag{2.6}$$ where $\mathbf{b}_\ell \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ for every $\ell \in \{1, \ldots, p\}$. To simultaneously solve for all the right-hand sides expressed in equation (2.6), we repeat the construction introduced in Section 2.2, except that $\mathbf{A}_0$ is repeated on the “diagonal” of the tensor linear operator $\mathbf{A}$ defined in equation (2.3). Thanks to the tensor properties, the tensor operator $\mathbf{A} \in \mathbb{R}^{(p \times p) \times (n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$ can be written as $$\mathbf{A} = \mathbb{I}_p \otimes \mathbf{A}_0, \tag{2.7}$$ so that $\mathbf{A}^{[\ell]} = \mathbf{A}_0$ for every $\ell \in \{1, \ldots, p\}$. The right-hand side $\mathbf{b}$ is defined as in the previous section, i.e., $\mathbf{b}^{[\ell]} = \mathbf{b}_\ell$. The case of multiple right-hand sides can be formulated and solved either as an “all-in-one” problem or as a block problem, as explained in Section 4.2. Furthermore, in Section 4.4, the quality of individual solutions is linked with the numerical quality of the “all-in-one” solution in this specific case. 3. Preliminaries on GMRES and block GMRES. This section provides an overview of the GMRES algorithm and of its matrix and tensor variants. Section 3.1 describes the main properties of the GMRES algorithm in classical matrix computation. Section 3.2 presents the block variant of GMRES, which is used to solve linear systems with multiple right-hand sides in matrix format. Finally, Section 3.3 outlines the TT-GMRES algorithm. 3.1. Preconditioned GMRES in matrix computation. When using an iterative solver to compute the solution of a linear system, it is recommended to use a stopping criterion based on a backward error [12, 17, 24]. The iterative scheme should be stopped when the backward error becomes smaller than a user-prescribed threshold. This means that the current iterate can be considered as the exact solution of a perturbed problem where the relative norm of the perturbation is smaller than the threshold. Two norm-wise backward errors can be considered for iterative schemes. Let $Ax = b$ be the linear system to be solved, with $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$. The norm-wise backward error associated with the approximate solution $x_k$ at iteration $k$, allowing perturbations of both $A$ and $b$, is denoted by $\eta_{A,b}(x_k)$ [15].
The following equality was proved in [26]: $$\eta_{A,b}(x_k) = \min \{\tau > 0 : \exists\, \Delta A, \Delta b \ \text{with} \ \|\Delta A\| \leq \tau \|A\|,\ \|\Delta b\| \leq \tau \|b\| \ \text{and} \ (A + \Delta A)x_k = b + \Delta b\} = \frac{\|Ax_k - b\|}{\|A\|_2\|x_k\| + \|b\|}. \tag{3.1}$$ In certain situations, a simpler backward error criterion based solely on perturbations of the right-hand side can also be considered, leading to the second possible choice: $$\eta_b(x_k) = \min \{\tau > 0 : \exists\, \Delta b \ \text{with} \ \|\Delta b\| \leq \tau \|b\| \ \text{and} \ Ax_k = b + \Delta b\} = \frac{\|Ax_k - b\|}{\|b\|}. \tag{3.2}$$ Starting from the zero initial guess, GMRES [29] constructs a series of approximations $x_k$ in Krylov subspaces of increasing dimension $k$ such that the residual norm of the sequence of iterates decreases over these nested spaces. More specifically, $$x_k = \arg\min_{x \in \mathcal{K}_k(A,b)} \|b - Ax\|,$$ with $$\mathcal{K}_k(A,b) = \text{span}\{b, Ab, \ldots, A^{k-1}b\}$$ being the $k$-dimensional Krylov subspace spanned by $A$ and $b$. In practice, a matrix $V_k = [v_1, \ldots, v_k] \in \mathbb{R}^{n \times k}$ with orthonormal columns and an upper Hessenberg matrix $\bar{H}_k \in \mathbb{R}^{(k+1) \times k}$ are iteratively constructed using the Arnoldi procedure such that $\text{span}\{V_k\} = \mathcal{K}_k(A,b)$ and $$AV_k = V_{k+1}\bar{H}_k, \quad \text{with} \quad V_{k+1}^T V_{k+1} = I_{k+1}.$$ This relation is often referred to as the Arnoldi relation. As a result, $x_k = V_k y_k$ with $$y_k = \arg\min_{y \in \mathbb{R}^k} \|\beta e_1 - \bar{H}_k y\|,$$ where $\beta = \|b\|$ and $e_1 = (1, 0, \ldots, 0)^T \in \mathbb{R}^{k+1}$. In exact arithmetic, the following equality holds between the least-squares residual and the true residual: $$\|\tilde{r}_k\| = \|\beta e_1 - \bar{H}_k y_k\| = \|b - Ax_k\|.$$ In finite-precision calculation, this equality may no longer hold. However, it has been demonstrated that the GMRES method is backward stable with respect to $\eta_{A,b}$ [24]. This means that during the iterations $\eta_{A,b}(x_k)$ eventually decreases to $\mathcal{O}(u)$, where $u$ is the unit round-off of the floating-point arithmetic used for the calculations. Algorithm 1 provides an overview of GMRES; for a more detailed presentation, see [28, 29]. To control the memory footprint of the solver, a restart parameter is used to define the maximal dimension of the search Krylov space, since the orthonormal basis $V_k$ must be stored. If the algorithm fails to converge after reaching the maximum dimension of the search space, it is restarted using the final iterate as the initial guess for a new cycle of GMRES. Furthermore, it is often necessary to consider preconditioning to speed up convergence. Using right-preconditioned GMRES consists in considering a non-singular matrix $M$, the so-called preconditioner, which approximates the inverse of $A$ in some sense. In this case, the preconditioned system $AMt = b$ is solved using GMRES, and the solution $t$ is then used to compute the solution of the original system, that is, $x = Mt$. Algorithm 2 outlines the right-preconditioned GMRES for a restart parameter $m$ and a convergence threshold $\varepsilon$.
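For illustration, a compact numpy sketch of one GMRES cycle with the structure of Algorithm 1 (MGS Arnoldi plus a small least-squares problem) and the stopping criterion $\eta_{A,b}(x_k) < \varepsilon$ follows; it assumes no lucky breakdown and is our own illustration, not the solver used in the experiments.

```python
import numpy as np

def gmres(A, b, m=50, eps=1e-10):
    """One cycle of GMRES with modified Gram-Schmidt (cf. Algorithm 1)."""
    n = b.size
    beta = np.linalg.norm(b)
    V = np.zeros((n, m + 1)); V[:, 0] = b / beta     # Arnoldi basis
    H = np.zeros((m + 1, m))                          # upper Hessenberg
    normA = np.linalg.norm(A, 2)
    for k in range(m):
        w = A @ V[:, k]
        for i in range(k + 1):                        # MGS orthogonalization
            H[i, k] = V[:, i] @ w
            w -= H[i, k] * V[:, i]
        H[k + 1, k] = np.linalg.norm(w)               # assumes no breakdown
        V[:, k + 1] = w / H[k + 1, k]
        e1 = np.zeros(k + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        x = V[:, :k + 1] @ y
        # Norm-wise backward error eta_{A,b}(x_k) of equation (3.1).
        eta = np.linalg.norm(A @ x - b) / (normA * np.linalg.norm(x)
                                           + np.linalg.norm(b))
        if eta < eps:
            return x, True
    return x, False

rng = np.random.default_rng(2)
A = np.eye(60) + 0.1 * rng.standard_normal((60, 60))  # well-conditioned toy
b = rng.standard_normal(60)
x, ok = gmres(A, b)
print(ok, np.linalg.norm(A @ x - b))
```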
3.2. Block GMRES in matrix computation. Block GMRES is a variant of GMRES that can be used to solve a linear system with multiple right-hand sides. The system is represented as $AX = B$ where $B = [b^{[1]}, \ldots, b^{[p]}] \in \mathbb{R}^{n \times p}$ and $X = [x^{[1]}, \ldots, x^{[p]}] \in \mathbb{R}^{n \times p}$. The algorithm uses a block variant of the Arnoldi relation to build the search space, which is defined as the sum of the Krylov subspaces associated with each of the right-hand sides, assuming the initial guess is zero. To simplify the explanation, we assume that the block of right-hand sides is full rank and that there is no partial convergence during the iterations. For a complete description of the latter situation and an efficient approach to deal with it, refer to [27]. The search space is $$\mathcal{K}_k(A,B) = \bigoplus_{i=1}^p \mathcal{K}_k(A,b^{[i]}).$$ Algorithm 1 $x, \text{hasConverged} = \text{GMRES}(A, b, m, \varepsilon)$ 1: **input:** $A, b, m, \varepsilon$ 2: $r_0 = b$, $\beta = \|r_0\|$ and $v_1 = r_0 / \beta$ 3: **for** $k = 1, \ldots, m$ **do** 4: $w = Av_k$ 5: **for** $i = 1, \ldots, k$ **do** ▶ MGS variant 6: $\bar{H}_{i,k} = \langle v_i, w \rangle$ 7: $w = w - \bar{H}_{i,k}v_i$ 8: **end for** 9: $\bar{H}_{k+1,k} = \|w\|$ 10: $v_{k+1} = w / \bar{H}_{k+1,k}$ 11: $y_k = \arg\min_{y \in \mathbb{R}^k} \|\beta e_1 - \bar{H}_ky\|$ 12: $x_k = V_k y_k$ 13: **if** $(\eta_{A,b}(x_k) < \varepsilon)$ **then** 14: $\text{hasConverged} = \text{True}$ 15: **break** 16: **end if** 17: **end for** 18: **return:** $x = x_k, \text{hasConverged}$ Algorithm 2 $x, \text{hasConverged} = \text{Right-GMRES}(A, M, b, x_0, m, \varepsilon)$ 1: **input:** $A, M, b, x_0, m, \varepsilon$ 2: $\text{hasConverged} = \text{False}$ 3: $x = x_0$ 4: **while** not (hasConverged) **do** 5: $r = b - Ax$ ▶ Iterative refinement step with at most $m$ GMRES iterations on $AM$ 6: $t_k, \text{hasConverged} = \text{GMRES}(AM, r, m, \varepsilon)$ 7: $x = x + Mt_k$ ▶ Update the unpreconditioned solution with the computed correction 8: **end while** 9: **return:** $x, \text{hasConverged}$ In this space, the $k$th iterate is defined as the minimizer of the Frobenius norm of the block residual, that is, $$X_k = \arg\min_{X \in \mathcal{K}_k(A,B)} \|B - AX\|_F.$$ The residual norm of each individual right-hand side is thus minimized over the sum of the Krylov spaces associated with all right-hand sides. Therefore, if the $i$th column of the residual block is considered, the $k$th iterate associated with the $i$th right-hand side is $$x^{[i]}_k = \arg\min_{x^{[i]} \in \mathcal{K}_k(A,B)} \|b^{[i]} - Ax^{[i]}\|.$$ For a more detailed discussion of the block GMRES variant, see [27, 28]. 3.3. Preconditioned GMRES in Tensor Train format. Let $\mathbf{A} \in \mathbb{R}^{(n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$ be a tensor operator and $\mathbf{b} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ a tensor; the general tensor linear system is $$\mathbf{A}\mathbf{x} = \mathbf{b}, \tag{3.3}$$ where \( \mathbf{x} \in \mathbb{R}^{n_1 \times \cdots \times n_d} \). It is important to note that if we set \( d = 1 \), we recover the standard linear system from classical matrix computation. To solve equation (3.3), we can use a tensor-extended version of GMRES. Since all operations involved in this iterative solver are feasible with the TT formalism, we assume that all objects are expressed in TT format. One major limitation of this approach is the repetition of additions and contractions in the various loops, which results in the growth of TT ranks and potential memory overconsumption. Therefore, it is crucial to introduce compression steps in TT-GMRES. However, special attention must be paid to the selection of the TT rounding parameter to ensure that the prescribed GMRES tolerance \( \varepsilon \) can be achieved. The complete TT-GMRES algorithm is presented in Algorithm 3.
**Algorithm 3** \( x, \text{hasConverged} = \text{TT-GMRES}(A, b, m, \varepsilon, \delta) \) 1: **input:** \( A, b, m, \varepsilon, \delta \) 2: \( r_0 = b, \beta = \|r_0\| \) and \( v_1 = (1/\beta)r_0 \) 3: **for** \( k = 1, \ldots, m \) **do** 4: \( w = \text{TT-round}(Av_k, \delta) \) 5: **for** \( i = 1, \ldots, k \) **do** ▶ MGS variant 6: \( \bar{H}_{i,k} = \langle v_i, w \rangle \) 7: \( w = w - \bar{H}_{i,k}v_i \) 8: **end for** 9: \( w = \text{TT-round}(w, \delta) \) 10: \( \bar{H}_{k+1,k} = \|w\| \) 11: \( v_{k+1} = (1/\bar{H}_{k+1,k})w \) 12: \( y_k = \arg\min_{y \in \mathbb{R}^k} \| \beta e_1 - \bar{H}_ky \| \) 13: \( x_k = \text{TT-round}\left( \sum_{j=1}^{k} y_k(j)v_j, \delta \right) \) 14: **if** \( \eta_{A,b}(x_k) < \varepsilon \) **then** 15: \( \text{hasConverged} = \text{True} \) 16: **break** 17: **end if** 18: **end for** 19: **return:** \( x = x_k, \text{hasConverged} \) **Algorithm 4** \( x, \text{hasConverged} = \text{TT-Right-GMRES}(A, M, b, x_0, m, \varepsilon, \delta) \) 1: **input:** \( A, M, b, x_0, m, \varepsilon, \delta \) 2: \( \text{hasConverged} = \text{False} \) 3: \( x = x_0 \) 4: **while** not (hasConverged) **do** 5: \( r = \text{TT-round}(b - Ax, \delta) \) ▶ Iterative refinement step with at most \( m \) GMRES iterations on \( AM \) 6: \( t_k, \text{hasConverged} = \text{TT-GMRES}(AM, r, m, \varepsilon, \delta) \) 7: \( x = \text{TT-round}(x + Mt_k, \delta) \) ▶ Update the unpreconditioned solution with the computed correction 8: **end while** 9: **return:** \( x, \text{hasConverged} \) Algorithms 3 and 4 introduce an additional input parameter, \( \delta \), which represents the TT rounding threshold. The TT rounding algorithm at accuracy \( \delta \) is applied to the result of the contraction between $A$ and the last Krylov basis vector computed in line 4, to the new Krylov basis vector after orthogonalization in line 9, and to the updated iterative solution in line 13. The purpose of these steps is to balance the rank growth that the tensor contractions and additions of the previous steps may cause. Notice that the MGS orthogonalization kernel plays a key role in the rank growth: when an orthogonalization kernel is applied to low-rank TT vectors, it produces a set of TT vectors of larger TT ranks in order to satisfy the orthogonality constraint. This growth can be balanced by the TT rounding operation, which reduces the TT ranks at the price of degrading the orthogonality of the final basis. In [4], the authors study this phenomenon in detail for several orthogonalization kernels applied to TT vectors. As shown in the numerical experiments in Section 5, the TT rounding accuracy ($\delta$) must be less than or equal to the GMRES target accuracy ($\varepsilon$). 4. Some comments and observations on GMRES in tensor format. Section 4.1 compares the TT-GMRES algorithm in the variable accuracy framework described in Section 3.3 with the TT-GMRES algorithm from [7]. Furthermore, the connection between the TT-GMRES algorithm from [7] and the inexact GMRES theory is established. In Section 4.2, two possible approaches for solving multiple right-hand sides are compared: the TT-GMRES algorithm applied with the “all-in-one” construction described in Section 2.3, and the block GMRES variant. Finally, Sections 4.3 and 4.4 describe backward error bounds in TT format for parametric linear systems and multiple right-hand sides, as introduced in Sections 2.2 and 2.3, respectively.
4.1. TT-GMRES with variable rounding versus inexact GMRES. In this section, we first recall some of the existing results from the literature on GMRES and inexact GMRES, and then we draw some connections with TT-GMRES. In exact arithmetic, two important properties hold for GMRES: the Arnoldi basis is perfectly orthogonal (i.e., $V_k^T V_k = I_k$), and, as a corollary, the least-squares residual norm is equal to the linear system residual norm. However, in finite-precision computation, it is known that these two equalities no longer hold [24]. Despite this, GMRES is backward stable, that is, $\eta_{A,b}(x_k) \approx O(u)$ once $\kappa_2(V_k) > 4/3$; cf. [24]. The quantity $\kappa_2(V_k)$ can thus be used to detect the stagnation of the backward error: $\eta_{A,b}(x_k)$ reaches $O(u)$ by the iteration at which $\kappa_2(V_k)$ becomes greater than $4/3$. Remaining in exact arithmetic, the idea of relaxing the accuracy of the matrix–vector product in the Arnoldi procedure was first investigated experimentally in [3]. The inaccuracy in the matrix–vector product is modelled by introducing a perturbation matrix $E_k$ (i.e., $w = (A + E_k)v_k$) whose relative norm defines the amount of inaccuracy. Later, a series of papers [10, 30, 32] provided theoretical justification, showing that in exact arithmetic the norm of $E_k$ can grow as the inverse of the residual norm of the linear system times a prescribed threshold $\eta$, while still ensuring that the attainable GMRES residual norm will reach this threshold $\eta$. These results motivated a heuristic proposed for the TT-GMRES algorithm described in [7] and [25]. The heuristic increases the TT rounding threshold proportionally to the inverse of the least-squares residual norm. The TT rounding computes a TT vector which can be viewed as a perturbation of the original vector, with a relative perturbation norm bounded by the threshold. Upon initial examination, an issue with this TT-GMRES algorithm is that the perturbation is applied to $w$, the outcome of the matrix–vector product, rather than to the linear operator used to compute it. However, we show below that the TT rounding can also be interpreted as a perturbation of the linear operator, which partially justifies the proposed heuristic. In Algorithm 3, during step 4, the computation $w = \text{TT-round}(Av_k, \delta)$ can be written as the application of an inexact version of the operator to $v_k$, that is, $$w = Av_k + \Delta w = (A + E_k)v_k,$$ where $\Delta w$ is a tensor whose norm is bounded by $\|\Delta w\| \leq \delta \|A v_k\|$ by a property of the TT rounding. In fact, if we think of $R$ as the Householder reflector [4] that maps the normalized TT vector $\Delta \tilde{w} = \Delta w / \|\Delta w\|$ to $v_k$ (reflectors are involutory, so $R$ also maps $v_k$ to $\Delta \tilde{w}$), we do have $$w = (A + E_k)v_k,$$ where $E_k = \|\Delta w\| R$, so that $\|E_k\| = \|\Delta w\| \leq \delta \|A v_k\| \leq \delta \|A\|$. If a sufficiently good preconditioner is used, the linear operator seen by TT-GMRES is such that $\|A\| \approx 1$. This results in selecting the TT rounding threshold $\delta_k$ at the $k$th GMRES iteration as $$\delta_k = O(\|\beta e_1 - \bar{H}_k y_k\|^{-1}),$$ as originally proposed in the first paper on inexact GMRES, where the approach was referred to as relaxed GMRES [3]. We have shown that a variable TT rounding strategy can be viewed as an inexact matrix–vector product in the inexact GMRES framework, but some other gaps need to be filled to fully assess the robustness of TT-GMRES.
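In code, the heuristic amounts to a one-line rule; a hedged sketch follows, where the function name and the cap are our own choices and the exact scaling used in [7] may differ by constants.

```python
def rounding_threshold(delta, ls_residual_norm, delta_max=1e-2):
    """Relaxation heuristic: let the TT rounding accuracy at iteration k
    grow as the inverse of the least-squares residual norm
    ||beta e_1 - H_k y_k||, capped so the perturbation stays bounded.
    This is an illustrative sketch, not the exact rule from [7]."""
    return min(delta_max, delta / ls_residual_norm)
```

As the least-squares residual decreases below the target, the admissible rounding error grows, which reduces the TT ranks of the later Krylov vectors.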
Among these gaps, the analysis of the other roundings performed in lines 9 and 13 of Algorithm 3 is still missing; these theoretical pieces will be the subject of future work. 4.2. GMRES in tensor format versus GMRES and block GMRES in matrix computation. This section investigates the relationship between the iterates computed by GMRES applied in a $(d+1)$-mode tensor space to solve all the $d$-mode right-hand sides at once and the iterates computed by GMRES applied to each $d$-mode right-hand side individually. Specifically, the iterates computed by the two approaches belong to the same Krylov space but are characterized by different optimality conditions for the residual norm minimization. The presentation uses classical $\mathbb{R}^n$ vector spaces to keep the notation simple; a rigorous description of the underlying principle in tensor spaces would make the notation very heavy and might obscure the ideas. The problem of solving a linear system $AX = B$, where $A$, $X$ and $B$ are matrices of compatible dimensions such that $X = [x^{[1]}, \ldots, x^{[p]}]$ and $B = [b^{[1]}, \ldots, b^{[p]}]$, can be recast via the Kronecker product in a tensor-like structure as $$(I_p \otimes A) \begin{bmatrix} x^{[1]} \\ \vdots \\ x^{[p]} \end{bmatrix} = \begin{bmatrix} b^{[1]} \\ \vdots \\ b^{[p]} \end{bmatrix}.$$ Based on this structure of the linear operator, we can observe that the individual iterate satisfies $x_k^{[i]} \in \mathcal{K}_k(A, b^{[i]})$ and that the residual norm associated with the global iterate is $$\|r_k\| = \left( \sum_{i=1}^p \|r_k^{[i]}\|^2 \right)^{1/2},$$ where $r_k^{[i]}$ is the residual associated with the individual iterate extracted from the global iterate. The coordinates in the Krylov basis are the same for all the individual iterates, and these iterates jointly minimize the sum of the squares of the individual residual norms. If GMRES is run for each right-hand side separately, the iterate at step $k$ also belongs to $\mathcal{K}_k(A, b^{[i]})$, but it minimizes only its own residual norm, which is consequently lower than or equal to the corresponding one in the tensor case. Another option is to use a block GMRES algorithm defined on $d$-mode tensors. As already mentioned in Section 3.2, at step $k$ each individual residual norm is then minimized over the sum of the individual Krylov spaces. That is the dual of the situation in the $(d + 1)$-mode computation, where each solution is sought in its own Krylov space while the sum of the squares of all residual norms is minimized. 4.3. Backward error bounds for parametric operators. The purpose of the following propositions is to examine the relationship between the backward error of the “all-in-one” system solution and that of the extracted individual ones. The equalities provided for the “all-in-one” system are true when the tensor and the tensor operators are given in full format, but they also hold in TT format. For further details on the “all-in-one” construction in TT format, refer to [5, Appendix C]. The bounds that will be proven allow us to adjust the convergence threshold when solving for multiple parameters, while ensuring a specific quality for the individual extracted solutions. Specifically, the bound presented in equation (4.1) of Proposition 4.1 indicates that if a certain accuracy $\varepsilon$ is expected for the extracted individual solution in terms of the backward error in (3.2), a more stringent convergence threshold should be used for the “all-in-one” system solution.
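The $\sqrt{p}$ factor in the bound below can be checked numerically in the matrix case ($d = 1$); the following toy sketch, with our own random block-diagonal data, compares the “all-in-one” backward error $\eta_b$ with the per-slice ones.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 8
A_blocks = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(p)]
bs = [b / np.linalg.norm(b)
      for b in rng.standard_normal((p, n))]          # ||b_l|| = 1

x = rng.standard_normal(p * n)                       # any approximate iterate
r = np.concatenate([A @ x[l*n:(l+1)*n] - b
                    for l, (A, b) in enumerate(zip(A_blocks, bs))])

eta_all = np.linalg.norm(r) / np.sqrt(p)             # eta_b(x): ||b|| = sqrt(p)
eta_slice = [np.linalg.norm(r[l*n:(l+1)*n]) for l in range(p)]
# Proposition 4.1: sqrt(p) * eta_b(x) dominates every per-slice eta_{b_l}.
print(all(np.sqrt(p) * eta_all >= e - 1e-12 for e in eta_slice))  # True
```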
Specifically, this “all-in-one” threshold should be set to $\varepsilon / \sqrt{p}$. **Proposition 4.1.** Given the “all-in-one” operator $\mathbf{A} \in \mathbb{R}^{(p \times p) \times (n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$ and the right-hand side $\mathbf{b} \in \mathbb{R}^{p \times n_1 \times \cdots \times n_d}$, as defined in equations (2.3) and (2.4), we consider the “all-in-one” system $$\mathbf{A}\mathbf{x} = \mathbf{b}.$$ Let $\mathbf{A}_\ell \in \mathbb{R}^{(n_1 \times n_1) \times \cdots \times (n_d \times n_d)}$ be the tensor operator as in equation (2.5) and let $\mathbf{b}_\ell \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ be a tensor such that $\|\mathbf{b}_\ell\| = 1$, which define the individual linear systems $$\mathbf{A}_\ell \mathbf{y}_\ell = \mathbf{b}_\ell,$$ where $\mathbf{A}_\ell = \mathbf{A}^{[\ell]}$ and $\mathbf{b}_\ell = \mathbf{b}^{[\ell]}$ for every $\ell \in \{1, \ldots, p\}$. If $\mathbf{x}_k$ represents the “all-in-one” iterate, we have $$\eta_{\mathbf{b}}(\mathbf{x}_k)\sqrt{p} \geq \eta_{\mathbf{b}_\ell}(\mathbf{x}_k^{[\ell]}) \tag{4.1}$$ for $\ell \in \{1, \ldots, p\}$. **Proof.** For the sake of simplicity, we work with $\eta_{\mathbf{b}}$ and $\eta_{\mathbf{b}_\ell}$ squared throughout the proof and drop the subscript of the $k$th “all-in-one” iterate. The quantity $\eta_{\mathbf{b}_\ell}^2(\mathbf{x}^{[\ell]})$ is explicitly expressed as $$\eta_{\mathbf{b}_\ell}^2(\mathbf{x}^{[\ell]}) = \frac{\|\mathbf{A}_\ell \mathbf{x}^{[\ell]} - \mathbf{b}_\ell\|^2}{\|\mathbf{b}_\ell\|^2},$$ while $\eta_{\mathbf{b}}^2(\mathbf{x})$ is written as $$\eta_{\mathbf{b}}^2(\mathbf{x}) = \frac{\|\mathbf{A}\mathbf{x} - \mathbf{b}\|^2}{\|\mathbf{b}\|^2}. \tag{4.2}$$ Owing to the diagonal structure of $\mathbf{A}$ and the definition of the Frobenius norm, equation (4.2) can be simplified to $$\eta_{\mathbf{b}}^2(\mathbf{x}) = \frac{\sum_{\ell=1}^p \| (\mathbf{A}\mathbf{x} - \mathbf{b})^{[\ell]} \|^2}{\sum_{k=1}^p \| \mathbf{b}^{[k]} \|^2} = \frac{\sum_{\ell=1}^p \| \mathbf{A}_\ell \mathbf{x}^{[\ell]} - \mathbf{b}_\ell \|^2}{\sum_{k=1}^p \| \mathbf{b}_k \|^2} = \frac{\sum_{\ell=1}^p \eta_{\mathbf{b}_\ell}^2(\mathbf{x}^{[\ell]})}{p},$$ since \( \|\mathbf{b}\|^2 = \sum_{k=1}^{p} \|\mathbf{b}_k\|^2 = p \). Hence $p\,\eta_{\mathbf{b}}^2(\mathbf{x}) = \sum_{\ell=1}^p \eta_{\mathbf{b}_\ell}^2(\mathbf{x}^{[\ell]}) \geq \eta_{\mathbf{b}_\ell}^2(\mathbf{x}^{[\ell]})$ for each $\ell$, and taking the square root yields the desired result. For the backward error based on perturbations of both the linear operator and the right-hand side, defined by equation (3.1), a similar result can be derived. **Proposition 4.2.** Under the hypotheses and notation of Proposition 4.1, consider $\eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}_k)$ and $\eta_{\mathbf{A}_\ell,\mathbf{b}_\ell}(\mathbf{x}_k^{[\ell]})$ associated with the linear systems \( \mathbf{A}\mathbf{x} = \mathbf{b} \) and \( \mathbf{A}_\ell \mathbf{y}_\ell = \mathbf{b}_\ell \), respectively, for every \( \ell \in \{1, \ldots, p\} \). Then we have \[ \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}_k)\, \rho_\ell(\mathbf{x}_k) \geq \eta_{\mathbf{A}_\ell,\mathbf{b}_\ell}(\mathbf{x}_k^{[\ell]}) \quad \text{where} \quad \rho_\ell(\mathbf{x}_k) = \frac{\|\mathbf{A}\|_2 \|\mathbf{x}_k\| + \sqrt{p}}{\|\mathbf{A}_\ell \mathbf{x}_k^{[\ell]}\| + 1}, \tag{4.3} \] with \( \mathbf{x}_k \) being the \( k \)th “all-in-one” iterate and \( \mathbf{x}^{[\ell]}_k \) being its \( \ell \)th slice with respect to mode 1. **Proof.** The subscript of the \( k \)th “all-in-one” iterate is dropped for simplicity. The backward error \( \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}) \) is explicitly written as \[ \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}) = \frac{\|\mathbf{A}\mathbf{x} - \mathbf{b}\|}{\|\mathbf{A}\|_2 \|\mathbf{x}\| + \|\mathbf{b}\|}. \] Multiplying and dividing by \( \eta_{\mathbf{b}}(\mathbf{x}) \) yields \[ \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}) = \frac{\|\mathbf{A}\mathbf{x} - \mathbf{b}\|}{\|\mathbf{A}\|_2 \|\mathbf{x}\| + \|\mathbf{b}\|} \frac{\eta_{\mathbf{b}}(\mathbf{x})}{\eta_{\mathbf{b}}(\mathbf{x})} = \frac{\|\mathbf{b}\|}{\|\mathbf{A}\|_2 \|\mathbf{x}\| + \|\mathbf{b}\|}\, \eta_{\mathbf{b}}(\mathbf{x}) = \frac{\sqrt{p}}{\|\mathbf{A}\|_2 \|\mathbf{x}\| + \sqrt{p}}\, \eta_{\mathbf{b}}(\mathbf{x}) \tag{4.4} \] according to the definition of \( \eta_{\mathbf{b}}(\mathbf{x}) \) and since \( \|\mathbf{b}\| = \sqrt{p} \). Similarly, \( \eta_{\mathbf{A}_\ell,\mathbf{b}_\ell}(\mathbf{x}^{[\ell]}) \) is expressed in terms of \( \eta_{\mathbf{b}_\ell}(\mathbf{x}^{[\ell]}) \) as \[ \eta_{\mathbf{A}_\ell,\mathbf{b}_\ell}(\mathbf{x}^{[\ell]}) = \frac{\|\mathbf{b}_\ell\|}{\|\mathbf{A}_\ell\|_2 \|\mathbf{x}^{[\ell]}\| + \|\mathbf{b}_\ell\|}\, \eta_{\mathbf{b}_\ell}(\mathbf{x}^{[\ell]}) = \frac{1}{\|\mathbf{A}_\ell\|_2 \|\mathbf{x}^{[\ell]}\| + 1}\, \eta_{\mathbf{b}_\ell}(\mathbf{x}^{[\ell]}) \tag{4.5} \] since \( \|\mathbf{b}_\ell\| = 1 \).
By multiplying each side of equation (4.4) by \( (\|\mathbf{A}\|_2 \|\mathbf{x}\| + \sqrt{p}) \), it follows that \[ (\|\mathbf{A}\|_2 \|\mathbf{x}\| + \sqrt{p})\, \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}) = \eta_{\mathbf{b}}(\mathbf{x}) \sqrt{p}. \tag{4.6} \] Owing to the result of Proposition 4.1, we have \[ (\|\mathbf{A}\|_2 \|\mathbf{x}\| + \sqrt{p})\, \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}) = \eta_{\mathbf{b}}(\mathbf{x}) \sqrt{p} \geq \eta_{\mathbf{b}_\ell}(\mathbf{x}^{[\ell]}) = (\|\mathbf{A}_\ell\|_2 \|\mathbf{x}^{[\ell]}\| + 1)\, \eta_{\mathbf{A}_\ell,\mathbf{b}_\ell}(\mathbf{x}^{[\ell]}) \] from equation (4.5). Dividing both sides of equation (4.6) by \( \|\mathbf{A}_\ell\|_2 \|\mathbf{x}^{[\ell]}\| + 1 \), we obtain \[ \frac{\|\mathbf{A}\|_2 \|\mathbf{x}\| + \sqrt{p}}{\|\mathbf{A}_\ell \mathbf{x}^{[\ell]}\| + 1}\, \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}) \geq \eta_{\mathbf{A}_\ell,\mathbf{b}_\ell}(\mathbf{x}^{[\ell]}), \] because \( \|\mathbf{A}_\ell\|_2 \|\mathbf{x}^{[\ell]}\| \geq \|\mathbf{A}_\ell \mathbf{x}^{[\ell]}\| \) according to the definition of the L2 norm. The calculation of \( \rho_\ell(\mathbf{x}_k) \) in equation (4.3) requires some extra computation, namely the norm \( \|\mathbf{A}_\ell \mathbf{x}_k^{[\ell]}\| \). **Corollary 4.3.** Let \( \{\mathbf{x}_k\}_{k \in \mathbb{N}} \) be a sequence of iterative solutions and \( \nu \) a real value. If there exists a \( k^*_\ell \in \mathbb{N} \) such that \( |\|\mathbf{A}_\ell \mathbf{x}^{[\ell]}_k\| - 1| \leq \nu \) for every \( k \geq k^*_\ell \), then \[ \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}_k)\, \rho^*(\mathbf{x}_k) \geq \eta_{\mathbf{A}_\ell,\mathbf{b}_\ell}(\mathbf{x}_k^{[\ell]}) \quad \text{where} \quad \rho^*(\mathbf{x}_k) = \frac{\|\mathbf{A}\|_2 \|\mathbf{x}_k\| + \sqrt{p}}{2 - \nu} \tag{4.7} \] for every \( \ell \in \{1, \ldots, p\} \) and for every \( k \in \mathbb{N} \) such that \( k \geq k^{**} \), where \( k^{**} = \max_\ell k^*_\ell \). This corollary provides a bound depending only on the “all-in-one” iterative solution, and it holds for all the \( d \)-dimensional problems. 4.4. Backward error bounds for parametric right-hand sides. If the initial guess \( \mathbf{x}_0 \in \mathbb{R}^{p \times n_1 \times \cdots \times n_d} \) is the null tensor and \( \mathbf{b} \) is defined as in (2.4), then at the \( k \)th iteration TT-GMRES minimizes with respect to \( \mathbf{x}_k \) the norm of the residual \( \mathbf{r}_k = \mathbf{A}\mathbf{x}_k - \mathbf{b} \) over the space \[ \mathcal{K}_k(\mathbf{A}, \mathbf{b}) = \text{span}\{\mathbf{b}, \mathbf{A}\mathbf{b}, \mathbf{A}^2\mathbf{b}, \ldots, \mathbf{A}^{k-1}\mathbf{b}\}. \] In other words, we seek a tensor \( \mathbf{x}_k \in \mathcal{K}_k(\mathbf{A}, \mathbf{b}) \) such that \[ \mathbf{x}_k = \argmin_{\mathbf{x} \in \mathcal{K}_k(\mathbf{A}, \mathbf{b})} \|\mathbf{A}\mathbf{x} - \mathbf{b}\|. \] The Frobenius norm of \( \mathbf{r}_k = \mathbf{A}\mathbf{x}_k - \mathbf{b} \), due to the diagonal structure of \( \mathbf{A} \) defined by (2.7), is naturally written as follows: \[ \|\mathbf{r}_k\|^2 = \sum_{\ell=1}^{p} \|\mathbf{b}_\ell - \mathbf{A}_0 \mathbf{x}_k^{[\ell]}\|^2. \tag{4.8} \] **Proposition 4.4.** Given the “all-in-one” operator \( \mathbf{A} \in \mathbb{R}^{(p \times p) \times (n_1 \times n_1) \times \cdots \times (n_d \times n_d)} \) and the right-hand side \( \mathbf{b} \in \mathbb{R}^{p \times n_1 \times \cdots \times n_d} \), as defined in equations (2.7) and (2.4), we consider the “all-in-one” system \[ \mathbf{A}\mathbf{x} = \mathbf{b}. \] Let \( \mathbf{b}_\ell \in \mathbb{R}^{n_1 \times \cdots \times n_d} \) be a tensor such that \( \|\mathbf{b}_\ell\| = 1 \), which defines the individual linear systems \[ \mathbf{A}_0 \mathbf{y}_\ell = \mathbf{b}_\ell, \] where \( \mathbf{b}_\ell = \mathbf{b}^{[\ell]} \) for every \( \ell \in \{1, \ldots, p\} \). If \( \mathbf{x}_k \) represents the “all-in-one” iterate, we have \[ \eta_{\mathbf{b}}(\mathbf{x}_k) \sqrt{p} \geq \eta_{\mathbf{b}_\ell}(\mathbf{x}_k^{[\ell]}) \] for \( \ell \in \{1, \ldots, p\} \). **Proof.** The proof is based on equation (4.8), using arguments similar to those in the proof of Proposition 4.1. Similarly to Proposition 4.2, an informative bound of lower practical interest can be derived.
**Proposition 4.5.** Under the hypotheses of Proposition 4.2, if \( \mathbf{A} = \mathbb{I}_p \otimes \mathbf{A}_0 \), then for \( \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}) \) and \( \eta_{\mathbf{A}_\ell,\mathbf{b}_\ell}(\mathbf{x}^{[\ell]}) \) associated with the linear systems \( \mathbf{A}\mathbf{x} = \mathbf{b} \) and \( \mathbf{A}_0 \mathbf{y}_\ell = \mathbf{b}_\ell \), respectively, the following inequality holds: \[ \eta_{\mathbf{A},\mathbf{b}}(\mathbf{x}_k)\, \psi_\ell(\mathbf{x}_k) \geq \eta_{\mathbf{A}_\ell,\mathbf{b}_\ell}(\mathbf{x}_k^{[\ell]}) \quad \text{where} \quad \psi_\ell(\mathbf{x}_k) = \frac{\|\mathbf{A}_0\|_2 \|\mathbf{x}_k\| + \sqrt{p}}{\|\mathbf{A}_0 \mathbf{x}_k^{[\ell]}\| + 1} \] for every \( \ell \in \{1, \ldots, p\} \). **Proof.** This result follows from Proposition 4.2, since \( \|\mathbf{A}\|_2 = \|\mathbf{A}_0\|_2 \). The bound of Corollary 4.3 remains valid in the multiple right-hand-side setting described in this section: the statement does not depend on the fact that the same operator is repeated on the diagonal. 5. Numerical experiments. This section investigates the numerical behavior of the TT-GMRES solver for linear problems of increasing dimension, as they naturally arise in some partial differential equation (PDE) studies. The TT operators of our numerical examples are constructed directly in TT format, thanks to their particular structure. We present numerical aspects related to the convergence of the algorithm and to the computational cost, with a focus on memory growth and memory savings. All experiments were conducted using Python 3.6.9 and the tensor toolbox ttpy 1.2.0 [23]. The problems we address involve Laplace-like operators. A Laplace-like tensor operator, $A \in \mathbb{R}^{(n_1 \times m_1) \times \cdots \times (n_d \times m_d)}$, is a sum of operators written as $$A = M_1 \otimes R_2 \otimes R_3 \otimes \cdots \otimes R_{d-2} \otimes R_{d-1} \otimes R_d$$ $$+ L_1 \otimes M_2 \otimes R_3 \otimes \cdots \otimes R_{d-2} \otimes R_{d-1} \otimes R_d$$ $$+ \cdots + L_1 \otimes L_2 \otimes L_3 \otimes \cdots \otimes L_{d-2} \otimes M_{d-1} \otimes R_d$$ $$+ L_1 \otimes L_2 \otimes L_3 \otimes \cdots \otimes L_{d-2} \otimes L_{d-1} \otimes M_d,$$ where $L_k, M_k, R_k \in \mathbb{R}^{n_k \times m_k}$ for every $k \in \{1, \ldots, d\}$. These linear operators are expressed in TT format with TT rank 2, that is, $$A = \begin{bmatrix} L_1 & M_1 \end{bmatrix} \otimes \begin{bmatrix} L_2 & M_2 \\ 0 & R_2 \end{bmatrix} \otimes \cdots \otimes \begin{bmatrix} L_{d-1} & M_{d-1} \\ 0 & R_{d-1} \end{bmatrix} \otimes \begin{bmatrix} M_d \\ R_d \end{bmatrix},$$ as proven in [18, Lemma 5.1]. The expression for the discrete $d$-dimensional Laplacian on a uniform grid of $n$ points in each direction is $$\Delta_d = \Delta_1 \otimes I_n \otimes \cdots \otimes I_n + \cdots + I_n \otimes I_n \otimes \cdots \otimes \Delta_1,$$ where $I_n$ is the identity matrix of size $n$, and $\Delta_1 \in \mathbb{R}^{n \times n}$ is the discrete one-dimensional Laplacian from the central-point finite difference scheme with discretization step $h = 1/(n + 1)$, that is, $$\Delta_1 = \frac{1}{h^2} \begin{bmatrix} -2 & 1 & 0 & \cdots & 0 \\ 1 & -2 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 1 & -2 & 1 \\ 0 & 0 & \cdots & 1 & -2 \end{bmatrix}.$$ The TT expression of $\Delta_d$ is $$\Delta_d = \begin{bmatrix} I_n & \Delta_1 \end{bmatrix} \otimes \begin{bmatrix} I_n & \Delta_1 \\ 0 & I_n \end{bmatrix} \otimes \cdots \otimes \begin{bmatrix} I_n & \Delta_1 \\ 0 & I_n \end{bmatrix} \otimes \begin{bmatrix} \Delta_1 \\ I_n \end{bmatrix}. \tag{5.1}$$ To efficiently solve linear systems, we use an approximation of the inverse of the discrete Laplacian operator, $M$, as a preconditioner [13, 14].
This operator can be written as $$M = \sum_{k=-q}^{q} c_k \exp(-t_k \Delta_1) \otimes \cdots \otimes \exp(-t_k \Delta_1), \tag{5.2}$$ where $c_k = \xi t_k$, $t_k = \exp(k\xi)$ and $\xi = \pi/q$. The TT ranks of $M$ are at most $2q + 1$, by the previously stated property of sums of TT tensors. To examine the primary numerical characteristics of the TT-GMRES implementations discussed in the previous sections, we analyze the classical convection–diffusion equation, the same as that examined in [7], which is expressed as \begin{equation} \begin{cases} -\Delta u + 2y(1 - x^2) \frac{\partial u}{\partial x} - 2x(1 - y^2) \frac{\partial u}{\partial y} = 0 & \text{in} \quad \Omega = [-1, 1]^3, \\ u_{\{y=1\}} = 1 & \text{and} \quad u_{\partial \Omega \setminus \{y=1\}} = 0. \end{cases} \tag{5.3} \end{equation} We set a grid of $n$ points per mode over $[-1, 1]^3$ and discretize the Laplacian as in equation (5.1) with $d = 3$. The discretization of the first derivative of $u$ with respect to mode 1, $\nabla_x$, is defined as $\nabla_x = \nabla_1 \otimes \mathbb{I}_n \otimes \mathbb{I}_n$. Similarly, the discrete first derivative with respect to mode 2, $\nabla_y$, is written as $\nabla_y = \mathbb{I}_n \otimes \nabla_1 \otimes \mathbb{I}_n$, where $\nabla_1$ is the order-two central finite difference matrix, i.e., $$\nabla_1 = \frac{1}{2h} \begin{bmatrix} 0 & 1 & 0 & \ldots & 0 \\ -1 & 0 & 1 & \ldots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \ldots & -1 & 0 & 1 \\ 0 & 0 & \ldots & -1 & 0 \end{bmatrix}.$$ Let $v : [-1, 1]^3 \to \mathbb{R}^2$ be the function such that $v(x, y, z) = (2y(1 - x^2), -2x(1 - y^2))$. The components of $v$ are discretized over the Cartesian grid set on $[-1, 1]^3$, defining two tensor operators $V_1, V_2 \in \mathbb{R}^{(n \times n) \times (n \times n) \times (n \times n)}$ such that $V_1 = \text{diag}(1 - x^2) \otimes \text{diag}(2y) \otimes \mathbb{I}_n$ and $V_2 = \text{diag}(-2x) \otimes \text{diag}(1 - y^2) \otimes \mathbb{I}_n$. The discretized convection term $D$ is then expressed as \begin{align} D &= V_1 \bullet \nabla_x + V_2 \bullet \nabla_y \notag \\ &= \text{diag}(1 - x^2) \nabla_1 \otimes \text{diag}(2y) \otimes \mathbb{I}_n + \text{diag}(-2x) \otimes \text{diag}(1 - y^2) \nabla_1 \otimes \mathbb{I}_n. \tag{5.4} \end{align} The operator passed to the TT-GMRES algorithm is $A = -\Delta_3 + D$. The right-hand side is represented by the TT vector $b \in \mathbb{R}^{n \times n \times n}$ and the initial guess is the zero TT vector $x_0$.
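For concreteness, the sketch below assembles $A = -\Delta_3 + D$ densely via Kronecker products for a small $n$; it is our own illustration (mode-ordering and boundary conventions are ours), whereas the experiments build these operators directly in TT format.

```python
import numpy as np

n = 7                                     # small grid for illustration only
h = 2.0 / (n + 1)                         # n interior points on [-1, 1]
pts = np.linspace(-1 + h, 1 - h, n)       # interior grid points
I = np.eye(n)

# 1D central-difference first derivative and Laplacian.
D1 = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
L1 = (np.diag(np.ones(n - 1), 1) - 2 * I
      + np.diag(np.ones(n - 1), -1)) / h**2

# Kronecker-sum Laplacian and the convection term of equation (5.4).
lap3 = (np.kron(np.kron(L1, I), I) + np.kron(np.kron(I, L1), I)
        + np.kron(np.kron(I, I), L1))
conv = (np.kron(np.kron(np.diag(1 - pts**2) @ D1, np.diag(2 * pts)), I)
        + np.kron(np.kron(np.diag(-2 * pts), np.diag(1 - pts**2) @ D1), I))

A = -lap3 + conv                          # operator passed to the solver
print(A.shape)                            # (n**3, n**3)
```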
To ensure rapid convergence, we use the right preconditioner $M$ from equation (5.2) for this test example, as in [7]. The preconditioner TT matrix $M$ is computed with a number of addends $q$ approximately equal to a quarter of the number of grid points per mode. To keep the TT rank of the preconditioner small, we round it with accuracy $10^{-2}$. The choice of the number of addends and of the TT rounding compression is further discussed in [5, Appendix A]. When a right preconditioner is used, TT-GMRES actually solves the linear system $AMt = b$. To evaluate the convergence of the right-preconditioned TT-GMRES, we display the convergence history of $\eta_{AM,b}$, defined as $$\eta_{AM,b}(t_k) = \frac{\|AMt_k - b\|}{\|AM\|_2 \|t_k\| + \|b\|},$$ with $t_k$ being the preconditioned approximate solution at the $k$th iteration. The norm of the residual, the norm of the right-hand side, and the norm of the iterative preconditioned approximate solution are computed explicitly, while the L2 norm of the preconditioned operator $AM$ is estimated by sampling. Let $W$ be a set of normalized TT vectors, randomly generated from a normal distribution. A lower bound for $\|AM\|_2$ is obtained as the maximum norm of the image of the elements of $W$ under $AM$, i.e., $$\tau_{AM} = \max_{w \in W} \|AMw\| \leq \|AM\|_2.$$ As a consequence, the backward error is estimated by the upper bound \[ \eta_{AM,b}(t_k) \leq \frac{\|AMt_k - b\|}{\tau_{AM} \|t_k\| + \|b\|}. \] In the following numerical experiments, we report this quantity, where \( \tau_{AM} \) is computed using 10 elements of \( W \). 5.1. Main features and robustness properties. A link between the TT-GMRES variant proposed in [7] and inexact TT-GMRES is established in Section 5.1.1. It is shown that a robust stopping criterion based on the backward error with perturbations of both the linear operator and the right-hand side is suitable for the inexact TT-GMRES algorithm. Additionally, the backward stability of inexact TT-GMRES is experimentally investigated in Section 5.1.2. 5.1.1. Comparison of inexact GMRES and classical GMRES in TT format. This section compares the numerical behavior of TT-GMRES with constant TT rounding accuracy, as described in Algorithm 3, and its inexact variant introduced in [7]. In the inexact variant, the TT rounding threshold at step 4 of Algorithm 3 is increased as \( \|\tilde{r}_k\|^{-1} \), where \( \|\tilde{r}_k\| = \|\beta e_1 - \bar{H}_k y_k\| \). The numerical behavior is evaluated through the convergence history of the norm-wise backward error of the preconditioned system, \( \eta_{AM,b}(t_k) \), as defined by (3.1). Figure 5.1 displays the convergence history of the norm-wise backward error for TT-GMRES with constant TT rounding accuracy and for the inexact TT-GMRES with varying TT rounding accuracy (referred to as “inexact” in the legend of the curves). We consider an initial rounding accuracy \( \delta \in \{10^{-3}, 10^{-5}, 10^{-8}\} \) and perform 50 iterations of full GMRES (i.e., without restart). The test example is the 3D convection–diffusion problem with \( n = 63 \) discretization points in each mode, with preconditioner \( M \) from equation (5.2) with \( q \in \{16, 32\} \). **Fig. 5.1.** Convergence history of \( \eta_{AM,b} \) for TT-GMRES and inexact TT-GMRES applied to the 3D convection–diffusion problem with \( n = 63 \). The primary observation is that TT-GMRES and its inexact variant exhibit very similar convergence behavior, since all convergence histories of \( \eta_{AM,b} \) overlap. Upon examining the convergence history of \( \eta_{AM,b} \), we observe that TT-GMRES with constant TT rounding accuracy inherits the backward stability property of GMRES in matrix computation. Specifically, for each value of \( \delta \), the backward error \( \eta_{AM,b}(t_k) \) decreases and stagnates around \( \delta \). If $\delta$ is the TT rounding accuracy and $t_k$ the GMRES solution at iteration $k$, then $\eta_{AM,b}(t_k)$ is $O(\delta)$, since $\delta$ dominates the TT rounding error that occurs during the numerical calculation. Therefore, assuming $\delta \approx \varepsilon$, TT-GMRES can ensure a $\delta$ backward stable solution. Inexact TT-GMRES also succeeds in reducing the backward error to a value close to $\delta$, indicating that it might also be backward stable.
The main advantage of the inexact approach is demonstrated in Figure 5.2a, where increasing the TT rounding threshold throughout the iterations results in a significant decrease of the maximum TT rank of the Arnoldi basis vectors (i.e., of the memory footprint). As illustrated in Figure 5.2b, the iterative solutions obtained from the two TT-GMRES variants have the same TT rank at each iteration. It is important to note that the TT rank is displayed as a dashed line once the TT-GMRES variants have reached their attainable accuracy. Finally, Figure 5.2c illustrates the evolution of $\delta_k$ during the iterations and highlights the significant difference between the initial value of the TT rounding accuracy and its final one. **Fig. 5.2.** Memory requirements of TT-GMRES and relaxed TT-GMRES applied to the 3D convection–diffusion problem with $n = 63$. In the next section, we evaluate some other numerical properties of inexact TT-GMRES, which mirror those known and theorized for classical GMRES in matrix computation. 5.1.2. Inexact TT-GMRES backward stability: an experimental illustration. As already stated, GMRES is backward stable in matrix computation [24]. Specifically, it is known that when the condition number of the Arnoldi basis, $\kappa(V_k)$, exceeds $4/3$, the backward error $\eta_{A,b}$ is close to the machine precision of the working arithmetic. We demonstrate numerically that this property also applies to inexact TT-GMRES. Let $\mathcal{V}_k = \{v_1, \ldots, v_k\}$ be the set of TT vectors of the Arnoldi TT basis. The condition number of $\mathcal{V}_k$, $\kappa(\mathcal{V}_k)$, is computed as the condition number of the $R$ factor of the MGS-QR factorization of $\mathcal{V}_k$; refer to [4] for a description of the MGS-QR factorization of a set of TT vectors. We test three different grid dimensions for the 3D convection–diffusion problem, namely $n \in \{63, 127, 255\}$, with preconditioner $M$ from equation (5.2) with $q \in \{16, 32\}$ and a single TT rounding threshold. The convergence history of $\eta_{AM,b}$ is shown in Figure 5.3. The horizontal dashed-dotted black line represents the initial TT rounding accuracy $\delta$, and the vertical dashed blue line indicates the iteration where $\kappa(\mathcal{V}_k)$ becomes larger than $4/3$. Let $V_k^T V_k$ denote the Gram matrix associated with the Arnoldi basis set $\mathcal{V}_k$. The loss of orthogonality of the Arnoldi basis, computed as $\|I_k - V_k^T V_k\|$, is displayed as a dashed green curve in Figure 5.3. Similarly to the theoretical matrix computation result for GMRES, the backward error $\eta_{AM,b}$ of inexact TT-GMRES reaches an attainable accuracy of $\mathcal{O}(\delta)$ when $\kappa(\mathcal{V}_k) \geq 4/3$ in all three examples. This shows that $\eta_{AM,b} \leq \delta$ is a robust stopping criterion for inexact TT-GMRES. **Fig. 5.3.** Convergence history of $\eta_{AM,b}$ versus loss of orthogonality for the 3D convection–diffusion problem using $\delta = 10^{-5}$: (a) $n = 63$; (b) $n = 127$; (c) $n = 255$. Finally, we illustrate once again the memory benefits of the inexact TT-GMRES variant in Figure 5.4. Figure 5.4a shows the maximum TT rank of the last Krylov vector in the basis, while Figure 5.4b shows the memory gain compared to storing the entire Arnoldi basis in full tensor format. In the latter plot, for the largest example (i.e., $n = 255$), less than 0.03% of the memory required for a full-tensor GMRES computation is necessary when using the inexact TT-GMRES.
This illustrates that the curse of dimensionality can be overcome by such a linear solver. **Fig. 5.4.** The 3D convection–diffusion problem using $\delta = 10^{-5}$: (a) maximal TT rank of the last Krylov vector; (b) compression ratio for the entire Krylov basis. We consider only the inexact TT-GMRES variant in the remaining experiments reported in this paper, since this variant experimentally shows numerical behavior similar to that of TT-GMRES, with remarkable memory advantages. 5.2. Solution of parameter-dependent linear operators. This section focuses on a four-dimensional PDE, namely a parametric convection–diffusion problem. The domain of the problem is obtained as the Cartesian product of a three-dimensional space domain and an additional parameter space. The main idea is to solve for all discrete parameter values simultaneously, resulting in an “all-in-one” solution. The structure of the operator allows for a numerical evaluation of the theoretical bounds stated in Section 4.3. The parametric convection–diffusion problem is defined as \[ \begin{cases} -\alpha \Delta u + 2y(1-x^2) \frac{\partial u}{\partial x} - 2x(1-y^2) \frac{\partial u}{\partial y} = 0 & \text{in } \Omega = [-1, 1]^3, \\ u_{\{y=1\}} = 1 & \text{and } u_{\partial \Omega \setminus \{y=1\}} = 0. \end{cases} \] If a grid of \( n \) points along each direction of \( \Omega \) is defined, the final discrete operator of this PDE is \( A_\alpha = -\alpha \Delta_3 + D \), where \( \alpha \in [1, 10] \) and \( D \) is defined in equation (5.4). Similarly, the right-hand side \( c_\alpha \in \mathbb{R}^{n \times n \times n} \) depends on the parameter \( \alpha \in [1, 10] \) through the boundary conditions. To solve for multiple discrete values of \( \alpha \), we can tensorize \( \Delta_3 \) and \( D \) with diagonal matrices, adding a fourth dimension. This allows us to solve for all the parameter values simultaneously using the tensor operator \( \mathbf{A} \in \mathbb{R}^{(p \times p) \times (n \times n) \times (n \times n) \times (n \times n)} \) such that \[ \mathbf{A} = -\,\text{diag}(\alpha_1, \ldots, \alpha_p) \otimes \Delta_3 + \mathbb{I}_p \otimes D, \] with \( \alpha_i \in [1, 10] \) logarithmically distributed for \( i \in \{1, \ldots, p\} \). The “all-in-one” problem’s right-hand side is \( \mathbf{b} \in \mathbb{R}^{p \times n \times n \times n} \), where \[ \mathbf{b}^{[\ell]} = \frac{1}{\|c_{\alpha_\ell}\|} c_{\alpha_\ell} \quad \text{for } \ell \in \{1, \ldots, p\}, \] using the slice notation introduced in Section 2.2. By construction, \( \|\mathbf{b}\| = \sqrt{p} \), which implies that the discrete “all-in-one” problem fits the hypotheses of Propositions 4.1 and 4.2. Note that the “all-in-one” linear operator is constructed directly as a TT matrix from the TT matrix of the single linear system. On the other hand, the “all-in-one” right-hand side is first constructed as a full tensor and then converted into a TT vector. TT-GMRES is used to solve the “all-in-one” linear system for \( n \in \{63, 127, 255\} \) and \( p = 20 \). The preconditioner \( \overline{M} \), defined in equation (5.2) with \( q \in \{16, 32\} \), is tensorized with the identity, \[ M = \mathbb{I}_p \otimes \overline{M}. \tag{5.5} \] Figures 5.5a and 5.5b show the convergence history of \( \eta_{AM,b} \) and the loss of orthogonality for \( n = 127 \) and \( 255 \). The vertical dashed blue line indicates the iteration \( k \) such that \( \kappa_2(\mathcal{V}_k) \) is larger than \( 4/3 \), where \( \mathcal{V}_k \) is the set of the TT vectors of the Krylov basis.
Figure 5.5c displays the compression ratio for the entire Krylov basis. These findings are consistent with those presented in Section 5.1.2 and confirm the observations made there. In conclusion, inexact TT-GMRES appears to be \( \delta \) backward stable and enables substantial memory savings; the larger the problem, the greater the savings. First, we examine the tightness of the bound presented in Proposition 4.1. Figure 5.6 shows the convergence history of \( \eta_b \). The \( \eta_{b_1} \) curve dominates the others during the first half of the iterations for all values of \( n \). In the optimal case, the difference between \( \eta_{b_1} \) and \( \eta_b \) is less than one order of magnitude. Although the individual linear systems do not converge similarly, the bound is quite tight during convergence and only slightly more pessimistic once convergence is reached. It is also noticeable that the convergence history is monotonic for the “all-in-one” residual, as expected, but not for the individual ones. To plot the bound for $\eta_{AM,b}$ from Proposition 4.2, we define a vector $v_\ell \in \mathbb{R}^w$. The $k$th component of $v_\ell$ corresponds to the value of the coefficient $\rho_\ell$ from equation (4.3) evaluated at the solution of the $k$th iteration, i.e., $$v_\ell(k) = \rho_\ell(t_k) \quad \text{for every} \quad k \in \{1, \ldots, w\},$$ where $w$ is the number of iterations considered. Let $\ell_m$ and $\ell_M$ be the parameter indices for which the norm of $v_\ell$ is minimal and maximal, respectively, that is, $$\ell_m = \argmin_{\ell \in \{1, \ldots, p\}} \|v_\ell\| \quad \text{and} \quad \ell_M = \argmax_{\ell \in \{1, \ldots, p\}} \|v_\ell\|. \tag{5.6}$$ In our specific case, these indices are equal to 1 and 14, respectively. Figure 5.7 displays $\eta_{AM,b}(t_k)$ scaled by $\rho_\ell$ (see equation (4.3) from Proposition 4.2) and by $\rho^*$ (see equation (4.7) from Corollary 4.3) versus $\eta_{A_\ell M,b_\ell}(t_k^{[\ell]})$ for $\ell \in \{1, 14\}$ and for all the values of $n$. The three scaled curves overlap starting from the third iteration for all the grid dimensions, indicating that the scaling coefficient approximation given by $\rho^*$ is highly accurate in this example. The curves for $\eta_{A_1 M,b_1}$ and $\eta_{A_{14} M,b_{14}}$ frequently intersect, with a difference of at most one order of magnitude. Furthermore, the difference between $\eta_{A_\ell M,b_\ell}$ and $\eta_{AM,b}$ scaled by $\rho_\ell$ is less than one order of magnitude in the optimal case, and not larger than two in the worst case. Thus, we conclude that the “all-in-one” bound for the individual solution is quite tight for this PDE. Note that no extra computation is required to estimate $\rho^*$, while the norm of $A_\ell M t_k^{[\ell]}$ has to be computed to obtain the value of $\rho_\ell(t_k)$. **Fig. 5.7.** Convergence history of the $\eta_{AM,b}$ bound for the 4D convection–diffusion parametric operator using $\delta = 10^{-5}$. 5.3. Solution of parameter-dependent right-hand sides. This subsection illustrates the solution of multiple convection–diffusion problems (5.3) with different right-hand sides.
The discretization of the operator of equation (5.3) over a Cartesian grid of $n$ points per mode for the domain $\Omega = [-1, 1]^3$ is denoted by $A_0$. The right-hand-side discretization used in Section 5.1 is represented by $b \in \mathbb{R}^{n \times n \times n}$. The $\ell$th individual linear system is defined as $$A_0 u_\ell = b + e_\ell,$$ where $e_\ell \in \mathbb{R}^{n \times n \times n}$ is a realization of the normal distribution $\mathcal{N}(0, 1)$ for every $\ell \in \{1, \ldots, p\}$. To solve the $p$ problems simultaneously, we define the “all-in-one” tensor linear operator $\mathbf{A} \in \mathbb{R}^{(p \times p) \times (n \times n) \times (n \times n) \times (n \times n)}$ as $$\mathbf{A} = \mathbb{I}_p \otimes A_0,$$ while the “all-in-one” right-hand side is $\mathbf{c} \in \mathbb{R}^{p \times n \times n \times n}$ such that $$\mathbf{c}(\ell, i_1, i_2, i_3) = b(i_1, i_2, i_3) + e_\ell(i_1, i_2, i_3)$$ for every $i_k \in \{1, \ldots, n\}$ with $k \in \{1, 2, 3\}$, and every $\ell \in \{1, \ldots, p\}$. The problem is solved for $n \in \{63, 127\}$ and $p = 20$. The preconditioner stated in (5.5) with $q \in \{7, 10\}$ is used, and a small TT rank is imposed on $e_\ell$, resulting in a maximum TT rank of 11 for $\mathbf{c}$. Figure 5.8 displays results that confirm, on another example, the observations made in Section 5.1.2 regarding the backward stability and the memory savings of inexact TT-GMRES. Figure 5.9 illustrates the bound presented in Proposition 4.4 for $\eta_b$. Since all right-hand sides converge simultaneously, the bound is not very tight, and the gap is mostly due to the $\sqrt{p}$ factor. The convergence of the individual right-hand sides is monotonic, although this is not guaranteed by any theoretical argument. Figure 5.10 shows the bound described in Proposition 4.5. As in the previous section, we calculate $\ell_m$ and $\ell_M$ using equation (5.6) to determine which curves to plot. The resulting bound is quite tight during the first iterations and becomes slightly looser towards the end, differing by slightly less than one order of magnitude. As previously, the three scaled curves overlap from the second iteration. **Fig. 5.8.** Convergence history of $\eta_{AM,b}$ versus loss of orthogonality and compression ratio for the 4D multiple right-hand-sides convection–diffusion problem using $\delta = 10^{-5}$. **Fig. 5.9.** The 4D convection–diffusion problem $\eta_b$ bound using $\delta = 10^{-5}$ and $\varepsilon = 10^{-16}$. **Fig. 5.10.** The 4D multiple right-hand-sides convection–diffusion problem $\eta_{AM,b}$ bound using $\delta = 10^{-5}$ and $\varepsilon = 10^{-16}$. 6. Concluding remarks. This work addresses the efficient solution of linear systems with tensor product structure using a GMRES algorithm based on the low-rank Tensor Train representation. Focusing on mitigating the computational complexity, in terms of both computation and memory requirements, of high-dimensional linear systems, our contributions unfold along two key aspects. First, we establish a connection between GMRES in tensor format and its classical matrix counterpart, elucidating the relationship between inexact GMRES theory and a heuristic proposed for GMRES in Tensor Train format. Second, we provide backward error bounds that relate the accuracy of the \((d+1)\)-dimensional computed solution to the numerical quality of the sequence of \(d\)-dimensional solutions extracted from it.
This allows for the prescription of a convergence threshold for the \((d+1)\)-dimensional problem that ensures the desired numerical quality of the \(d\)-dimensional solutions upon convergence. Our results are substantiated by academic examples of different dimensions and sizes, which demonstrate the practical applicability and theoretical foundation of our approach. We especially emphasize the demonstrated effectiveness of inexact GMRES in the Tensor Train format. Numerically, we observe that it inherits the properties established for GMRES in the matrix case. Completely filling this gap by proving the \(\delta\) backward stability of inexact GMRES in Tensor Train format remains a direction for future research.

Furthermore, the inexact TT-GMRES algorithm still carries some intrinsic drawbacks. The use of an efficient preconditioner is crucial to reach the attainable accuracy quickly, as the memory requirement increases with the number of iterations. Therefore, the development of effective preconditioners for multilinear operators remains a challenging open question.

Finally, the theoretical and numerical examples presented in this work focus on the case of a low-rank TT operator that depends on a single parameter. The low-rank assumption is fundamental to ensure the applicability of iterative schemes such as TT-GMRES. If the considered low-rank operator depends linearly on multiple parameters, such as the stationary heat equation with a heat conductivity coefficient that is piecewise constant on several discs from [19], the backward error bounds presented in Sections 4.3 and 4.4 can be generalized. The generalization to multiple parameters is straightforward for \(\eta_b\), while more tedious computations are required for \(\eta_{\text{AM},b}\). In this framework, it is also possible to develop bounds where only certain parameters of interest are included as variables, while the remaining parameters are kept fixed. The study of these particular bounds could be the subject of future research.

Acknowledgements. The experiments presented in this paper were carried out using the PlaFRIM experimental testbed, supported by Inria, CNRS (LABRI and IMB), Université de Bordeaux, Bordeaux INP and Conseil Régional d’Aquitaine (see https://www.plafrim.fr).

REFERENCES

[1] E. Agullo, O. Coulaud, L. Giraud, M. Iannacito, G. Marait, and N. Schenkels, *The backward stable variants of GMRES in variable accuracy*, Tech. Report RR-9483, Inria Bordeaux Sud-Ouest, Bordeaux, 2022.
[2] J. Ballani and L. Grasedyck, *A projection method to solve linear systems in tensor format*, Numer. Linear Algebra Appl., 20 (2013), pp. 27–43.
[3] A. Bouras and V. Frayssé, *Inexact matrix–vector products in Krylov methods for solving linear systems: a relaxation strategy*, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 660–678.
[4] O. Coulaud, L. Giraud, and M. Iannacito, *On some orthogonalization schemes in tensor train format*, Tech. Report RR-9491, Inria Bordeaux Sud-Ouest, Bordeaux, 2022.
[5] ———, *A robust GMRES algorithm in tensor train format*, Tech. Report RR-9484, Inria Bordeaux Sud-Ouest, Bordeaux, 2022.
[6] L. De Lathauwer, B. De Moor, and J. Vandewalle, *A multilinear singular value decomposition*, SIAM J. Matrix Anal. Appl., 21 (2000), pp. 1253–1278.
[7] S. V. Dolgov, *TT-GMRES: solution to a linear system in the structured tensor format*, Russ. J. Numer. Anal. Math. Modelling, 28 (2013), pp. 149–172.
[8] S. V. Dolgov and D. V. Savostyanov, *Alternating minimal energy methods for linear systems in higher dimensions*, SIAM J. Sci.
Comput., 36 (2014), pp. A2248–A2271.
[9] P. Gelß, *The Tensor-Train Format and Its Applications*, Ph.D. thesis, Freie Universität Berlin, Berlin, 2017.
[10] L. Giraud, S. Gratton, and J. Langou, *Convergence in backward error of relaxed GMRES*, SIAM J. Sci. Comput., 29 (2007), pp. 710–728.
[11] L. Grasedyck, *Hierarchical singular value decomposition of tensors*, SIAM J. Matrix Anal. Appl., 31 (2009/10), pp. 2029–2054.
[12] A. Greenbaum, *Iterative Methods for Solving Linear Systems*, SIAM, Philadelphia, 1997.
[13] W. Hackbusch and B. N. Khoromskij, *Low-rank Kronecker-product approximation to multi-dimensional nonlocal operators. I. Separable approximation of multi-variate functions*, Computing, 76 (2006), pp. 177–202.
[14] ———, *Low-rank Kronecker-product approximation to multi-dimensional nonlocal operators. II. HKT representation of certain operators*, Computing, 76 (2006), pp. 203–225.
[15] N. J. Higham, *Accuracy and Stability of Numerical Algorithms*, 2nd ed., SIAM, Philadelphia, 2002.
[16] S. Holtz, T. Rohwedder, and R. Schneider, *The alternating linear scheme for tensor optimization in the tensor train format*, SIAM J. Sci. Comput., 34 (2012), pp. A683–A713.
[17] J. Drkošová, M. Rozložník, Z. Strakoš, and A. Greenbaum, *Numerical stability of the GMRES method*, BIT Numer. Math., 35 (1995), pp. 309–330.
[18] V. A. Kazeev and B. N. Khoromskij, *Low-rank explicit QTT representation of the Laplace operator and its inverse*, SIAM J. Matrix Anal. Appl., 33 (2012), pp. 742–758.
[19] D. Kressner and C. Tobler, *Low-rank tensor Krylov subspace methods for parametrized linear systems*, SIAM J. Matrix Anal. Appl., 32 (2011), pp. 1288–1316.
[20] R. Orús, *A practical introduction to tensor networks: matrix product states and projected entangled pair states*, Ann. Physics, 349 (2014), pp. 117–158.
[21] I. V. Oseledets, *DMRG approach to fast linear algebra in the TT-format*, Comput. Methods Appl. Math., 11 (2011), pp. 382–393.
[22] ———, *Tensor-train decomposition*, SIAM J. Sci. Comput., 33 (2011), pp. 2295–2317.
[23] ———, *ttpy*, software package, 2015. https://github.com/oseledets/ttpy
[24] C. C. Paige, M. Rozložník, and Z. Strakoš, *Modified Gram–Schmidt (MGS), least squares, and backward stability of MGS-GMRES*, SIAM J. Matrix Anal. Appl., 28 (2006), pp. 264–284.
[25] D. Palitta and P. Kürschner, *On the convergence of Krylov methods with low-rank truncations*, Numer. Algorithms, 88 (2021), pp. 1383–1417.
[26] J.-L. Rigal and J. Gaches, *On the compatibility of a given solution with the data of a linear system*, J. Assoc. Comput. Mach., 14 (1967), pp. 543–548.
[27] M. Robbé and M. Sadkane, *Exact and inexact breakdowns in the block GMRES method*, Linear Algebra Appl., 419 (2006), pp. 265–285.
[28] Y. Saad, *Iterative Methods for Sparse Linear Systems*, 2nd ed., SIAM, Philadelphia, 2003.
[29] Y. Saad and M. H. Schultz, *GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems*, SIAM J. Sci. Statist. Comput., 7 (1986), pp. 856–869.
[30] V. Simoncini and D. B. Szyld, *Theory of inexact Krylov subspace methods and applications to scientific computing*, SIAM J. Sci. Comput., 25 (2003), pp. 454–477.
[31] C. Tobler, *Low-rank tensor methods for linear systems and eigenvalue problems*, Ph.D. thesis, ETH Zürich, Zürich, 2012.
[32] J. van den Eshof and G. L. G. Sleijpen, *Inexact Krylov subspace methods for linear systems*, SIAM J. Matrix Anal. Appl., 26 (2004), pp. 125–153.
Direct Regulatory Role of NKT Cells in Allogeneic Graft Survival Is Dependent on the Quantitative Strength of Antigenicity

Keunhee Oh, Sanghee Kim, Se-Ho Park, Hua Gu, Derry Roopenian, Doo Hyun Chung, Yon Su Kim, and Dong-Sup Lee

*J Immunol* 2005; 174:2030–2036; doi: 10.4049/jimmunol.174.4.2030

The role of NKT cells during immune responses is diverse, ranging from antiviral and antitumor activity to the regulation of autoimmune diseases; however, the regulatory function of CD1d-dependent NKT cells in rejection responses against allogeneic grafts is uncertain. In this study, we demonstrated the direct regulatory effects of CD1d-dependent NKT cells using an allogeneic skin transplantation model. H-Y-mismatched skin graft survival was shortened in CD1d−/− recipients compared with wild-type recipients. Adoptive transfer of syngeneic NKT cells via splenocytes or hepatic mononuclear cells into CD1d−/− recipients restored graft survival times to those of wild-type recipients. α-Galactosylceramide, a specific activator of NKT cells, further prolonged graft survival. Although CD1d-dependent NKT cells did not extend skin graft survival in either major or complete minor histocompatibility-mismatched models, these cells affected graft survival in minor Ag mismatch models according to the magnitude of the antigenic difference. The afferent arm of NKT cell activation during transplantation required CD1d molecules expressed on host APCs and the migration of CD1d-dependent NKT cells into grafts. Moreover, the regulatory effects of CD1d-dependent NKT cells against alloantigen were primarily IL-10 dependent. Taken together, we concluded that CD1d-dependent NKT cells may directly affect the outcome of allogeneic skin grafts through an IL-10-dependent regulatory mechanism. *The Journal of Immunology*, 2005, 174: 2030–2036.

Natural killer T cells have been identified as a unique population of cells that express both TCRs and NK cell receptors. They secrete large amounts of IL-4 and IFN-γ upon stimulation through their TCRs (1). The phenotypic characteristics of NKT cells include the expression of NK1.1, IL-2Rβ (CD122), and memory/activated phenotype markers such as CD44high, CD69high, and Ly6Chigh (2).
The majority of NKT cells use a highly biased and evolutionarily conserved TCR repertoire (Vα14-Jα281 in mice and Vα24 in humans) (3). As reported in previous studies, the activation of mouse NKT cells occurs by presentation of glycolipids on CD1d molecules (4, 5). Although the nature of the natural activating ligand remains an unanswered but important issue, a marine sponge-derived glycolipid, α-galactosylceramide (α-GalCer), potently activates NKT cells (6). The role of NKT cells during immune responses has been reported to be diverse, ranging from antiviral and antitumor activity (7–9) to the regulation of autoimmune diseases (10). The numbers of NKT cells were selectively reduced in autoimmune-prone mice in association with disease development (11), and germline deletion of the CD1 locus exacerbated disease in NOD mice (12), whereas repeated stimulation of NKT cells with α-GalCer reduced disease severity (13, 14).

NKT cells have been investigated in several organ transplantation systems. They are critical for the induction of Ag-specific tolerance to xenogeneic islet cells induced by anti-CD4 mAbs (15). NKT cells also mediate the tolerogenic action of anti-LFA-1 and anti-ICAM-1 Abs in an allogeneic heart graft model (16). However, the underlying mechanisms of their actions are unknown. NKT cells are crucial in corneal allograft survival (17), where they constitute a key component of anterior chamber-associated immune privilege (18, 19). However, the significance of CD1d-dependent NKT cells in the modulation of rejection responses against allogeneic transplantation is uncertain and may be less potent than that previously reported for CD4+CD25+ regulatory T cells (20, 21).

In this study, we investigated the direct role of CD1d-dependent NKT cells in skin allotransplantation by titrating their regulatory capacity. We demonstrate that the presence of CD1d-dependent NKT cells affects allograft survival and that these cells have differential regulatory capacities that are dependent on the magnitude of the antigenic differences.

Materials and Methods

Animals

CD1d-deficient mice on a C57BL/6 (B6; H-2b) background (designated as B6.CD1d−/−) were produced, bred, and maintained in specific pathogen-free conditions at the animal facility of the Clinical Research Institute of Seoul National University Hospital. CD1d-deficient mice on a BALB/c (H-2d) background were purchased from The Jackson Laboratory and backcrossed from 129 to BALB/c mice to create third and fourth generation BALB/c background animals with multiple minor differences compared with wild-type BALB/c (designated BALB/c CD1d−/−N3 and BALB/c CD1d−/−N4, respectively). B6, B6.H-2bm1 (bm1), and B6.H-2bm12 (bm12) mice were originally derived from The Jackson Laboratory and bred at our Clinical Research Institute. H13- and H28-congenic mice on the B6 background carry a minor Ag of BALB/c origin. The animal protocol for these experiments was reviewed and approved by the Ethics Committee of Seoul National University.

**Abs and flow cytometry**

11B11 (anti-IL-4), R4-6A2 (anti-IFN-γ), and PK136 (anti-NK1.1) Abs were purified from ascites fluid by affinity chromatography. The following pairs of mAbs for detecting mouse cytokines were purchased from BD Pharmingen: 11B11 and biotinylated BVD6-24G2 for IL-4; R4-6A2 and biotinylated XMG1.2 for IFN-γ; and biotinylated SXC-1 for IL-10.
The following mAbs were used for FACS staining: PE-Cy5-conjugated anti-CD4 (H129.19), FITC- and R-PE-conjugated anti-CD8 (53-6.7), PE-anti-CD25 (7D4), PE-anti-CD45RB (23G5), PE-anti-CD45RB (23G2), PE-anti-CD62L (MEL-14), PE-anti-CD69 (2E3), PE-anti-NK1.1 (PK136), FITC-anti-CD1d (1B1), and PE-Cy5-anti-TCRβ (H57-597). These mAbs were purchased from BD Pharmingen. Draining axillary lymph node (LN) and splenic cell suspensions from graft recipients were stained for T cell activation using standard procedures as previously described (22). In brief, early activation was monitored using anti-CD25 and anti-CD69 Abs; CD44, CD45RB, and CD62L staining were also included to monitor memory/activated T cells.

**Skin graft**

Donor tail skin was grafted as previously described (23). Briefly, recipient mice anesthetized with tribromoethanol were grafted with an \(\sim 5 \times 6\)-mm piece of donor tail skin onto the left abdominal region. In most cases, single pieces of skin from two different donors were grafted alongside each other. Grafts were observed daily from day 8 after grafting for up to 60 days. Some of the host mice were injected i.p. with 6 \(\mu\)g of \(\alpha\)-GalCer 7 and 3 days before grafting and twice weekly after grafting. Some of the recipient mice were treated with either IL-10-blocking or IL-4-blocking Ab twice per week, starting 7 days before receiving the grafts.

**Synthesis of \(\alpha\)-GalCer**

The \(\alpha\)-anomeric form of galactosylceramide (\(\alpha\)-GalCer) was synthesized using the method developed by Kim et al. (24) and dissolved in PBS containing 0.5% Tween 20 at a concentration of 220 \(\mu\)g/ml.

**Adoptive transfer of splenic lymphocytes and hepatic mononuclear cells**

Recipient B6.CD1d\(^{-/-}\) mice were lightly irradiated (600 rad) and, after 1 day, were injected i.v. with \(1.2 \times 10^6\) splenocytes from wild-type B6 mice. Skin grafting was performed 7 days later. Alternatively, \(3.5 \times 10^6\) wild-type hepatic mononuclear cells were injected i.v. into some unirradiated recipients (B6.CD1d\(^{-/-}\)) 3 days before skin transplantation.

**RT-PCR for cytokines and Vα14**

Spleens, LNs, and grafted skin were frozen immediately upon removal. Total RNA was isolated using TRIzol reagent (Invitrogen Life Technologies), and 2 \(\mu\)g of the total RNA was reverse transcribed into cDNA using Moloney murine leukemia virus reverse transcriptase (Promega) and random primers. Primers used for the PCRs of IL-10, TGF-β, and IFN-γ have been previously described (25, 26). The sequences of the primers used for the PCR of TCR Vα14 and GAPDH were as follows: Vα14 sense, 5'-CCAGGACCTGGCGGTCAACA-3'; Vα14 antisense, 5'-CAGCATGACAAATCAGCTTGAGTCCGC-3'; and GAPDH sense, 5'-CCCACCTAACTCATGGGGG-3' and antisense, 5'-ATCCACAGCTTTCTGGGTTGGG-3'. PCRs were performed in 20-\(\mu\)l reaction volumes over 35 amplification cycles (45 s at 95°C, 45 s at 62°C, and 45 s at 72°C).

**Real-time PCR for cytokines**

The relative mRNA expression of cytokine genes was quantified using real-time PCR. 18S ribosomal RNA was used as an internal standard to estimate variation between samples. All primers and probes used for PCR have been described previously (27). The primer sets used are listed in Table I.
PCR was performed in a 25-\(\mu\)l reaction volume containing 3 \(\mu\)l of cDNA, 900 nM each of the sense and antisense primers, 250 nM each of the labeled probes for cytokines and 18S rRNA, and 12.5 \(\mu\)l of TaqMan universal PCR master mix (Roche Diagnostics). Fluorescent dye emissions were monitored in real time during PCR amplification using the ABI PRISM 7900HT Sequence Detection System (PE Applied Biosystems). Levels of cytokine mRNA were calculated using the following formula (26): relative mRNA expression = \(2^{-(C_t \text{ of cytokine} - C_t \text{ of 18S rRNA})} \times 10^{10}\) (a worked example with hypothetical values is given below).

**Results**

**CD1d-dependent NKT cells directly affected the outcome of allogeneic skin graft transplantation**

To address the direct role of CD1d-dependent NKT cells in allograft rejection, we compared the survival of whole-thickness skin grafts (B6 male) between wild-type B6 and B6.CD1d\(^{-/-}\) female recipient mice, in which the male H-Y Ag could elicit graft rejection in the recipients (28). B6.CD1d\(^{-/-}\) mice rejected H-Y-different skin grafts earlier than wild-type B6 mice (mean survival time: 24 vs 30 days, Fig. 1A). In B6.CD1d\(^{-/-}\) mice, skin grafts were rejected rapidly, showing early scab formation and complete graft loss, whereas grafts in wild-type B6 mice were rejected gradually, showing shrinkage and fibrosis. The shortened graft survival in B6.CD1d\(^{-/-}\) mice was restored by the adoptive transfer of normal lymphocytes containing NKT cell populations. When \(1 \times 10^6\) splenocytes from wild-type B6 mice were transferred into lightly irradiated (600 rad) B6.CD1d\(^{-/-}\) recipients 7 days before transplantation, graft survival times were restored to those of the wild-type mice (Fig. 1B). Similar results were observed when \(3.5 \times 10^6\) hepatic mononuclear cells were transferred into nonirradiated CD1d\(^{-/-}\) mice 3 days before transplantation (Fig. 1B).

**Table I. Primer sequences**

| Gene | Sequence |
|------|----------|
| IL-4 | Probe 5'-FAM CACAGGAGAAAAGGCCATGCA TAMRA-3' Sense 5'-CATGGCCATTTTTGAA-3' Antisense 5'-CUTTGGCACATCCATCC-3' |
| IL-10 | Probe 5'-FAM TGAAGACCCATCCGATGTCG TAMRA-3' Sense 5'-CAGGAGGAGGCGGCGGAA-3' Antisense 5'-ACAGGGGAGAATGATGACA-3' |
| IFN-γ | Probe 5'-FAM CTCCAATCTTGGCAATACTCTGAATGATCC TAMRA-3' Sense 5'-AGCAACAGACGGACCAA-3' Antisense 5'-CTGGACCTGGGTTGTTG-3' |
| TGF-β | Probe 5'-VIC TGGCTGCTGCTGCTGCTG TAMRA-3' Sense 5'-GCACATCTGGAAACTCTACCAAGA-3' Antisense 5'-GACGCTAAAGAGCCACCTCA-3' |
| 18S rRNA | Probe 5'-VIC TGGCTGCGACACAGTGCCCTC TAMRA-3' Sense 5'-CGGCTACACATCTCAAGGAA-3' Antisense 5'-GCTGGAATTACCGCGGCT-3' |

The specificity of the immunomodulatory effect of CD1d-dependent NKT cells was further confirmed by treating recipient mice with α-GalCer, a specific activator of CD1d-dependent NKT cells. When we treated wild-type female recipients with α-GalCer before and after transplantation (6 μg/mouse at −7, −4, 0, 3, and 7 days of operation and twice a week), the survival of male skin grafts was prolonged vs nontreated control recipients (α-GalCer-treated B6, 40 ± 9.1 days; vehicle-treated B6, 32 ± 3.9; α-GalCer-treated CD1d−/−, 25 ± 2.4; vehicle-treated CD1d−/−, 24 ± 2.1, Fig. 1C). The effect of α-GalCer administration was limited to wild-type recipients, i.e., it had no effect in CD1d−/− recipients (Fig. 1C). The effect of CD1d-dependent NKT cells on allograft survival was not restricted to B6 background recipients.
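Worked example for the quantification formula in *Real-time PCR for cytokines* above (with hypothetical \(C_t\) values, not measurements from this study): a cytokine \(C_t\) of 28 measured against an 18S rRNA \(C_t\) of 10 gives

\[ \text{relative mRNA expression} = 2^{-(28-10)} \times 10^{10} = 2^{-18} \times 10^{10} \approx 3.8 \times 10^{4}. \]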
Survival of grafted BALB/c tail skin on CD1d−/− mice of mixed 129 × BALB/c background (either N3 or N4 in terms of BALB/c background, designated hereafter as BALB/c CD1d−/−N3 or BALB/c CD1d−/−N4) was shortened compared with graft survival on heterozygous littermate recipients, and α-GalCer treatment prolonged skin graft survival on heterozygous littermate recipients (Fig. 1D).

**CD1d-dependent NKT cells regulated graft rejection induced by a relatively weak Ag**

To evaluate the notion that the regulatory effects of CD1d-dependent NKT cells are confined to minor histocompatibility antigenic differences, we investigated the immunomodulatory capacity of these cell populations. When we grafted MHC-different skin (BALB/c) onto BALB/c background mice, the grafts were rejected within 17 days by both BALB/c CD1d−/− mice and heterozygous littermates, and repeated injections of α-GalCer did not prolong graft survival. Also, completely minor-mismatched B10.D2 skin grafts were rejected with the same kinetics in both recipients (data not shown). To address the hypothesis that quantitative antigenic strength might be one of the critical factors affecting the regulatory functions of NKT cells in allograft rejection, we used different congenic mice on the B6 background that bear single minor histocompatibility Ags from BALB.B mice. Since it has been reported that minor histocompatibility Ags have an immunologic hierarchy, i.e., H28 > H13 > H-Y, as defined by cytotoxic assay (29), we used this system to evaluate the regulatory capacity of NKT cells. CD1d-dependent NKT cells affected the graft survival of H-Y (Fig. 2A), H13 (Fig. 2B), and H-Y plus H13 (Fig. 2C) differences, but did not modulate that of H28-different skin grafts (Fig. 2D). In conclusion, we favor the idea that CD1d-dependent NKT cells affect immune responses when the antigenic differences are weak.

**Activation and migration of NKT cells during skin transplantation**

Because the mechanism through which CD1d-dependent NKT cells modulate graft survival is unclear, we examined the role of CD1d molecules expressed on donor Langerhans cells. Skin grafts from B6.CD1d\(^{-/-}\) male mice were transplanted onto wild-type B6 female mice. In this case, we found that the expression of CD1d molecules on donor skin cells was not required for NKT cell activation (Fig. 3A). Since the immune response during skin graft rejection is mostly confined within the draining LNs, the migration of NKT cells into the draining LNs was assessed using flow cytometric analysis. The absolute numbers of NKT cells were increased in the draining axillary LNs after transplantation (Fig. 3B), and NKT cell migration into the skin graft was confirmed using Vα14-specific RT-PCR (Fig. 3C) and real-time quantitative RT-PCR (data not shown). Indeed, we were able to detect Vα14-specific mRNA from grafted skin, but not from control skin. In addition, Vα14-specific mRNA was detected in the donor skin of B6.CD1d\(^{-/-}\) mice, suggesting that CD1d-dependent NKT cells from recipients migrated into the donor skin graft, even in the absence of CD1d molecules on donor skin cells.

**Regulatory effects of CD1d-dependent NKT cells are IL-10 dependent**

To evaluate the notion that CD1d-dependent NKT cells contribute to the regulation of allograft survival by changing immunosuppressive cytokine profiles, we measured the mRNA expression in draining LNs from recipient mice.
In a skin graft model of B6 male to B6 female or B6.CD1d\(^{-/-}\) female mice, the transcription level of IL-10 was found to be higher in B6 recipients than in CD1d\(^{-/-}\) B6 recipients (Fig. 3D), and similar patterns of the cytokine milieu were confirmed by real-time quantitative RT-PCR using the TaqMan system (data not shown). After multiple injections of α-GalCer, this cytokine profile was exaggerated, showing IL-10 up-regulation and minimal IFN-γ expression (Fig. 4). Blocking IL-10 pathways shortened the survival times of skin grafts in wild-type mice and prevented the prolongation of graft survival induced by α-GalCer treatment (Fig. 5).

**Discussion**

In the current study, we undertook to examine the distinctive role of NKT cells in alloimmune responses using a murine transplantation model. We found that CD1d-dependent NKT cells affect allogeneic skin graft survival and that this immunoregulatory property of CD1d-dependent NKT cells depends on the antigenic strength of the transplantation barrier. Furthermore, the migration of NKT cells into skin grafts and the secretion of cytokines were identified as mechanistic routes for the down-regulation of alloimmunity.

CD1d-dependent NKT cells have been identified as a novel lymphocyte lineage and are characterized by the expression of an invariant Vα14 Ag receptor and the NK1.1 marker (30, 31). These cells play critically distinctive roles in immune responses such as the maintenance of peripheral tolerance (32), transplantation tolerance (15), and protection from autoimmune disease (13). However, the direct role of CD1d-dependent NKT cells in a murine allograft model was uncertain, and their differential immunomodulation capacities according to the degree of antigenicity had not been explored thoroughly. The present study shows that the alloantigenic differences in which CD1d-dependent NKT cells can affect the outcome of skin graft survival are confined to several minor differences.

**FIGURE 2.** CD1d-dependent NKT cells regulate the graft rejection induced by a relatively weak Ag. Wild-type B6 female and B6.CD1d\(^{-/-}\) female mice were grafted with B6 male skin (A), B6.H13 female skin (B), B6.H13 male skin (C), and B6.H28 female skin (D). Some recipients were injected i.p. with 6 μg of α-GalCer or vehicle, twice per week during the transplantation period, starting 7 days before transplantation. H13- and H28-congenic mice carry a minor Ag of BALB/c origin. Five to six mice per group were used. Experiments were repeated three times with similar results.

FIGURE 3. Activation and migration of NKT cells during skin transplantation. A, The activation of NKT cells does not require donor CD1d molecule expression. Wild-type B6 female mice were grafted with wild-type B6 male and B6.CD1d\(^{-/-}\) male skin. B6.CD1d\(^{-/-}\) female mice were grafted with wild-type B6 male and B6.CD1d\(^{-/-}\) male skin. Five to six mice per group were used. Experiments were repeated three times with similar results. B, The number of NKT cells in the draining LNs was increased by allogeneic skin transplantation and α-GalCer treatment. Wild-type B6 female mice were injected i.p. with 6 μg of α-GalCer or vehicle at \(-7\), \(-4\), 0, 3, and 7 days postoperation. Some mice were grafted with B6 male skin. Draining axillary LN cells were harvested from grafted and nongrafted mice 6 h after the last injection of α-GalCer on day 7 following the skin transplantation, stained for TCRβ and NK1.1, and analyzed by flow cytometry.
The number of NKT cells was calculated by counting TCRβ$^+$NK1.1$^+$ cells. The data shown represent one of five independent experiments. C, NKT cells migrate into grafted skin. B6 female mice were grafted with wild-type B6 male skin or B6.CD1d$^{-/-}$ male skin. Grafted skins were removed 7 days after skin transplantation. The frequencies of skin-infiltrating NKT cells were estimated by RT-PCR for TCR Vα14 expression. Lane 1, nongrafted control; lane 2, B6.CD1d$^{-/-}$ male graft; lanes 3 and 4, wild-type B6 male graft. These data represent one of five independent experiments. D, Wild-type B6 female and B6.CD1d$^{-/-}$ female mice were grafted with wild-type B6 male skin. Recipients were sacrificed on days 8 and 10 after skin transplantation. Total RNA was extracted from spleen cells, and the mRNA expression levels of IL-10 and TGF-β1 were measured. One microgram of total RNA was reverse-transcribed into cDNA. Three microliters of cDNA was used for PCR. The data shown are representative of five independent experiments.

The regulatory capacity of NKT cells, however, may not be trivial, considering that skin grafts are regarded as highly vulnerable to rejection compared with pancreas islet or cardiac grafts (33). Skin grafts provide the strongest immune stimulus because of differences in the mode of vascularization, the presence of tissue-specific Ags, the number of APCs in the graft, and the graft size. Previous reports regarding the role of CD1d-dependent NKT cell populations in allo- or xenogeneic transplantation have shown that the beneficial effects for grafts are mediated either with the aid of costimulatory molecule blockade or in lymphopenic hosts, where aberrant activation of lymphocytes occurs (15, 16, 34). Also, in clinical transplantation, where minor histocompatibility Ags are the major targets of chronic rejection and graft-versus-host disease, the regulatory capacity of NKT cells over minor histocompatibility antigenic differences might affect the outcome of the disease process.

In our experiments, NKT cells were activated by CD1d molecules expressed on the surface of host APCs. However, the ligand required for NKT cell activation during the transplantation process is unknown. As in hapten-mediated contact dermatitis (35), mediators from the grafted skin could affect the remote NKT cell population by some uncharacterized pathway; alternatively, the process of transplantation itself might act as a so-called “danger” signal that delivers an alarm to the host and thus activates NKT cells. These two possibilities are not mutually exclusive. In the case of tissue damage due to injury, surgery, or other causes, many leukocytes are recruited into the injured site to remove tissue debris and aid the healing process. However, this response must be controlled to prevent overwhelming damage by potentially harmful activated leukocytes. Several lipid molecules produced during tissue damage have been suggested to behave as endogenous danger signals for the immune system (36). Another possibility is that an endogenous self-ligand might be differentially presented to NKT cells in a CD1d-dependent manner, thus initiating the activation of NKT cells (37, 38).

In our model, CD1d-dependent NKT cells migrated into draining LNs and target tissue; thus, NKT cells could intimately affect alloreactive T cells during the initiation and effector phases of the alloimmune response.
We propose that the modes of immune regulation during allotransplantation are similar between CD1d-dependent NKT cells and regulatory CD4$^+$CD25$^+$ T cells (21). In an attempt to evaluate whether the migration of CD1d-dependent NKT cells is associated with the rejection of allogeneic grafts, we measured mRNAs specific for Vα14. As shown in Fig. 3C, we were able to detect mRNAs specific for Vα14 in the skin grafts regardless of the presence of CD1d molecules in the skin. Indeed, mRNAs for Vα14 were detectable in the grafts from CD1d$^{-/-}$ donors. This result suggests that recipient CD1d-dependent NKT cells migrate into grafts and is consistent with the finding that NKT cells act in peripheral tissue rather than in secondary lymphoid organs because of the higher chemokine receptor expression on their surfaces (39).

FIGURE 4. The regulatory effects of CD1d-dependent NKT cells were primarily IL-10 dependent. Wild-type B6 female and B6.CD1d$^{-/-}$ female mice with no graft (A) and with B6 male skin grafts (B) were injected i.p. with 6 μg of α-GalCer or vehicle at −7, −4, 0, 3, and 7 days postoperation. Grafted and nongrafted mice were sacrificed 6 h after a final injection of α-GalCer 7 days after skin transplantation. Total RNA was extracted from splenic cells. One microgram of total RNA was reverse-transcribed into cDNA, and each sample was tested by real-time PCR for IL-10 and IFN-γ mRNA expression. The relative expression levels of cytokines were normalized vs 18S rRNA. The data shown are representative of three independent experiments.

FIGURE 5. Wild-type B6 female mice were grafted with wild-type B6 male skin. Some recipients were injected i.p. with 6 μg of α-GalCer and/or 50 μg of anti-IL-10 blocking mAb (JES5-2A5) twice per week during the transplantation period, starting 7 days before the operation. Isotype-matched Ab was used in the controls. Five to six mice per group were used. This experiment was repeated three times with similar results.

Cytokine production by CD1d-dependent NKT cells after multiple α-GalCer injections showed a pattern similar to that of other regulatory T cells (40). Of the various cytokines, IL-10 is known to inhibit cytokine production by T cells, to exert anti-inflammatory and suppressive effects on most hemopoietic cells, and to be involved in the induction of peripheral tolerance via effects on T cell-mediated responses (23). IL-10 indirectly suppresses T cell responses by potently inhibiting the Ag-presenting capacity of APCs, which include dendritic cells (41), Langerhans cells, and macrophages (25). We found that multiple α-GalCer injections raised IL-10 and TGF-β production (our unpublished data) in wild-type mice, but not in CD1d$^{-/-}$ mice. Moreover, blocking the IL-10 pathway inhibited the beneficial effect of NKT cells on graft survival. When we measured cytokine levels after α-GalCer treatment, either in combination with blocking anti-IL-10 Ab or using IL-10-deficient mice, IL-10 secretion was found to have increased (our unpublished data), which is contrary to previous reports (42). Thus, Th2 deviation after multiple α-GalCer injections may explain the regulatory effects of NKT cells in our study. In fact, when we grafted BALB/c islet grafts into B6 mice (fully MHC mismatched; H-2$^d$ → H-2$^b$), islet survival was increased from 10 to 30 days by repeated α-GalCer injection (our unpublished data).
The BALB/c strain is likely to produce Th2-associated cytokines and, in the case of *Leishmania* infection, cannot provide protective immunity due to insufficient Th1 development (43). On the B6 background, multiple injections of α-GalCer shifted the cytokine profile toward a Th2 pattern. Large amounts of IL-10 and TGF-β were also previously demonstrated in NOD and B6 experimental autoimmune encephalomyelitis models (13, 14). On the BALB/c background, however, cytokine secretion was predominantly Th2-like, even after a single injection of α-GalCer (our unpublished data). This may be one of the reasons why the protective effects of NKT cells and α-GalCer were more pronounced on the BALB/c background.

Although our studies performed with CD1d$^{-/-}$ recipients cannot formally exclude a contribution by CD1d-dependent non-NKT cell populations (5), our data obtained with α-GalCer clearly show the involvement of CD1d-dependent NKT cells in improved graft survival. We are currently investigating the possibility of promoting another population of regulatory T cells by activating CD1d-dependent NKT cells in an allogeneic transplantation environment. To our knowledge, this is the first report that NKT cells may exert differential effects on alloimmune responses and that their stratified regulatory capacities are related to alloantigenic strength. We believe that an understanding of the manner in which these cells work will have implications for the induction of donor-specific allograft tolerance and will suggest bases for cell therapy as a dependable therapeutic modality.

Acknowledgments

We are grateful to Dr. Keith K. C. Choi and Michelle M. M. Woo at the British Columbia Research Institute for Children’s and Women’s Health, University of British Columbia, and Dr. Charles D. Surh at The Scripps Research Institute for critical reviews of this manuscript.

References

1. Yoshimoto, T., and W. E. Paul. 1994. CD4pos NK1.1pos T cells promptly produce interleukin 4 in response to in vivo challenge with anti-CD3. J. Exp. Med. 179:1285.
2. Bendelac, A., M. N. Rivera, S. H. Park, and J. H. Roark. 1997. Mouse CD1-specific NK1 T cells: development, specificity, and function. Annu. Rev. Immunol. 15:535.
3. Porcelli, S. A., and R. L. Modlin. 1999. The CD1 system: antigen-presenting molecules for T cell recognition of lipids and glycolipids. Annu. Rev. Immunol. 17:297.
4. Godfrey, D. I., K. J. Hammond, L. D. Poulton, M. J. Smyth, and A. G. Baxter. 2000. NKT cells: facts, functions and fallacies. Immunol. Today 21:573.
5. Kronenberg, M., and L. Gapin. 2002. The unconventional lifestyle of NKT cells. Nat. Rev. Immunol. 2:557.
6. Singh, A. K., M. T. Wilson, S. Hong, D. Olivares-Villagomez, C. Du, A. K. Stanic, S. Joyce, S. Sriram, Y. Koezuka, and L. Van Kaer. 2001. Natural killer T cell activation protects mice against experimental autoimmune encephalomyelitis. J. Exp. Med. 194:1801.
7. Cui, J., T. Shin, T. Kawano, H. Sato, E. Kondo, I. Toura, Y. Kaneko, H. Koseki, M. Kanno, and M. Taniguchi. 1997. Requirement for Vα14 NKT cells in IL-12-mediated rejection of tumors. Science 278:1623.
8. Smyth, M. J., K. Y. Thia, S. E. Street, E. Cretney, J. A. Trapani, M. Taniguchi, T. Kawano, S. B. Pelikan, N. Y. Crowe, and D. I. Godfrey. 2000. Differential tumor surveillance by natural killer (NK) and NKT cells. J. Exp. Med. 191:661.
9. Kakimi, K., L. G. Guidotti, Y. Koezuka, and F. V. Chisari. 2000. Natural killer T cell activation inhibits hepatitis B virus replication in vivo. J. Exp. Med. 192:921.
10.
Taniguchi, M., K. Seino, and T. Nakayama. 2003. The NKT cell system: bridging innate and acquired immunity. Nat. Immunol. 4:1164.
11. Mieza, M. A., T. Itoh, J. Q. Cui, Y. Makino, T. Kawano, K. Tsuchida, T. Koike, T. Shirai, H. Yagita, A. Matsuzawa, et al. 1996. Selective reduction of Vα14+ NKT cells associated with disease development in autoimmune-prone mice. J. Immunol. 156:4035.
12. Shi, F. D., M. Flodström, B. Balasa, S. H. Kim, K. Van Gunst, J. L. Strominger, S. B. Wilson, and N. Sarvetnick. 2001. Germ line deletion of the CD1 locus exacerbates diabetes in the NOD mouse. Proc. Natl. Acad. Sci. USA 98:6777.
13. Hong, S., M. T. Wilson, I. Serizawa, L. Wu, N. Singh, O. V. Naidenko, T. Miura, T. Haba, D. C. Scherer, J. Wei, et al. 2001. The natural killer T-cell ligand α-galactosylceramide prevents autoimmune diabetes in nonobese diabetic mice. Nat. Med. 7:1052.
14. Sharif, S., and T. L. Delovitch. 2001. Regulation of immune responses by natural killer T cells. Arch. Immunol. Ther. Exp. 49(Suppl. 1):S23.
15. Ikehara, Y., Y. Yasunami, S. Kodama, T. Maki, M. Nakano, T. Nakayama, M. Taniguchi, and S. Ikeda. 2000. CD4+ Vα14 natural killer T cells are essential for acceptance of rat islet xenografts in mice. J. Clin. Invest. 105:1761.
16. Seino, K., K. Fukao, K. Muramoto, K. Yamamoto, Y. Takada, S. Kakuta, Y. Iwakura, L. Van Kaer, K. Takeda, T. Nakayama, et al. 2001. Requirement for natural killer T (NKT) cells in the induction of allograft tolerance. Proc. Natl. Acad. Sci. USA 98:2577.
17. Sonoda, K. H., and J. Stein-Streilein. 2002. CD1d on antigen-transporting APC and splenic marginal zone B cells promotes NKT cell-dependent tolerance. Eur. J. Immunol. 32:848.
18. Faunce, D. E., K. H. Sonoda, and J. Stein-Streilein. 2001. MIP-2 recruits NKT cells to the spleen during tolerance induction. J. Immunol. 166:313.
19. Faunce, D. E., and J. Stein-Streilein. 2002. NKT cell-derived RANTES recruits APCs and CD8+ T cells to the spleen during the generation of regulatory T cells in tolerance. J. Immunol. 169:31.
20. Graca, L., S. P. Cobbold, and H. Waldmann. 2002. Identification of regulatory T cells in tolerated allografts. J. Exp. Med. 195:1641.
21. Sánchez-Fueyo, A., M. Weber, C. Domenig, T. B. Strom, and X. X. Zheng. 2002. Tracking the immunoregulatory mechanisms active during allograft tolerance. J. Immunol. 168:2274.
22. Surh, C. D., D. S. Lee, W. P. Fung-Leung, K. Larsson, and J. Sprent. 1997. Thymic selection by a single MHC/peptide ligand produces a semidiverse repertoire of T cells. J. Exp. Med. 185:759.
23. Lee, D. S., C. Ahn, B. Ernst, J. Sprent, and C. D. Surh. 1999. Thymic selection by a single MHC/peptide ligand: autoreactive T cells are low-affinity cells. Immunity 10:83.
24. Kim, S., S. Song, T. Lee, S. Jung, and D. Kim. 2004. Practical synthesis of KRN7000 from phytosphingosine. Synthesis 847.
25. Rugo, H. S., P. O’Hanley, A. G. Bishop, M. K. Pearce, J. S. Abrams, M. Howard, and A. O’Garra. 1992. Local cytokine production in a murine model of Escherichia coli pyelonephritis. J. Clin. Invest. 89:1032.
26. O’Garra, A., and M. Howard. 1992. IL-10 production by CD5 B cells. Ann. NY Acad. Sci. 657:182.
27. Xia, D., A. Sanders, M. Shah, A. Bickerstaff, and C. Orosz. 2001. Real-time polymerase chain reaction analysis reveals an evolution of cytokine mRNA production in graft-versus-host disease. Transplantation 72:907.
28. Simpson, E., A. McLaren, and P. Chandler. 1985. Evidence for two male antigens in mice. Immunogenetics 15:69.
29. Choi, E. Y., Y. Yoshimura, G. J. Christianson, T. J.
Sproule, S. Malarkannan, N. Shastri, S. Joyce, and D. C. Roopenian. 2001. Quantitative analysis of the immune response to mouse non-MHC transplantation antigens in vivo: the H60 histocompatibility antigen dominates over all others. J. Immunol. 166:4370.
30. Lantz, O., and A. Bendelac. 1994. An invariant T cell receptor α chain is used by a unique subset of major histocompatibility complex class I-specific CD4+ and CD4−8− T cells in mice and humans. J. Exp. Med. 180:1097.
31. Makino, Y., R. Kanno, T. Ito, K. Higashino, and M. Taniguchi. 1995. Predominant expression of invariant Vα14+ TCR α chain in NK1.1+ T cell populations. Int. Immunol. 7:1157.
32. Sonoda, K. H., M. Exley, S. Snapper, S. P. Balk, and J. Stein-Streilein. 1999. CD1-reactive natural killer T cells are required for development of systemic tolerance through an immune-privileged site. J. Exp. Med. 190:1215.
33. Jones, N. D., S. E. Turvey, A. Van Maurik, M. Hara, C. I. Kingsley, C. H. Smith, A. L. Mellor, P. J. Morris, and K. J. Wood. 2001. Differential susceptibility of heart, skin, and islet allografts to T cell-mediated rejection. J. Immunol. 166:2824.
34. Sonoda, K. H., M. Taniguchi, and J. Stein-Streilein. 2002. Long-term survival of corneal allografts is dependent on intact CD1-reactive NKT cells. J. Immunol. 168:2028.
35. Cavani, A., C. Ottaviani, F. Nasorri, S. Sebastiani, and G. Girolomoni. 2003. Immunoregulation of hapten and drug-induced immune reactions. Curr. Opin. Allergy Clin. Immunol. 3:243.
36. Seong, S. Y., and P. Matzinger. 2004. Hydrophobicity: an ancient damage-associated molecular pattern that initiates innate immune responses. Nat. Rev. Immunol. 4:469.
37. Brossay, L., M. Chioda, N. Burdin, Y. Koezuka, G. Casorati, P. Dellabona, and M. Kronenberg. 1998. CD1d-mediated recognition of an α-galactosylceramide by natural killer T cells is highly conserved through mammalian evolution. J. Exp. Med. 188:1521.
38. Park, S. H., J. H. Roark, and A. Bendelac. 1998. Tissue-specific recognition of mouse CD1 molecules. J. Immunol. 160:3128.
39. Thomas, S. Y., R. Hou, J. E. Boyson, T. K. Means, C. Hess, D. P. Olson, J. L. Strominger, M. B. Brenner, J. E. Gumperz, S. B. Wilson, and A. D. Luster. 2003. CD1d-restricted NKT cells express a chemokine receptor profile indicative of Th1-type inflammatory homing cells. J. Immunol. 171:2571.
40. Jonuleit, H., and E. Schmitt. 2003. The regulatory T cell family: distinct subsets and their interrelations. J. Immunol. 171:6323.
41. Prud’homme, G. J., D. H. Kono, and A. N. Theofilopoulos. 1995. Quantitative polymerase chain reaction analysis reveals marked overexpression of interleukin-1β, interleukin-10 and interferon-γ mRNA in the lymph nodes of lupus-prone mice. Mol. Immunol. 32:495.
42. Chen, H., and W. E. Paul. 1997. Cultured NK1.1+ CD4+ T cells produce large amounts of IL-4 and IFN-γ upon activation by anti-CD3 or CD1. J. Immunol. 159:2240.
43. Beebe, A. M., S. Mauze, N. J. Schork, and R. L. Coffman. 1997. Serial backcross mapping of multiple loci associated with resistance to Leishmania major in mice. Immunity 6:551.
ORDER THIS MATTER is before the Court on the Bill of Costs (DE# 166, 8/11/17) and the Supplemental Bill of Costs (DE# 189, 10/2/17)\(^1\) filed by the defendants. BACKGROUND On July 13, 2017, the jury rendered a verdict in favor of the defendants and against the plaintiff. See Verdict (DE# 140, 7/13/17). On the same day, the Court entered a final judgment in accordance with the verdict. See Final Judgment (DE# 141, 7/13/17). The defendants now seek to recover costs pursuant to Title 28, United States Code, Section 1920 and Rule 54(d)(1) of the Federal Rules of Civil Procedure. See Memorandum in Support of Defendants, The Mason and Dixon Lines, Incorporated and Timothy Leverett's, Bill of Costs (DE# 167 at 1, 8/11/17). \(^1\) The defendants' initial Bill of Costs (DE# 166) sought to recover $18,615.65 in costs. The defendants' Supplemental Bill of Costs (DE# 189) reduced this number to $18,337.19. The defendants did not provide an explanation for this reduction. ANALYSIS 1. ENTITLEMENT Rule 54(d)(1) of the Federal Rules of Civil Procedure provides that costs other than attorneys' fees shall be allowed to the prevailing party unless the court otherwise directs. Fed. R. Civ. P. 54(d)(1). A "prevailing party," for purposes of the rule, is a party in whose favor judgment is rendered. See Util. Automation 2000, Inc. v. Choctawhatchee Elec. Co-op., Inc., 298 F.3d 1238, 1248 (11th Cir. 2002). In the instant case, the Court entered a judgment in favor of the defendants and against the plaintiff. See Final Judgment (DE# 141, 7/13/17). As such, the defendants are the prevailing party and are entitled to recover taxable costs. 2. ABILITY TO PAY The plaintiff argues that the defendants' request for costs should be denied in its entirety because the plaintiff does not have the ability to pay those costs. See Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 166) (DE# 184 at 9, 9/25/17). In support of this argument, the plaintiff has filed an affidavit. See Affidavit of Alba Cardona in Opposition to Defendant's Bill of Costs and Defendant's Memorandum of Law in Support of Same (DE# 186-1, 9/25/17) (hereinafter "Plaintiff's Affidavit"). In her affidavit, the plaintiff attests that she is 74 years old and on a fixed monthly income of $851.00 which she receives from the Social Security Administration. Id. at ¶ 5. She further attests that she has $900.00 in her bank account and no other accounts. Id. at ¶ 4. The plaintiff also states that she relies on "family tenants" to help pay her mortgage and has no additional assets or other sources of income. Id. at ¶ 7. The plaintiff concludes that she is "not in an economic position to satisfy any costs in this matter . . . ." *Id.* at 9. The defendants respond that the Court should not consider the plaintiff's financial state in awarding costs to the defendants. Defendants' Reply to Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 188 at 9, 10/2/17). The defendants note that the plaintiff's affidavit is not sufficiently detailed and that the plaintiff could pay costs over time. *Id.* The Court finds that there is no justification to reduce a cost award based solely on the plaintiff's alleged inability to satisfy a judgment. *See Mathews v. Crosby,* 480 F.3d 1265, 1276-77 (11th Cir. 2007) (affirming an award of costs despite a claim of indigence because the district court had no "sound basis to overcome the strong presumption that a prevailing party is entitled to costs") (citing *Chapman v. 
Al Transp.*, 229 F.3d 1012, 1023-24 (11th Cir. 2000)). In the instant case, there is an insufficient showing for the Court to conclude that the plaintiff is unable to pay the award of costs. The plaintiff does not specify how much her tenants contribute towards her mortgage or list her monthly expenses. "This Court requires substantial documentation of a true inability to pay for [it to] reduce the amount of costs to be paid, and may not decline to award any costs at all." *Perez v. Saks Fifth Ave., Inc.*, No. 07-21794-CIV, 2011 WL 13172510, at *11 (S.D. Fla. Feb. 14, 2011).

3. TAXABLE COSTS

Title 28, United States Code, Section 1920 sets out the specific costs that may be recovered:

A judge or clerk of any Court of the United States may tax as costs the following:
(1) Fees of the clerk and marshal;
(2) Fees for printed or electronically recorded transcripts necessarily obtained for use in the case;
(3) Fees and disbursements for printing and witnesses;
(4) Fees for exemplification and the costs of making copies of any materials where the copies are necessarily obtained for use in the case;
(5) Docket fees under section 1923 of this title;
(6) Compensation of court appointed experts, compensation of interpreters, and salaries, fees, expenses, and costs of special interpretation services under section 1828 of this title.

28 U.S.C. § 1920. In the exercise of sound discretion, trial courts are accorded great latitude in ascertaining taxable costs. However, in exercising its discretion to tax costs, absent explicit statutory authorization, federal courts are limited to those costs specifically enumerated in 28 U.S.C. § 1920. *E.E.O.C. v. W&O, Inc.*, 213 F.3d 600, 620 (11th Cir. 2000). Accordingly, the defendants may only recover those costs they are entitled to recover under 28 U.S.C. § 1920.

a. **Fees of the Clerk**

The defendants seek to recover $400.00 paid to the Clerk of the Court as a filing fee. This amount was incurred when the defendants removed the case from state court to this Court. Section 1920(1) permits the recovery of “[f]ees of the clerk and marshal,” 28 U.S.C. § 1920(1). The plaintiff does not dispute this amount. See Plaintiff’s Response and Objections to Defendants’ Bill of Costs (DE# 166) (DE# 184 at 3, 9/25/17). Accordingly, the Court will allow the defendants to recover $400.00 for filing fees.

b. Fees for Service of Summons and Subpoena

The defendants seek to recover $8,051.50 for fees incurred in the service of summonses and subpoenas. At the outset, the undersigned notes that there appears to be a $58.00 discrepancy between the amount claimed on the Supplemental Bill of Costs (DE# 189) ($8,051.50) and the amount calculated by adding the itemized costs in Exhibit “A” of the defendants’ reply ($7,993.50). Accordingly, the Court will start with the lower number.

The plaintiff objects to the award of costs on the ground that “Defendants . . . fail to show that these fees for subpoenas (mostly for discovery subpoenas, apparently) are taxable, or that they were reasonable and necessary for use in the case.” Plaintiff’s Response and Objections to Defendants’ Bill of Costs (DE# 166) (DE# 184 at 3-4, 9/25/17) (citation and footnote omitted).
In their reply, the defendants explain that “[t]he majority of the subpoenas were issued to third parties for records relating to Plaintiff and her alleged injuries” and note that “[t]he costs of obtaining medical records in a personal injury case are clearly allowable under Rule 54(d) since they were necessarily obtained for use in the case.” Defendants’ Reply to Plaintiff’s Response and Objections to Defendants’ Bill of Costs (DE# 188 at 3, 10/2/17). Private process server fees may be taxed. E.E.O.C., 213 F.3d at 623. The Court finds that the service of some of these subpoenas was necessary. The plaintiff was involved in a traffic accident and sustained serious injuries. The nature and extent of the plaintiff’s injuries were issues in the case. However, certain reductions to the costs sought in this category are necessary for the reasons stated below.

1. Multiple Attempts at Service

The plaintiff further states that:

[i]t is not clear – and Defendants fail to explain – why they served subpoenas on approximately 80 different entities (sometimes the same entity was served multiple times or at various addresses), which are not shown to be necessary or reasonable. Indeed, in excess of 33 of the entities subpoenaed do not appear anywhere in the parties’ disclosures or witness/exhibit lists.

Plaintiff’s Response and Objections to Defendants’ Bill of Costs (DE# 166) (DE# 184 at 4, 9/25/17) (footnote omitted). The plaintiff lists the 33 entities by name in footnote 2 of her motion. Id. at 4 n.2. The plaintiff further notes that “at least a dozen entities were subpoenaed multiple times” and again lists those entities in a footnote. Id. at 5, 5 n.3. The plaintiff argues that these duplicative subpoenas were unnecessary and not reasonable for use in the case. Id. at 5. The plaintiff also argues that the defendants should not be awarded costs for multiple attempts to serve the subpoenas at additional addresses and identifies these multiple attempts at service by invoice number. Id. at 5, 5 n.4.

The defendants state that some “providers or facilities actually had different locations or had moved from previous locations or had different addresses for billing records only or providers had left the prior facilities and moved to new facilities” and that it was necessary “to serve additional subpoenas upon some of the providers and facilities in order to obtain updated records from Plaintiffs continued treatment.” Defendants’ Reply to Plaintiff’s Response and Objections to Defendants’ Bill of Costs (DE# 188 at 3, 10/2/17). Exhibit “A” to the defendants’ reply provides more explanations as to the subpoenas served in the instant case. *Id.* at 11-43.\(^2\)

The Court finds that, in some instances, the defendants have not shown that multiple attempts to serve the same provider were necessary. For example, on October 28, 2016, the defendants served Family Medical Clinic Group with subpoenas at similar addresses (3785 W. Flagler Street) and (3485 W. Flagler Street). On November 2, 2016, the defendants served subpoenas on the Sunshine Wellness Clinic Corporation at two separate addresses. On December 9, 2016, the defendants served Donald L. Caress, M.D. at similar addresses (NE 25th Street and NW 25th Street). On December 9, 2016, the defendants served Julio Cruz, M.D. (Concentra Medical Center) and Julio Cruz, M.D. at the same address. On December 14, 2016, the defendants served Ingrid M. Mixter, M.D. with two subpoenas at separate addresses.
It appears that these expenses could have been avoided had the service provider been contacted and the correct address been verified prior to the service of the subpoenas. The plaintiff should not bear those costs. In total, the Court finds that $1,862.00 constitutes duplicative and/or unnecessary service fees and will not allow the defendants to recover this amount.

2. **Rush Service**

The plaintiff also argues that the defendants should not be permitted to recover costs incurred for rush service of subpoenas and identifies several invoices where rush service was billed. Plaintiff’s Response and Objections to Defendants’ Bill of Costs (DE# 166) (DE# 184 at 5, 5 n.5, 9/25/17). The defendants state that they incurred rush service charges due to the discovery cutoff. See Exhibit “A” (DE# 188 at 28). The Court finds that there were no extraordinary circumstances in this matter requiring expedited service. As such, rush service fees will not be awarded. The Court calculates the additional fee for rush service in the instant case to be $22.50 ($80.00 minus $57.50) per subpoena. The Court has already eliminated some of the “rush service” fees by disallowing costs for the service of duplicative subpoenas (some of which also included rush service fees). Rush service for the remaining subpoenas totals $247.50. Accordingly, the Court will disallow $247.50 for rush service.

\(^2\) The defendants subsequently filed a Supplemental Bill of Costs (DE# 189, 10/2/17) which includes these additional explanations. The plaintiff did not respond to the Supplemental Bill of Costs (DE# 189, 10/2/17). Where applicable, the Court will apply the plaintiff's objections raised in response to the original Bill of Costs (DE# 166, 8/11/17) to the Supplemental Bill of Costs (DE# 189, 10/2/17).

3. Service of Own Experts

Lastly, the plaintiff argues that the defendants should not be allowed to recover for the service of subpoenas on their own experts. Plaintiff’s Response and Objections to Defendants’ Bill of Costs (DE# 166) (DE# 184 at 5, 9/25/17). The defendants maintain that it was necessary to serve subpoenas on their own experts (Gregory C. Keller, M.D., Julianne Frain, Ph.D. and Linda Weseman, P.E.) because “so that if for some reason . . . said witnesses had an emergency and could not attend the trial, Defendants would have grounds for a continuance.” Defendants’ Reply to Plaintiff’s Response and Objections to Defendants’ Bill of Costs (DE# 188 at 4, 10/2/17). The defendants reason that “[i]f [its] experts were not under subpoenas, the grounds for a continuance would be waived.” Id. The defendants cite no authority for this proposition. The Court will not allow the defendants to recover for the costs incurred in the service of their three experts. Accordingly, $225.00 for the service of subpoenas on Dr. Keller ($65.00), Dr. Frain ($80.00) and Ms. Weseman ($80.00) will be disallowed.

In sum, the Court will allow the defendants to recover $5,659.00 ($7,993.50 minus $1,862.00 minus $247.50 minus $225.00) for the service of subpoenas.

c. Fees for Printed or Electronically Recorded Transcripts Necessarily Obtained for Use in the Case

The defendants seek to recover $2,142.95 for ordering the deposition transcripts of Alba Cardona, Timothy Leverette, Dan Kepple, Apryl Hall, Ronald DeMeo M.D., Lawrence Alexander M.D. and Julio Robia, M.D. See Defendants' Reply to Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 188 at 5, 10/2/17).
The Court notes that the total of the fees itemized in Exhibit "B" of the Reply (DE# 188 at 45-47) is $1,954.15. The Court will start with this amount. The defendants argue that they necessarily incurred these costs because "these witnesses were listed on Plaintiff's Fed. R. Civ. P. 26(a)(1) & (2) Initial Disclosures (DE #24)" and "[a]s such, Defendants could reasonably expect these witnesses to testify at trial." Defendants' Reply to Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 188 at 5, 10/2/17).

The plaintiff argues that the defendants have failed to show how the fees they incurred for printed or electronically recorded transcripts were necessary. See Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 166) (DE# 184 at 6, 9/25/17). The plaintiff specifically states that the depositions of Mr. Kepple and Ms. Hall\(^3\) were unnecessary because they were not used at trial. Id. The plaintiff also argues that the costs for obtaining condensed transcripts are not taxable. *Id.* Lastly, the plaintiff argues that the charge of $340.25 for exhibits is not taxable. *Id.*

\(^3\) Mr. Kepple and Ms. Hall were the 30(b)(6) witnesses for the two corporate defendants. See Defendants' Reply to Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 188 at 6, 10/2/17).

The defendants respond that it was necessary to order the deposition transcripts of Mr. Kepple and Ms. Hall because the plaintiff took those depositions and "[n]aturally, Defendants requested a copy of Plaintiff's original transcript." Defendants' Reply to Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 188 at 5, 10/2/17). The defendants further argue that deposition transcript costs are still taxable even if a deposition is not used at trial. *Id.* The defendants state that "several of the depositions were used and relied upon by Defendants at summary judgment," but fail to specify which deposition transcripts were necessary to the summary judgment motion. *Id.*

The courts have interpreted section 1920 to include only those costs that are "necessarily obtained for use in the case." E.E.O.C., 213 F.3d at 620-21 (noting that costs of deposition transcripts were, either wholly or partially, "necessarily obtained for use in the case"). Whether transcripts have been "necessarily obtained for use in the case" or merely for the convenience of counsel is to be determined on a case-by-case basis. See, e.g., DeSisto Coll., Inc. v. Town of Howey-in-the-Hills, 718 F. Supp. 906, 913 (M.D. Fla. 1989).

The Court will allow the defendants to recover the costs incurred for the deposition transcripts for the listed witnesses. See E.E.O.C., 213 F.3d at 621 ("deposition costs [are] allowable where there is no evidence that the depositions were not related to an issue in the case when the depositions were taken"). Additionally, the Court will allow the defendants to recover for the costs of the corresponding exhibits. The Court finds that they were necessary for counsel's preparation in this case. However, the Court will disallow the costs incurred for ordering condensed copies of the transcripts because those items were ordered for the convenience of counsel and were not necessary for use in the case. Accordingly, the defendants are permitted to recover $1,894.15 ($1,954.15 minus $60.00) for the costs of obtaining deposition transcripts and exhibits.
**d. Fees and Disbursements for Printing**

The defendants seek to recover $3,931.74 in printing costs.\(^4\) The plaintiff argues that the defendants have failed to carry their burden of showing how these documents "were used or intended to be used in the case." Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 166) (DE# 184 at 7, 9/25/17). The plaintiff further states that:

Defendants do nothing to identify or show the nature of the "records" produced for the "initial preparation of Defendants' exhibits", and Defendants do nothing to explain how invoices dated 05/08/17 (#33928) and 06/14/17 (#34284) are not duplicative, even though both purport to be "records" for the "initial preparation of Defendants' exhibits."

Id. The plaintiff also argues that the "invoice dated 06/30/17 (#3441) is . . . excessive and duplicative" because "[a]t best, 3 copies [of binders] would suffice" and "these costs are duplicative of costs sought by Defendants for exemplifications/copies." Id.

In their reply, the defendants state that this amount "include[s] printing copies of Plaintiff and Defendants' exhibits, trial boards and binders of case law and records for use at trial." Defendants' Reply to Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 188 at 6, 10/2/17). The defendants submitted Exhibit "C" with their reply explaining the breakdown of these costs. *Id.* at 48-51.

\(^4\) The numbers listed in Exhibit "C" for printing costs total $3,931.76. The Court will utilize the lower number sought by the defendants.

At the outset, the Court finds that $966.15 for trial binders will be disallowed as duplicative of an entry sought to be recovered for exemplification costs. *See* Discussion, *infra*. Section 1920(4) allows for the recovery of "fees for exemplification and copies of papers necessarily obtained for use in the case." *E.E.O.C.*, 213 F.3d at 622.

The price per copy sought by the defendants is $0.10 for black and white copies and $0.89 for color copies. The total number of copies sought by the defendants (other than the excluded binders) is 25,885 black and white copies and 213 color copies. The number of copies for which the defendants seek reimbursement is unreasonable, and the defendants fail to adequately explain why such a large number of copies were necessary in this case. The Court finds that while some of the copies were necessary for the defense of this action, not all of the copies for which the defendants request reimbursement were necessary to defend this action. The Court will therefore reduce the printing costs sought by the defendants by half. The Court will allow the defendants to recover **$1,482.80** (($3,931.74 minus $966.15) divided by 2).

**e. Fees for Witnesses**

The defendants seek to recover $160.00 for witness fees. The plaintiff did not object to this cost. *See* Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 166) (DE# 184 at 7, 9/25/17). Accordingly, the Court will allow the defendants to recover **$160.00** for witness fees.

**f. Fees for Exemplification and the Costs of Making Copies of Any Materials Where the Copies Are Necessarily Obtained for Use in the Case**

The defendants seek to recover $855.95 in exemplification costs.\(^5\) The plaintiff argues that "the costs sought by Defendants for multiple 'copies of Defendants' trial exhibits for binders for trial' is excessive and duplicative, and Defendants have failed to show that they were necessary for use in the case."
Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 166) (DE# 184 at 7, 9/25/17). Exhibit "C" to the Defendants' Reply to Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 188 at 6, 10/2/17) provides a breakdown of these costs.

\(^5\) The numbers listed in Exhibit "C" for exemplification costs total $855.96.

The defendants seek to recover $497.15 for "copies of Defendants' trial exhibits for binders for trial - 6 copies [3 for Court - Judge, Clerk and Law Clerk; 3 for Defense Counsel - Attorney, Trial Paralegal and Clients]." *Id.* at 51. The defendants have not explained how this charge is not duplicative of the $966.15 sought under "Fees and Disbursements for Printing" also for "Defendants' trial exhibits for Trial Exhibit Binders for trial (6 copies - 3 for Court - Judge, Clerk and Law Clerk; 3 for Defense Counsel - Attorney, Trial Paralegal and Clients)." *Id.* at 49. The Court will allow the defendants to recover $497.15, the lower of these two amounts for trial binders.

The total number of copies sought by the defendants (other than the permitted trial binders) is 3,033 black and white copies and 36 color copies. As with the printing costs, the number of copies for which the defendants seek reimbursement is unreasonable, and the defendants fail to adequately explain why such a large number of copies were necessary in this case. The Court finds that while some of the copies were necessary for the defense of this action, not all of the copies for which the defendants request reimbursement were necessary to defend this action. The Court will therefore reduce the copying costs (other than the allowed binders) sought by the defendants by half. The Court will allow the defendants to recover $676.56 ($497.15 plus ($358.81/2)) in exemplification costs.

**g. Compensation of Interpreters and Costs of Special Interpretation Services under 28 U.S.C. § 1828**

The defendants seek to recover $520.00 for use of interpreters. The plaintiff does not object to this cost. Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 166) (DE# 184 at 8, 9/25/17). Accordingly, the Court will allow the defendants to recover **$520.00** for the use of the services of an interpreter.

**h. Other Costs**

The defendants seek to recover $2,275.65 for "other costs." The plaintiff argues that the "Defendants provide no explanation whatsoever regarding the intended or actual use of these documents, presumably obtained by subpoena" and "[a]s such, the Defendants' request to tax 'other costs' should be denied." Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 166) (DE# 184 at 9, 9/25/17).

In their reply, the defendants state that these "other costs" are for obtaining the "Plaintiff's medical records and other related information." Defendants' Reply to Plaintiff's Response and Objections to Defendants' Bill of Costs (DE# 188 at 8, 10/2/17). The defendants have attached Exhibit "D" to their reply, which includes a list of these costs. The Court will allow the defendants to recover the full amount sought because these costs were incurred in order to obtain the plaintiff's medical records. See Bynes-Brooks v. N. Broward Hosp. Dist., No. 16-CV-60416, 2017 WL 3237053, at *4 (S.D. Fla. July 31, 2017) (stating that "so long as these costs were related to records and/or copies necessarily obtained for use in this case, they are taxable").
Those records were necessary to the defendants' preparation of the case. Accordingly, the Court will award **$2,275.65**.

**CONCLUSION**

In total, the Court will allow the defendants to recover $13,068.16. Accordingly, it is

ORDERED AND ADJUDGED that the original Bill of Costs (DE# 166, 8/11/17) is DENIED as moot. It is further

ORDERED AND ADJUDGED that the Supplemental Bill of Costs (DE# 189, 10/2/17) is GRANTED in part and DENIED in part. The defendants are awarded $13,068.16 in costs. The Court will enter a separate judgment in favor of the defendants as to costs in the total amount of $13,068.16.

DONE AND ORDERED in Chambers at Miami, Florida this 10th day of January, 2018.

JOHN J. O'SULLIVAN
UNITED STATES MAGISTRATE JUDGE

Copies to: All counsel of record
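For reference, the order's category-by-category arithmetic can be tallied as a short worked check (a minimal summary of the figures quoted above, with the halved amounts rounded to the cent as the order rounds them; the $400.00 difference between the subtotal below and the $13,068.16 total award appears to correspond to costs taxed earlier in the order, before the categories itemized above):

\[
\begin{aligned}
\text{Service of subpoenas: } & \$7{,}993.50 - \$1{,}862.00 - \$247.50 - \$225.00 = \$5{,}659.00\\
\text{Transcripts and exhibits: } & \$1{,}954.15 - \$60.00 = \$1{,}894.15\\
\text{Printing: } & (\$3{,}931.74 - \$966.15) \div 2 \approx \$1{,}482.80\\
\text{Witness fees: } & \$160.00\\
\text{Exemplification: } & \$497.15 + \$358.81 \div 2 \approx \$676.56\\
\text{Interpreters: } & \$520.00\\
\text{Other costs: } & \$2{,}275.65\\
\text{Subtotal: } & \$12{,}668.16 \qquad \text{Total awarded: } \$13{,}068.16
\end{aligned}
\]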
January 31, 2017

Representative Taylor Barras
Speaker of the House of Representatives
P.O. Box 94062
Baton Rouge, Louisiana 70804

Senator John A. Alario, Jr.
President of the Senate
P.O. Box 94183
Baton Rouge, Louisiana 70804

RE: ACT 501 OF 2016

Dear Mr. Speaker and Mr. President:

The Louisiana State Law Institute respectfully submits herewith its report to the legislature relative to raising the age for juvenile offenders in the criminal justice system.

Sincerely,

William E. Crawford
Director

WEC/puc
Enclosure

cc: Senator Jean-Paul J. Morrell
Representative John Bagneris
Representative Joseph Bouie, Jr.
Representative Gary M. Carter, Jr.
Representative Patrick Connick
Representative Kenny Cox
Representative Cedric B. Glover
Representative Jimmy Harris
Representative Stephanie Hilferty
Representative Marcus Hunter
Representative Katrina Jackson
Representative Edward "Ted" James
Representative Terry Landry
Representative Walt Leger, III
Representative Rodney Lyons
Representative Tanner D. Magee
Representative C. Denise Marcelle
Representative Dustin Miller
Representative Helena Moreno
Representative Barbara Norton
Representative Patricia Smith

email cc: David R. Poynter Legislative Research Library, email@example.com
Secretary of State, Mr. Tom Schedler, firstname.lastname@example.org

REPORT TO THE LEGISLATURE IN RESPONSE TO ACT 501 OF THE 2016 REGULAR SESSION

Relative to raising the age for juvenile offenders in the criminal justice system

Prepared for the Louisiana Legislature on January 31, 2017
Baton Rouge, Louisiana

LOUISIANA STATE LAW INSTITUTE CHILDREN'S CODE COMMITTEE

Jan Byland, Baton Rouge
Andrea B. Carroll, Baton Rouge
Paula C. Davis, Baton Rouge
Ernestine S. Gray, New Orleans
Margot E. Hammond, New Orleans
Kaaren Hebert, Lafayette
Joan E. Hunt, Baton Rouge
Nancy Amato Konrad, Metairie
Hector Linares, Baton Rouge
Lucy McGough, Baton Rouge
Martha Morgan, Baton Rouge
Richard M. Pittman, Baton Rouge
S. Andy Shealy, Ruston
Kristi Garcia Spinosa, Baton Rouge
Carmen D. Weisner, Baton Rouge

* * * * * * * * * * * *

Karen Hallstrom, Co-Reporter
Isabel Wingerter, Co-Reporter
Jessica G. Braun, Staff Attorney

LOUISIANA STATE LAW INSTITUTE CODE OF CRIMINAL PROCEDURE COMMITTEE

E. Pete Adams, Jr., Baton Rouge
Sue Bernie, Baton Rouge
Kyla M. Blanchard-Romanach, Baton Rouge
James E. Boren, Baton Rouge
Bernard E. Boudreaux, Jr., Baton Rouge
Camille Buras, New Orleans
Greg C. Champagne, Hahnville
Susan M. Chehardy, New Orleans
Louis R. Daniel, Baton Rouge
Emma J. Devillier, Baton Rouge
John E. Di Giulio, Baton Rouge
Michelle E. Ghetti, Baton Rouge
Craig F. Holthaus, Baton Rouge
Robert W. "Bob" Kostelka, Monroe
John Ford McWilliams, Jr., Shreveport
Douglas P. Moreau, Baton Rouge
John Wilson Reed, New Orleans
Perry R. Staub, Jr., New Orleans

Special Advisor
Alvin Turner, Jr., Gonzales

H. Clay Walker, Shreveport

* * * * * * * * * * *

Frank Foil, Co-Chair
Robert Morrison, III, Co-Chair
Judge Guy Holdridge, Acting Reporter
Mallory C. Waller, Staff Attorney

AN ACT
To amend and reenact Children's Code Arts. 305(A)(2), 306(D), and 804(1) and to enact Chapter 13-B of Title 15 of the Louisiana Revised Statutes of 1950, to be comprised of R.S. 15:1441 and 1442, and Children's Code Art. 306(G), relative to juvenile jurisdiction; to provide for a child who commits a delinquent act before a certain age; to provide for transfer of juveniles to adult detention centers pending trial; to create the Juvenile Jurisdiction Planning and Implementation Committee; to provide for membership, authority, duties, and responsibilities; to provide for directives to the Louisiana State Law Institute, Louisiana Judicial Council, and Department of Children and Family Services; to provide for an effective date; and to provide for related matters.

Be it enacted by the Legislature of Louisiana:

Section 1. Chapter 13-B of Title 15 of the Louisiana Revised Statutes of 1950, comprised of R.S. 15:1441 and 1442, is hereby enacted to read as follows:

CHAPTER 13-B. JUVENILE JURISDICTION PLANNING AND IMPLEMENTATION ACT

§1441. Short title

This Chapter shall be known and may be cited as the "Juvenile Jurisdiction Planning and Implementation Act".

§1442. Louisiana Juvenile Jurisdiction Planning and Implementation Committee; composition; authority; responsibilities

A. The Louisiana Juvenile Jurisdiction Planning and Implementation Committee, hereafter referred to as the "committee", is hereby created as a committee of the Juvenile Justice Reform Act Implementation Commission established pursuant to R.S. 46:2751 et seq.

B. The committee shall have the following authority, duties, and responsibilities:

(1) Not later than January 1, 2017, the committee shall develop and submit to the commissioner of administration, the president of the Senate, and the speaker of the House of Representatives a plan for full implementation of the provisions of this Chapter. The plan shall include recommendations for changes required in the juvenile justice system to expand jurisdiction to include persons seventeen years of age. These recommendations may include the following items:

(a) The development of programs and policies that can safely reduce the number of youth in the juvenile justice system, including expanded use of diversion where appropriate; development and use of civil citation programs; use of evidence-based and promising services wherever possible; and reinvestment programs targeting the expanded use of community-based alternatives to secure, nonsecure, and pre-disposition custody.

(b) The development of comprehensive projections to determine the long-term distribution of placement capacity for youth in the juvenile justice system.

(c) An analysis of the impact of the expansion of juvenile jurisdiction to persons seventeen years of age on state agencies and a determination of which state agencies shall be responsible for providing relevant services to juveniles, including but not limited to mental health and substance abuse services, housing, education, and employment.

(2) Not later than April 1, 2017, and quarterly thereafter, the committee shall submit a written status report to the commissioner of administration, the president of the Senate, and the speaker of the House of Representatives on implementation of the plan as provided in this Subsection.

(3) The committee shall have such powers, authority, and prerogatives as provided for the Juvenile Justice Reform Act Implementation Commission pursuant to R.S. 46:2754 et seq.

C. The committee shall be composed of the following members:

(1) Each member of the Juvenile Justice Reform Act Implementation Commission shall be an ex officio member.
(2) Two sitting Louisiana judges: one appointed by the president of the Louisiana District Judges Association and one appointed by the president of the Louisiana Council of Juvenile and Family Court Judges.

(3) The deputy secretary of the office of juvenile justice, or his designee.

(4) The superintendent of the state Department of Education, or his designee.

(5) The executive director of the Louisiana Sheriffs' Association, or his designee.

(6) The president of the Louisiana Juvenile Detention Association, or his designee.

(7) An attorney appointed by the Louisiana Public Defender Board that is an expert in juvenile defense.

(8) The executive director of the Children's Cabinet.

(9) The director of the Institute for Public Health and Justice, or his designee.

(10) Two child or youth advocates, one appointed by the president pro tempore of the Senate and one appointed by the speaker pro tempore of the House of Representatives.

(11) Two parents of children who have been involved in the juvenile justice system, one appointed by the executive director of the Cecil J. Picard Center for Child Development and Lifelong Learning and one appointed by the executive director of the Children's Coalition for Northeast Louisiana.

(12) An expert in juvenile justice, appointed by the chair of the Children's Code Committee of the Louisiana State Law Institute.

(13) Two youth representatives who have been prosecuted in criminal court at the age of seventeen, one appointed by the executive director of LouisianaChildren.org and one appointed by the executive director of the Family and Youth Counseling Agency of Lake Charles, Louisiana.

(14) A representative of the Police Jury Association of Louisiana.

(15) An attorney appointed by the Louisiana District Attorneys Association that is an expert in juvenile prosecution.

D.(1) All appointments to the committee shall be made not later than September 1, 2016. Any vacancy on the committee shall be filled by the respective appointing authority.

(2) The executive director of the Children's Cabinet shall serve as chair of the committee and shall convene the committee no later than October 1, 2016.

(3) The members of the committee shall serve without compensation, except the compensation to which they may be individually entitled to as a member or employee of their respective organization or agency.

(4) A majority of the total committee membership shall constitute a quorum and any official action by the committee shall require an affirmative vote of a majority of the quorum present and voting.

(5) The committee shall conduct meetings as it deems necessary to fully and effectively perform its duties and accomplish the objectives and purposes of this Chapter and may receive testimony and information relative to any of the subjects enumerated in this Chapter.

(6) The committee shall terminate on December 31, 2020.

Section 2. Children's Code Art. 305(A)(2), 306(D), and 804(1) are hereby amended and reenacted and Children's Code Art. 306(G) is hereby enacted to read as follows:
Art. 305. Divestiture of juvenile court jurisdiction; original criminal court jurisdiction over children; when acquired

A.(1) * * *

(2) Thereafter, the child is subject to the exclusive jurisdiction of the appropriate court exercising criminal jurisdiction for all subsequent procedures, including the review of bail applications, and the ~~child shall be transferred forthwith to the appropriate adult facility for detention prior to his trial as an adult~~ court exercising criminal jurisdiction may order that the child be transferred to the appropriate adult facility for detention prior to his trial as an adult.

* * *

Art. 306. Places of detention; juveniles subject to criminal court jurisdiction

* * *

D. If at the conclusion of the continued custody hearing, the court determines that the child meets the age requirements and that there is probable cause that the child has committed one of the offenses enumerated in Article 305, the court shall order him held for trial as an adult for the appropriate court of criminal jurisdiction. The ~~child shall~~ appropriate court of criminal jurisdiction may thereafter order that the child be held in any facility used for the pretrial detention of accused adults and the child shall apply to the appropriate court of criminal jurisdiction for a preliminary hearing, bail, and for any other rights to which he may be entitled under the Code of Criminal Procedure.

* * *

G. Notwithstanding any provision of law to the contrary, a child who is subject to criminal jurisdiction pursuant to Article 305 shall not be detained prior to trial in a juvenile detention facility after reaching the age of eighteen if the governing authority with funding responsibility for the juvenile detention facility objects to such detention.

* * *

Art. 804. Definitions

As used in this Title:

(1)(a) "Child" means any person under the age of twenty-one, including an emancipated minor, who commits a delinquent act before attaining seventeen years of age.

(b) After June 30, 2018, "child" means any person under the age of twenty-one, including an emancipated minor, who commits a delinquent act on or after July 1, 2018, when the act is not a crime of violence as defined in R.S. 14:2, and occurs before the person attains eighteen years of age.

(c)(i) After June 30, 2020, "child" means any person under the age of twenty-one, including an emancipated minor, who commits a delinquent act on or after July 1, 2020, and before the person attains eighteen years of age.

(ii) Notwithstanding Item (i) of this Subparagraph, a child who has attained the age of seventeen shall be subject to criminal jurisdiction pursuant to Article 305 or 857.

Section 3.(A) The Louisiana State Law Institute is hereby directed to study, and to recommend to the Legislature in a written report, such other amendments and additions to the Louisiana Children's Code, Louisiana Code of Criminal Procedure, and the Louisiana Revised Statutes as may be appropriate to effectuate the purpose of this Act to include seventeen-year-olds in the juvenile justice system. The Louisiana State Law Institute shall make its report, and shall recommend such legislation as it may deem appropriate, to the Legislature by March 1, 2017.

(B) The Louisiana Judicial Council is hereby requested to study, and to recommend to the Louisiana Supreme Court, such amendments and additions to Louisiana's Rules of Court as may be appropriate to effectuate the purpose of this Act to include seventeen-year-olds in the juvenile justice system.
(C) The Department of Children and Family Services is hereby directed to study, and to recommend for promulgation into law through the Administrative Procedure Act, such new or amended regulations for the safe operation of the state's juvenile detention centers as may be appropriate given the inclusion of seventeen-year-olds in the juvenile justice system.

Section 4. This Act shall become effective upon signature by the governor or, if not signed by the governor, upon expiration of the time for bills to become law without signature by the governor, as provided by Article III, Section 18 of the Constitution of Louisiana. If vetoed by the governor and subsequently approved by the legislature, this Act shall become effective on the day following such approval.

Section 5. This Act shall be known as the "Raise the Age Louisiana Act of 2016".

PRESIDENT OF THE SENATE

SPEAKER OF THE HOUSE OF REPRESENTATIVES

GOVERNOR OF THE STATE OF LOUISIANA

APPROVED: ___________

January 31, 2017

To: Representative Taylor F. Barras
Speaker of the House of Representatives
P.O. Box 94062
Baton Rouge, Louisiana 70804

Senator John A. Alario, Jr.
President of the Senate
P.O. Box 94183
Baton Rouge, Louisiana 70804

REPORT TO THE LEGISLATURE IN RESPONSE TO ACT 501 OF THE 2016 REGULAR SESSION

Section 3(A) of Act 501 of the 2016 Regular Session, the Raise the Age Louisiana Act of 2016, directs the Law Institute to study and recommend amendments and additions to the Children's Code, Code of Criminal Procedure, and Revised Statutes as may be appropriate to effectuate the purpose of the Act, which is to include seventeen-year-olds in the juvenile justice system. In fulfillment of this request, the Law Institute assigned the project to its Children's Code and Code of Criminal Procedure Committees.

The Children's Code and Code of Criminal Procedure Committees each conducted background research to determine which provisions of these Codes and the Revised Statutes may need to be amended to effectuate the purpose of Acts 2016, No. 501, the raising of the juvenile offender age from seventeen to eighteen. The Committees met separately to discuss these provisions and to determine which amendments and additions should be recommended to the legislature. Each Committee recommended various amendments to the provisions of their respective Codes as well as the Revised Statutes, and their suggested revisions are reproduced below.

The Law Institute recognizes that, pursuant to Acts 2016, No. 501, the raising of the juvenile offender age from seventeen to eighteen will be a two-step process, with the change taking effect for nonviolent crimes on July 1, 2018, and for crimes of violence on July 1, 2020. As a result, the legislature may have to make a policy determination as to when the following suggested revisions should be incorporated into the Children's Code, Code of Criminal Procedure, and Revised Statutes.

Further, in making its determinations, the Code of Criminal Procedure Committee hesitated to recommend amendments that would change the definitions of substantive crimes and their penalties so as to apply to acts committed by eighteen-year-olds rather than seventeen-year-olds. The Committee expressed concern that without further review of the underlying policy considerations by the legislature, recommending such amendments would have the unintended consequence of decriminalizing these offenses entirely for seventeen-year-old offenders.
As a result, rather than recommending amendments to these provisions, the Committee concluded that the following list of substantive crimes should be submitted to the legislature for its consideration of whether, in accordance with the purpose of Acts 2016, No. 501, these crimes and their penalties should be redefined to apply only to eighteen-year-old perpetrators: R.S. 14:28(C), 43.1, 43.2(C), 43.3, 73.8(D), 80(A)(1), 80.1(A), 81(H)(2), 81.1(E)(5), 81.2(A), 81.3(A), 82.1(A)(1), 86(A), 89.1(C)(2), 91.13(A), 92(A), 92.3(A), 93(A), 93.2.3(A)(1), and 95.8; and R.S. 15:562.3(A) and 1403.1(B)(2). In conjunction with this determination, the legislature may also wish to consider the meaning of the terms "juvenile" and "minor" as used throughout these provisions.

Additionally, both the Children's Code and Code of Criminal Procedure Committees concluded that in addition to provisions concerning the age of juvenile offenders, the legislature may also wish to amend provisions relating to the age of juvenile victims for purposes of consistency. As a result, the Committees also compiled the following list of provisions pertaining to seventeen-year-old victims: Children's Code Articles 116(9.1), 323(2)(a), 324(B), 610(F), 728(4), 811.1(G), 811.3(3), and 884.1; Code of Criminal Procedure Articles 571.1, 573(4), and 893(E)(1)(b); and R.S. 14:28(C), 67.16(C), 80(A)(1), 80.1(A), 81(A), 81.1, 81.1.1, 81.2, 81.3, 81.4, 89.1(A)(1)(f), 91.13(A), 92(A), 93(A), 93.2.3(A)(1), 106, 283(B)(4), 283.2(A)(1), 403.7(B)(3), and 403.8(B)(3); R.S. 15:283(E)(1), 440.2(C)(1), 539.2(A), 541(24)(a), 542(F)(4), and 1403.1(B)(2); and R.S. 40:1023.1.

**Suggested Revisions**

**Children's Code Articles**

**Article 804. Definitions**

As used in this Title:

(1)(a) "Child" means any person under the age of twenty-one, including an emancipated minor, who commits a delinquent act before attaining seventeen years of age.

(b) After June 30, 2018, "child" means any person under the age of twenty-one, including an emancipated minor, who commits a delinquent act on or after July 1, 2018, when the act is not a crime of violence as defined in R.S. 14:2, and occurs before the person attains eighteen years of age.

(c)(i) After June 30, 2020, "child" means any person under the age of twenty-one, including an emancipated minor, who commits a delinquent act on or after July 1, 2020, and before the person attains eighteen years of age.

(ii) Notwithstanding Item (i) of this Subparagraph, a child who has attained the age of seventeen shall be subject to criminal jurisdiction pursuant to Article 305 or 857.

(2) "Child care institution" means a nonprofit, licensed private or public institution which accommodates no more than twenty-five children and which is not a detention facility, a forestry camp, a training school, or any other facility operated primarily for the detention of children who are determined to be delinquent.

(3) "Delinquent act" means an act committed by a child of ten years of age or older which if committed by an adult is designated an offense under the statutes or ordinances of this state, or of another state if the offense occurred there, or under federal law, except traffic violations. It includes an act constituting an offense under R.S. 14:95.8, an act constituting an offense under R.S. 14:81.1.1(A)(2), and a direct contempt of court committed by a child. "Delinquent act" shall not include a violation of R.S. 14:82, 83.3, 83.4, 89, or 89.2 for a child who, during the time of the alleged commission of the offense, was a victim of trafficking of children for sexual purposes pursuant to R.S. 14:46.3(E).
(4) "Delinquent child" means a child who has committed a delinquent act.

(5) "Felony-grade delinquent act" means an offense that if committed by an adult, may be punished by death or by imprisonment at hard labor. "Felony-grade delinquent act" shall not include a violation of R.S. 14:82, 83.3, 83.4, 89, or 89.2 for a child who, during the time of the alleged commission of the offense, was a victim of trafficking of children for sexual purposes pursuant to R.S. 14:46.3(E).

(6) "Insanity" means a mental disease or mental illness which renders the child incapable of distinguishing between right and wrong with reference to the conduct in question, as a result of which the child is exempt from criminal responsibility.

(7) "Juvenile" means a child under eighteen years of age who has been accused of committing a delinquent act.

(~~7~~ 8) "Mental incapacity to proceed" means that, as a result of mental illness or developmental disability, a child presently lacks the capacity to understand the nature of the proceedings against him or to assist in his defense.

(~~8~~ 9) "Misdemeanor-grade delinquent act" means any offense which if committed by an adult is other than a felony and includes the violation of an ordinance providing a penal sanction.

(~~9~~ 10) "Sexually exploited child" means any person under the age of eighteen who has been subject to sexual exploitation because the person:

(a) Is a victim of trafficking of children for sexual purposes under R.S. 14:46.3.

(b) Is a victim of child sex trafficking under 18 U.S.C. 1591.

* * *

Article 837. Procedure after determination of mental capacity

H. An out-of-home placement or commitment shall be in a separate unit and program from an adult forensic program unless the child is ~~seventeen~~ eighteen years of age or older and the court finds, after a contradictory hearing, that the child can be appropriately treated in an adult forensic program.

R.S. 13:1621. Juvenile court for the parish of East Baton Rouge; establishment; jurisdiction

A. There shall be a separate juvenile court for the parish of East Baton Rouge which shall be a court of record and shall be known as the "Juvenile Court for the Parish of East Baton Rouge". There shall be two judges of the juvenile court, who shall preside over that court. The court shall have exclusive jurisdiction in the following proceedings:

(1) All proceedings in the interest of children under eighteen years of age alleged to be delinquent, except as provided in R.S. 13:1570 and 1571.1 through 1571.4 and Code of Juvenile Procedure Article 106; and all proceedings in the interest of children under eighteen years of age alleged to be in need of supervision or in need of care.

R.S. 14:40.7. Cyberbullying

(D)(2) When the offender is under the age of ~~seventeen~~ eighteen, the disposition of the matter shall be governed exclusively by the provisions of Title VII of the Children's Code.

R.S. 14:73.10. Online impersonation

(C)(2) When the offender is under the age of ~~seventeen~~ eighteen years, the disposition of the matter shall be governed exclusively by the provisions of Title VII of the Children's Code.

R.S. 14:81.1.1. "Sexting"; prohibited acts; penalties

A.(1) No person under the age of ~~seventeen~~ eighteen years shall knowingly and voluntarily use a computer or telecommunication device to transmit an indecent visual depiction of himself to another person.
(2) No person under the age of ~~seventeen~~ eighteen years shall knowingly possess or transmit an indecent visual depiction that was transmitted by another under the age of seventeen years in violation of the provisions of Paragraph (1) of this Subsection.

R.S. 14:92.1. Encouraging or contributing to child delinquency, dependency, or neglect; penalty; suspension of sentence; definitions

B. By the term "delinquency", as used in this section, is meant any act which tends to debase or injure the morals, health or welfare of a child; drinking beverages of low alcoholic content or beverages of high alcoholic content; the use of narcotics, going into or remaining in any bawdy house, assignation house, disorderly house or road house, hotel, public dance hall, or other gathering place where prostitutes, gamblers or thieves are permitted to enter and ply their trade; or associating with thieves and immoral persons, or enticing a minor to leave home or to leave the custody of its parents, guardians or persons standing in lieu thereof, without first receiving the consent of the parent, guardian, or other person; or begging, singing, selling any article; or playing any musical instrument in any public place for the purpose of receiving alms; or habitually trespassing where it is recognized he has no right to be; or using any vile, obscene, or indecent language; or performing any sexually immoral act; or violating any law of the state or ordinance of any village, town, city, or parish of the state. The term "juvenile", as used in this section, refers to any child under the age of ~~seventeen~~ eighteen. Lack of knowledge of the juvenile's age shall not be a defense.

R.S. 15:902.1. Transfer of adjudicated juvenile delinquents

Notwithstanding Title VIII of the Louisiana Children's Code or any other provision of law, the secretary of the department may promulgate rules and regulations to authorize the transfer of adjudicated juvenile delinquents to adult correctional facilities when the delinquents have attained the age of ~~seventeen~~ eighteen years, the age of full criminal responsibility.

R.S. 15:1031. Establishment of parish schools for youths authorized

The governing authorities of the parishes may establish, within their parishes, an industrial school for male youths ~~of~~ under the age ~~seventeen~~ of eighteen years~~, and under~~, convicted in the juvenile court of the parish for offenses within the jurisdiction of the juvenile court. Where any school has been so established, it shall be employed only for the delinquent juveniles convicted within the parish, and shall be known as the "Parish Industrial School for Youths."

R.S. 15:1096.2. Purpose

A. The purpose of the commission shall be to assist and afford opportunities to preadjudicatory and postadjudicatory children who enter the juvenile justice system, or who are children in need of care or supervision, to become productive, law-abiding citizens of the community, parish, and state by the establishment of rehabilitative programs within a structured environment and to provide physical facilities and related services for children, including the housing, care, supervision, maintenance, and education of juveniles under the age of ~~seventeen~~ eighteen years, and for juveniles ~~seventeen~~ eighteen years of age and over who were under ~~seventeen~~ eighteen years of age when they committed an alleged offense, throughout the parishes within the district and other participating parishes.
R.S. 15:1098.3. Purpose

The commission may assist and afford opportunities to preadjudicatory and postadjudicatory children who enter the juvenile justice system to become productive, law-abiding citizens of the community, parish, and state by the establishment of rehabilitative programs within a structured environment and provide physical facilities and related services for children, including the housing, care, supervision, maintenance, and education of juveniles under the age of ~~seventeen~~ eighteen years, and for juveniles ~~seventeen~~ eighteen years of age and over who were under ~~seventeen~~ eighteen years of age when they committed an alleged offense, throughout St. James Parish and participating parishes.

R.S. 15:1099.3. Purpose

A governing authority may assist and afford opportunities to preadjudicatory and postadjudicatory children who enter the juvenile justice system to become productive, law-abiding citizens of the community, parish, and state by the establishment of rehabilitative programs within a structured environment and provide physical facilities and related services for children, including the housing, care, supervision, maintenance, and education of juveniles under the age of ~~seventeen~~ eighteen years, and for juveniles ~~seventeen~~ eighteen years of age and over who were under ~~seventeen~~ eighteen years of age when they committed an alleged offense, throughout the parish and participating parishes.

R.S. 46:1933. Organization and powers

B. Any multiparish juvenile detention home district may acquire title by purchase or donation to real and personal property for public purposes; may own, operate or maintain facilities for the housing, care, supervision, maintenance and education of juveniles under the age of ~~seventeen~~ eighteen years, and for juveniles ~~seventeen~~ eighteen years of age and over who were under ~~seventeen~~ eighteen years of age when they committed an alleged offense.
who am a merciful man. For my part, saith the soul of a merciful man, I bless God my estate is comfortable; I want nothing, I have everything about me my heart can desire, but the saints about me are in misery. Oh that I could help them that are in misery! Men are made sensible by them that are in misery. A saint's mercy is drawn forth by the miseries of others that are about him. But you will say, The papists and the heathens they are merciful men, they are pitiful. But what difference is there then between the mercy of a man truly gracious and the mercy of others? Therefore you may remember in the description of mercy in the general I told you that it was a grace of God's Spirit, whereby the mercy of a man is drawn forth to them that are in misery.

SERMON XXII. OR, THE SEVERAL WORKINGS OF MERCY IN THE HEART.

'Blessed are the merciful: for they shall obtain mercy.'—Mat. v. 7.

The work we have now to do is to shew you, First, The several workings of mercy in the heart. Secondly, The motives unto it. Thirdly, The object of mercy. Fourthly, The gracious manner of the work of mercy. And then we shall come to this promise that is here made to them that are merciful, that they shall obtain mercy.

For the several workings of mercy in the heart, they are these:

The first act of mercy upon the taking notice of the miseries of others, it grieves for them; there is a compassion towards those that are in misery. A merciful man will not slight the miseries of others, much less will he despise them, or contemn others that are in misery. A merciful man doth not think the miseries of others not at all to concern him, but he looks upon them as concerning himself; he is grieved, his heart is touched with the miseries of others.

Secondly, From these there is a working desire in his soul to relieve them. Oh that I could tell how to relieve and help souls as I see to be any way in misery, bodily misery, or spiritual misery!

Thirdly, The heart is solicitously careful about ways of help; not only wishes and desires to help, but the thoughts of the mind are very solicitous what way I may compass to be helpful to those that are in misery. You have an excellent scripture for that in Prov. xiv. 22, 'Mercy and truth shall be to them that devise good.' Here is the merciful man described, and the promise of mercy to him; he is one that deviseth good. A merciful man looks upon others in misery, casts about him in his thoughts when he lies upon his bed, and is devising how he may do good. I am here lying quietly in my bed; I am warm, others are in misery; how may I be any ways useful to them, to do them any good? He doth devise good: and in Isa. xxxii. 8, 'The liberal deviseth liberal things.' A merciful man is not only liberal and helpful when you put him upon occasion, when you come to him, when he cannot for shame, but he must give you something. No; but he himself deviseth liberal things; he plots with himself what he may do to be instrumental for the good of those that are in a sad condition. A covetous man doth not more devise how he might gain to himself to get a good bargain, than a merciful man devises how he may distribute, how he may do good. That is the third act of mercy, it is solicitously careful.

Fourthly, A timely improvement.
He doth not keep his mercy in his own thoughts, but he doth improve what he hath for the good of others that are in misery, if he hath an estate, parts, friends, strength of body; or if he be poor and mean, and hath nothing else, then his prayers, all that he hath, shall be some way or other improved for the help of such as are in misery. A merciful man doth not think that God hath given him any good thing merely for himself, but for improvement. I was not born for myself, I have not an estate for myself, neither have I parts of nature or grace for myself, but I have them for to be of public good as much as may be. That is the fourth thing, a careful improvement.

Fifthly, The act of mercy is to be willing to part with much for others. Improve it I may for their good, or lend them, but part with it I will not; but mercy will part with anything that it hath. It is my own. But how is it my own? it is my own as a steward, and not to be used as I please; therefore if I see that the Lord hath need of it, or my brother hath need, that God may have glory, and good may be done, I am as willing to part with it as ever I was to receive it.

Sixthly, If any hath offended he is ready to pardon, full of pity that way. Therein men of mean estates may be merciful as well as others, though I see miscarriages in others that hath need of me; though I see they are unthankful, they are unworthy, yet mercy passes by unworthiness and wrongs.

Seventhly, It keeps back justice for a time. Though it will not hinder justice, but that it shall have her glory in time, yet mercy may cause a forbearance of the stroke of justice, when justice is ready to strike the stroke; mercy comes in, as the mercy of God, when justice is striking the stroke, it comes in and pleads, Lord spare, spare yet a little while! As when Abraham was lifting up the knife to cut the throat of Isaac, the angel cries from heaven, Abraham, stay thy hand! As the mercy of God doth, so the mercy of man forbears justice, and will not have justice in the rigour and full extent of it to be executed; it causes to forbear a while, to see whether there may not something be done wherein the offender may be spared and justice not wronged, and it will moderate the work of justice as much as it can.

Eighthly and lastly, Mercy will cause one to put oneself into the same condition as those are in that are in misery. Whether it be in regard of poverty or pain, or what kind soever it be, mercy causes one to put himself into the same state, to be in bonds with those that are in bonds, and to weep with those that weep. It is true I am in this comfortable condition myself, and have abundance of choice enjoyments, but what are all these to me so long as others suffer hard things? What if I were in bonds with them, and if I were spoiled of all that I have as they are—what if God had put me into the same condition that they are, how should I be affected? And as I would have others to pity me if I were in the like condition, so I labour in my heart to pity them. Here is a merciful man, a merciful woman. These are the several workings of the bowels of mercy.

Secondly, Mercy, when it is a work of the grace of God, and not merely some natural work, as may be in natural men, there mercy arises upon gracious motives; when the heart works in ways of mercy graciously, it hath gracious motives to raise up this working, and to maintain these workings of mercy.

First, The soul looks upon God as the God of mercy, and looks upon the excellency of mercy in God himself.
Oh mercy, it is lively in God! the bowels of God's compassion yearns towards his creatures in misery; and therefore, if I be a child of God, why should it not yearn in me too? why should there not be a likeness in me to the God that I profess to be my Father?

Secondly, I myself have need of mercy every day. I live upon mercy; it is mercy that maintains me; it is mercy that keeps me out of hell; it is mercy that provides for me; and if I have such need of mercy, and live upon it, then why should not I be merciful towards others?

Thirdly, I have not only need of it, but I have received mercy. The Lord hath been merciful to me, merciful to my body, merciful to my soul. I have had preventing mercy, delivering mercy, healing mercy, comforting mercy, saving mercies; mercies of all sorts when I was in miseries. I have cried, the Lord pitied me, and hath helped me. Now, I that have received so much mercy, it is infinitely equal that I should be merciful towards my brethren.

Fourthly, When the mercy of God comes from grace, it comes from a sight of the mercy of God in Christ; not only that God is merciful, and hath been merciful to me in a way of common providence, but I look upon the mercy of God in Christ, the tender mercies of God in Christ. A man in a natural way may come to see and know that God is merciful; but when I am merciful from a sight of God's mercy to me in Jesus Christ, and therefore I shew mercy to others, this is right mercy. In Christ the beams of God's mercy are concentrated as in a burning-glass; they are all concentrated together in one; and when they shine through Christ to my soul, then they warm my heart. The beams of the sun, when they shine scattered up and down in the air, they cause some light, glory, and heat; but when they are concentrated in a burning-glass, then they will be so hot as to burn one's clothes. So the beams of God's mercy in common providence, they will heat the hearts of men, and move them to natural pity; but when our mercy comes from the concentrating of the mercy of God to my soul in Jesus Christ, as it were the burning-glass, then how do they warm and enlarge the heart of a merciful man; when he can set his soul under the beams of God's mercy, contracted and shining through the burning-glass of Jesus Christ himself, and when the heart comes to be warmed with mercy thus, then it is a gracious work indeed, and mercy beyond that of a natural man.

Fifthly, The consideration of my unworthiness. I have had mercy, and not only common mercy, but mercy in Christ, who am so unworthy; and why hath God made any difference between me and others? What is it that causes a difference, so that such a one should be poor, and I have an estate; that they are born of beggars, and I of parents that hath left me a comfortable estate? Or if providence hath cast it so, though born of as good parents as I, yet they are in misery and I in comfort. Many of you may say you came to the city but with a staff in your hand, and what an estate hath God raised you to! If the grace of mercy works in you the consideration of your unworthiness of anything, that God should make a difference between you and others out of free-grace, and from nothing of yourselves, this doth mightily enlarge bowels of mercy.

Sixthly, Further, the consideration of the relation that these have to God that are in misery. Let it be any creature, yet it hath some relation to God; any brute creature, it is the creature of God, and so it hath relation to him—it is the work of God's hands.
But if he be a man, much more if he be a Christian, much more if a saint, much more the relation that a thing hath to God, and being in misery, that moves a gracious man; it doth not move one that is moved in a way of natural pity, but those that are merciful in a gracious way. The relation that anything hath to God, that is a mighty motive to mercy.

Seventhly, The consideration that I shall honour God in this way of mercy. Not merely that I would help others in misery, or be well spoken of, or the like, but I shall honour God in this way of mercy; and it is this that moves my heart.

Eighthly, And the very love to the exercise of mercy itself; and love to such as are in misery, though they be strangers, whosoever they be, this works in a merciful heart. And that is the second thing, the motives, or what it is that sets a merciful man on work in the ways of mercy.

For the object, but a word—for it was intimated in the relation that a thing hath to God. We are to be merciful,

First, To all that are in misery. A good man is merciful to his beast. Look upon your beast, and consider, there is not such a distance between you and that; you are all of one lump. God might have made you a toad, the vilest creature that is, and therefore God expects that you should use his creatures that he hath an interest in, that you should use them mercifully, and not cruelly.

Secondly, We are to be merciful to all mankind. If you do not give to such and such a one as a man, give it to human nature, so far as not to suffer them to perish, except it be in some cases that the Scripture would have others to perish if they continue obstinately in wickedness. As, he that will not work, let him not eat, saith the Scripture; or if they sin, in the way of justice, God doth will that wicked men should perish in their sin—that is, when in a way and course of justice they come to be dealt with; but otherwise, except it be in a way and course of justice that they may be dealt with, we should have pity upon wicked men when the hand of God is upon them in bringing misery. It is true there is a time coming that the saints shall be so swallowed up with God, with love to God, as they shall pity wicked men no more—yea, shall have no kind of compassion towards them hereafter, whenas it shall be revealed fully that they are reprobates, and that this is the way to honour himself eternally, to withdraw all mercy from them, then the saints shall not pity them. But in the meantime, here in this world, we are to pity them; because, though they be now wicked, we do not know but that they may belong to God, and be made vessels of mercy. Such a wicked blasphemer, and wicked unclean person—the most monstrous wretch that is—who knows but that God may set him apart to be a vessel of mercy to the glory of his free grace; and therefore, because you know not yet the contrary, mercy should work towards him, to pity his soul and body.

Thirdly, The next thing is, that as we should be merciful to all that are in misery, so especially to them in respect of their souls. There is many men and women have pitiful hearts to others; when they see them poor, naked, and ready to starve, then they pity them. But you shall have such pitiful men and women to have no compassion towards their souls; but where mercy is true, it is towards the soul in the first place, and then towards their bodies.

Fourthly, Further, for the object of mercy, the less guilt there is upon any, the more he is to be pitied in his misery.
As thus, when any one comes into misery merely by the hand of God, and not by their own wickedness, then there is much mercy to be shewn. I confess, though men should be brought into misery by their wickedness, yet still—except it be in a way and course of legal proceeding in a course of justice—they must not be left to perish; but if it be merely the hand of God upon them, and not their own wickedness that hath brought it upon them, much mercy should be shewn to them. Such as by the providence of God, either by fire, or by wicked men that have broken in upon them, and not through their own fault, they have lived conscientiously, and yet God, by some hand of providence, hath swept away all their estate; abundance of mercy should be shewn to them.

But above all, though we are to do good unto all, yet especially unto the household of faith; to the saints especially our mercy should be shewn unto, for God shews most mercy to them. But it shall be sufficient to name the objects of mercy.

For the gracious manner of shewing mercy to those that are in misery, mercy must have these qualifications:

First, I must never be so merciful as to go against any rule of justice; but there must be a sweet concord between both. Mark how they are knit together: 'Blessed are they that hunger and thirst after righteousness,' which is not only the righteousness of Christ, but between man and man, and 'Blessed are the merciful.' We must be so merciful, as yet to be righteous. Grace hath a blessed mixture in it; and though one vice be contrary to another, yet one grace is never contrary to another. Justice and mercy are never opposite one to another, but they may have a gracious mixture. I may be a merciful man, and yet hunger after righteousness, that righteousness may prevail in the world. That must be considered in the first place, for the gracious manner of the work of mercy.

Secondly, I must be so merciful as not to do hurt to those that I think to shew mercy to, or to do hurt to others by them. As thus, when men are in misery, for me to shew mercy so as to harden them in their evil way, this is no gracious act; this is a foolish pity. Or to shew mercy to one so as to hurt others; many times mercy may be shewn to one, that is cruelty to many others. Now, in Ps. cxii. 5, there the Holy Ghost, speaking of a merciful man, he saith that 'he guides his affairs with discretion.' He guides them in a discreet way; he doth not do the work of mercy in a lavish way, but considers wisely of the poor, and guides his affairs with discretion.

Thirdly, In the exercise of mercy there must be much simplicity of heart: Rom. xii. 8, 'He that giveth, let him do it with simplicity.' You will say, What is the meaning of that? The meaning of it is this:

First, Not to have any by and squint-eyed aims in my giving; but to do it in the simplicity of my heart, without any by and squint-eyed aims, and in simplicity. Many are merciful; they do things that are good, but they have squint-eyed aims at themselves.

Secondly, Simplicity—that is, not to be partial in the ways of my mercy. God would have me to shew mercy to one more than to another, according as there is reason, but not to be merciful in a way of partiality—that is, though others stand in as much need of my mercy as this man doth, and every way deserves it as well, yet out of private respects I let the course of my mercy run this way rather than the other. This is not to do it out of simplicity.
Lastly, We must so shew our mercy as that we must be sure to tender up that mercy that we shew to others for acceptation in Jesus Christ: to tender it up in Jesus Christ that it may be accepted by God. Lord, may such a soul say, I am unworthy thou shouldst shew any mercy to me, or that thou shouldst accept of any mercy that I tender up to thee. This we see admirably set forth in Nehemiah, who was one of the mercifullest men that ever we read of; yet saith he, chap. xiii. 22, 'Remember me, O my God, concerning this also, and spare me according to the greatness of thy mercy.' He was a merciful man, and yet he pleads to be accepted in mercy for the failings that passed from him in the shewing of that mercy; and here in the text, 'Blessed are the merciful: for they shall obtain mercy.' They shall obtain mercy for those failings that they commit in the shewing of their mercy. Thus you see who this merciful man is. We shall now come to shew that he is a blessed man: Prov. xxii. 9, 'He that hath a bountiful eye shall be blessed, for he giveth of his bread to the poor.' To open unto you the blessedness of this merciful man, take it in these particulars: First, When God would describe a man truly godly, he calls him out by this very character, that he is a merciful man: Ps. xxxii. 6, 'For this shall every one that is godly pray unto thee;' in the original, חָסִיד, it is the 'kind man.' Godly men are called by this denomination of kind ones; and so wherever we have the word 'godly' and 'saints' in the Old Testament, it is the same with that we have in the New Testament, where they are called 'godly saints' and 'godly ones.' It is the same with 'merciful men:' to note thus much, that mercy it is the same with godliness. Now take righteousness, as I opened it in the former verse, for the grace of sanctification, and so this mercifulness is a part of that sanctification. It is a part of that righteousness which I shewed you was of such excellency in Ps. xxxii. 6. God doth not instance in any particular grace but in this of mercy: 'The merciful man shall seek him in a time when he may be found.' And in Ps. cxii., 'A good man sheweth favour, and lendeth; he will guide his affairs with discretion.' And then in ver. 9, 'He hath dispersed, he hath given to the poor, his righteousness endureth for ever.' Mercy, it is a special part of righteousness. In James iii. 17, the apostle there describing the wisdom that is from above, he saith thus, 'The wisdom that is from above is first pure, then peaceable, gentle, and easy to be entreated, full of mercy, and good fruits.' Mark the words, it is full of mercy and gentleness; therefore blessed are these merciful ones, for they are such as God doth cull out to give a character of, that they are godly men. Secondly, Blessed, because they have so much of that which is so nigh to God, and makes God so excellent and glorious. There is nothing in a saint is nearer unto God than this very disposition of mercifulness. Now God glories in nothing more than in his mercy. This is that which God doth exalt himself withal, and that he doth glory in, that he is the merciful God. In Exod. xxv, the mercy-seat it was raised up on high above all, that it might be seen. And in Scripture God is said to delight in mercy: Micah vii. 18, 'Who is a God like unto thee? that pardoneth iniquity, and passeth by the transgression of the remnant of his heritage; he retaineth not his anger for ever, because he delighteth in mercy.'
It is a very pleasing thing for God to delight in his mercy; and he is called the 'Father of mercy,' and a God 'rich in mercy.' A man accounts his glory to consist in his riches. If in anything a man doth esteem himself for, it is in his riches, in his wealth; so God's riches are his mercies, and God glories in his mercies; and when God would shew unto Moses his glory, it is in this: Moses he desires to see the face of God, and that God would let him see his glory, Exod. xxxiv. 6; how doth the Lord give a demonstration of his glory? Thus, 'The Lord God, gracious and merciful.' And the chief design that God hath in the world it is to glorify his mercy. In Eph. i. 6, the Lord he delights to glorify his power, his wisdom, and his justice; but he delights to glorify his mercy above all. When the power of God is exalted, when the wisdom of God is declared, God is glorified; but when mercy is glorified, then God is exalted. If mercy make God so excellent, surely that man must needs be very happy that hath much of this disposition in him. And you have seen that the merciful man he hath much of this disposition in him, which is by God accounted to be his own glory. Thirdly, You are blessed, because you are under many precious promises. It were endless to mention all the promises wherein your blessedness is set forth. In Prov. xi. 25, 'The liberal soul shall be made fat; and he that watereth shall be watered also himself;' Ps. cxii. 9, 'He hath dispersed, he hath given to the poor; his righteousness endureth for ever; his horn shall be exalted with honour;' 2 Cor. ix. 8, which is very remarkable, 'And God is able to make all grace abound towards you, that ye always, having all sufficiency in all things, may abound to every good work.' See how words are heaped up here: 'to make grace, and all grace, and all grace to abound.' And who is it to? Unto the liberal, the merciful man. In Luke vi. 38, 'Give, and it shall be given to you.' The way for to receive more, it is to give out of what we have; and God will so order it 'that you shall have good measure, pressed down, and shaken together, and running over.' See here the latitude and height of expressions that can be. We account it good measure when it is heaped up; but when it is heaped up and pressed down, that is more; but when it is heaped up and pressed down, and then heaped up and running over again, this is as much as possibly can be made. So those that are of merciful spirits, they shall have mercy heaped up, pressed down, and running over. Surely thou must needs be a happy man when thou canst not be in that condition in which thou shalt not have mercy, but mercy heaped up, and running over, to supply thy necessity. Fourthly, Blessed art thou, because thou hast the blessing of those that are in misery upon thee. The blessing of the poor is upon thee who art thus merciful; thy prayers are heard, and their prayers are for thee. They bless God for such a one who hath done them good in their straits: Job xxix. 13, 'The blessing of him that was ready to perish came upon me, and I caused the widow's heart to sing for joy.' They praise God for them; and in the text, 'they shall obtain mercy.' This is a singular privilege, were there no other scripture in all the word to encourage us to this duty but this, that we shall obtain mercy. We are ready to think that if we shew mercy we may want ourselves, we shall come to beggary, we shall come to poverty, we had need to store up for ourselves. No, we shall grow; therefore in Prov. xi.
25, 'The liberal soul shall be made fat.' Here is a strange expression; what, to gain by liberality? We have many proverbs used among us that doth quite cross Scripture; for we say, 'We had as good be out of the world as out of the fashion;' and God saith, 'Fashion not yourselves according to the world.' We say, 'He is too free to be fat;' and yet God saith here, 'The liberal man shall be made fat.' Saith the Scripture, 'You shall have mercy;' and is it not a sweet thing to find mercy from God? In 2 Sam. xxii. 26, 'With the merciful he will shew himself merciful;' and therefore 'blessed are the merciful, for they shall obtain mercy.' With the froward God will shew himself froward. According to our walking unto God we shall find God walking unto us: if we walk contrary unto him, he will walk contrary unto us; if we walk mercifully towards our brethren, God will walk mercifully towards us. Fifthly, All the good that we have, it comes from the mercy of God; there is not the least good that we enjoy in any creature but it comes originally from God's mercy. Saith God, Poor soul, thou art of a merciful disposition. Art thou merciful? Dost thou do good to others, and doth thy bowels work towards them that are in misery? Art thou in straits thyself? Here is my mercy to help thee, here is my mercy to pardon thee. It is very observable that those that God intends to save, he doth so work upon them by his grace here as they shall be like him. There shall be such a work wrought upon them to answer God's will in all things. As, to instance, those that God intends to save, they shall choose him here; as those whom he hath elected unto glory, they shall in time choose him here, and elect him. Those that God doth intend to justify by Jesus Christ, they shall justify him and his ways; those that God hath separated for glory hereafter, they shall be separated from the world here; and those that God doth intend to shew mercy to hereafter, shall be of merciful dispositions. Hath God given thee a merciful heart? thou mayest assure thyself that God will shew mercy to thee at the last. Blessed are the merciful, therefore, for they shall have mercy; they shall have sin pardoned, they shall have their souls blessed. This is a blessed and a fruitful promise; for have not we need of mercy in our straits? There is none of us all that enjoy the most of creature comforts here but we stand in need of mercy ourselves; and when we shall come in any condition to stand in need of mercy, we may be sure we shall have mercy from God, because the Lord hath wrought in us merciful dispositions towards them that are in misery. Sixthly, In this very thing thou hast a mighty encouragement and help to faith; for mercy, it is thy own—thou mayest cast thyself upon mercy without presuming. Thou who hast a merciful, loving disposition to the saints in their distress, it is no presuming for thee to cast thyself upon the mercy of God in thy straits. When thou art about to believe, what is the stumbling-block that lies in the way? Saith such a poor soul, Shall such a wretched creature as I have mercy from God? Will the Lord ever look upon me? Lord, thou mayest answer thus: Thou hast wrought in me a disposition to shew mercy to them that are in misery. Lord, if there be but one drop of mercy in me to shew pity to others, is there not an infinite ocean of mercy in thee? Lord, is it not much easier for thee to shew mercy unto me, whenas by that little drop of mercy which I have thou hast gained upon my heart to shew mercy unto others? 
Here is a mighty help against temptations and discouragements from closing with the mercy of God; for that mercy which is in us is but a drop of the fountain that is in God. Our mercy, if it be true and spiritual, as you have heard it described before, it is but an effect and fruit of the mercy which is in God himself. Lord, it is more easy to thee to shew mercy to my soul than for me to pity them that are in misery. Lord, the misery that is in others requires more of us to relieve them than for thy majesty to relieve us. Lord, thou shalt part with nothing in shewing mercy to me. Thou art infinite in mercy, and thou partest with nothing; but when we shew mercy we part with something, though it be that we receive from God; and therefore it is easier with God to shew mercy. Lastly, Consider of this, That there is nothing holds men longer under bondage and terrors of conscience for sin than this very thing, than the rigid disposition that is in us towards them that are in misery. Therefore blessed are those that are merciful, that are of a gentle disposition, for this will be a special means to have those throbs and terrors of conscience that are inward in the soul to be removed. We are ready oftentimes to gather such conclusions as these are: Surely the Lord will never be merciful unto me. How can God shew mercy to such a wretch as I am, so stubborn and hard-hearted? I cannot shew mercy to others that are in misery; and surely how can the Lord forgive me, who have done more wrong to him than ever any other hath done to me, and yet I could not forgive them, nor pass by such wrongs myself? Well, thou that art merciful mayest think thus: Lord, must I have a heart to forgive to seven times, yea, to seventy times seven? And, Lord, canst not thou do more to me? Must I forgive till seventy times seven times in a day if my brother offend me? Canst not thou forgive much more? This is a mighty help to faith, and a mighty help to prayer, that the Lord would shew mercy to us in our straits, and help in the time of our troubles: Ps. cxii. 6, 7, 'Surely he shall not be moved for ever.' The way to be established, it is to be of a merciful spirit, and he shall not be afraid of evil tidings; let what times come that will come, he shall not fear them. The days may be clouded, and troubles may grow bigger, but he shall not be afraid of them. These evil tidings shall not affright the merciful man; and that is a famous text that we have in Isa. lviii. 7, 8, when he describes the manner of the fast both in the negative and the affirmative part. He shews what they did in their false humiliations, and then he comes to shew that if they did thus and thus, 'Then shall thy light break forth as the morning, and then shalt thou call, and the Lord shall answer; thou shalt cry, and he shall say, Here am I.' God will say, Hearken, there is a merciful man cries; there is one that is now in distress and cries to me. I must go down and hearken unto this man's request; I must go and hear what is the matter, it is a merciful man cries. Come, God will say, here am I, call upon me; what wouldest thou have? It is a merciful man that cries, I must go and relieve him. God will say to this soul, Here I am; and ver. 10, 'The light of such a man shall rise in obscurity, and his darkness be as the noonday;' and ver. 11, 'The Lord shall guide thee continually, and satisfy thy soul in drought, and make fat thy bones, and thou shalt be like a watered garden, and like a spring of water whose waters fail not.'
Thou complainest of deadness and barrenness of spirit: this is the reason, it may be thou profitest no more under the means, because thou art of a wretched, harsh, cruel disposition. But for the merciful, they may go unto God and plead their cause, and say, Lord, I was merciful unto my brethren in their straits, and my mercy it was in obedience to thy command, and therefore, Lord, hear me. To make application of this point. First, Here is abundance of comfort to those that are of merciful spirits. Whoever you are that are thus merciful, wherever you are, (though I fear there are but few; like the gleanings after the vintage, they stand but here and there even in great assemblies,) hearken unto your comfort. Hath the Lord drawn forth your hearts to melt at the sorrows of the saints abroad, though you have had plenty at home, yet you have been in bonds with them, and your comforts have not been so sweet to you as otherwise they would have been, because the church and people of God have been in such straits? You have been in sorrow; though you have enjoyed peace and plenty, this hath taken away the sweetness of your mercies. Know, if it be thus, take your comfort: First, Thou art eminent in that which is God's eminency; and this is a great excellency. And this is the best service thou canst do; thou canst not do a piece of service more acceptable to God than this thing is. Thou complainest thou canst not pray; thou art disquieted in thy spirits for thy deadness, and dulness, and indisposedness of heart; but hast thou a merciful heart? Know that this is most acceptable to God: Micah vi. 6, 7, 'Wherewithal shall I come before the Lord, and bow myself before the high God? shall I come before him with burnt offerings, with calves of a year old? will the Lord be pleased with thousands of rams, or with ten thousands of rivers of oil?' See what large proffers they made there to God; shall we come with these? 'Shall I give my first-born for my transgression, the fruit of my body for the sin of my soul?' No, saith God, none of these; I regard them not, I require them not, 'only to do justice, and to love mercy;' herewithal mayest thou come before God with boldness. It may be thou canst not bring rivers of oil, thou canst not bring such enlargements, such expressions, such fine placed words, yet canst thou bring a heart loving mercy; hast thou but a merciful heart, thou hast that which God delights in. Secondly, This is a most certain argument of thy election unto mercy who hast a merciful heart: Col. iii. 12, 'Brethren, as the elect of God'—what? 'put on bowels of mercy;' as the elect of God put on bowels of mercy. It is mercy that God gives thee means to relieve others, that God gives thee wherewithal to help them that are in distress. Know it is more to have a heart to shew mercy than an estate to shew mercy. It is a greater mercy to thee for God to make thee willing to shew mercy, than if thou hadst an estate and not willing to shew mercy. And therefore, wherein do you account your riches? In having the world at will, in being in great places, and to do what thou wilt, is here thy happiness? Dost thou account it thy riches to be great in the world, and to have places and rule? If this be thy happiness, know that thou hast little evidence to thy soul of thy election. But if thou wert truly gracious, thou wouldest say, Lord, I bless thee for my estate, for my parts and riches.
Ay, but Lord, I bless thee more for a heart to pity them that are in distress; I bless thee that thou hast given me a heart to shew mercy to them that are in misery; and I bless thee that I may be more serviceable than others by my estate to them which want such an estate. I therefore prize my estate because it doth help me to be more serviceable to God than others: this is as sure a sign of grace as can be. Suppose God hath given you an estate, but withal had left you to a penurious, covetous heart, know thy estate had been a curse to thee; but if thou hast a large estate, and a large heart to do good with thy estate, it is a good sign of true grace. Thirdly, Thou mayest with comfort expect an enlarged heart in prayer. You complain many times that your hearts are so straitened and dead; would you but examine, is not this the cause, you are so cruel to others? And when thou comest to any affliction, the Lord will remember, and remember what thou wouldest have done, James ii. 13. Thou wouldest pray better; the Lord will accept of that desire of thine to pray better: 'Mercy rejoiceth against judgment.' There is a scripture which, though you have often read, you do not, it may be, so well understand, or at leastwise it hath been carried contrary to what I conceive the meaning is. Many conceive this scripture to be meant of the mercy of God rejoicing against the judgment of the law and condemnation; but I take it for judgment here—judgment is coming, mercy strives against. And how the Scripture saith, 'That a man shall have judgment without mercy, that was cruel.' When any judgment comes to be executed upon a kingdom, upon families, the mercy of those towards such as were in misery shall cry, and the Lord will hear the cries of mercy in the time of judgment; the mercy which they had shewn to others shall plead for them. Let whatsoever judgments come, that soul may say, the Lord intends mercy to me in it; this merciful man shall be delivered. Though there is a storm abroad in the land, and miseries in all places, yet the Lord will remember this man; he was merciful to them that were in misery, and I will regard this man; his mercy shall come up into remembrance, and say, I am above judgment. A merciful man, he may rejoice in the midst of judgment as being above judgment. The Lord hath discovered himself to me in making me of a merciful disposition to others; therefore, now the judgments of God are abroad, I question not but mercy will triumph over judgment. For me, I shall be preserved; my mercy will plead for me that judgment shall not take hold of me, because, when others were in misery, I was pitiful unto them: 'And therefore, blessed are the merciful, for they shall obtain mercy.' In their troubles the merciful man shall triumph and boast over judgment. Judgment shall not take hold of him, because his mercy shall be remembered in the day of his trouble.
A method of disinfection of foodstuff involving the use of hydrogen peroxide in combination with anti-microbial agents selected from the group consisting of benzoic acid, and phosphoric acid, at low concentration, to substantially reduce the microbial count in food-related applications.

6 Claims, No Drawings

METHOD OF DISINFECTION

FIELD OF THE INVENTION

The present invention relates to a method of disinfection in food processing involving the use of hydrogen peroxide in combination with anti-microbial agents selected from the group consisting of benzoic acid and phosphoric acid to reduce the microbial count in food-related applications.

BACKGROUND OF THE INVENTION

Disinfection is a worldwide problem within the foodstuff industry and numerous efforts have been made with additives to attempt to reduce the microbial load on fresh muscle foods. Some of the techniques used have been chilling the products or dipping them in antimicrobial compounds. However, the use of antimicrobial compounds has been limited due to questions of efficacy as well as cost. U.S. Pat. No. 3,792,177 to Nakatani et al. relates to a method for improving the quality of foodstuff by the addition to the foodstuff of a mixture of a water soluble metal phosphate-hydrogen peroxide adduct and a water soluble acid metal phosphate, the ratio of said adduct to said metal phosphate being about 1 part by weight to from about 0.5 to about 9 parts by weight. U.S. Pat. No. 4,915,955 to J. Gomori relates to a process for preparing a storage stable concentrate comprising admixing (i) an inorganic acid such as 75% phosphoric acid, 65% aqueous nitric acid or 69% aqueous sulphuric acid in water with (ii) a silver composition selected from silver salts and silver salt complexes and (iii) an organic acid stabilizer selected from e.g. tartaric acid and/or citric acid. EP B1 87 049 relates to a disinfectant for hospitals, schools, breweries, laundries, etc. comprising a composition comprising 1–15% H₂O₂, 1–30% phosphorous compound, 0.1–3% metal chelating agent, 0–20% surfactant and the rest water. CA 1 146 851 relates to a composition for disinfection of dental and medical equipment by the use of a composition comprising H₂O₂, Tetronic 908 and H₃PO₄, benzotriazole, Acitrol and deionized water. U.S. Pat. No. 5,264,229 relates to a commercial process for extending the shelf life of poultry and seafood by introducing food grade H₂O₂ and food grade surface active agents into the chiller water to wash off bacteria on the surface of the food product. The agents are alkylaryl sulfonates, sulfates, sulfonates of oils and fatty acids, sulfates of alcohols and sulfosuccinates. Chen et al. reported in 1973 that immersion of fresh poultry meat into ice water containing polyphosphates extends the shelf life of the meat and that the immersion of fresh chicken parts in a 3% polyphosphate solution controls gram-positive micrococci and staphylococci, but that certain gram-negative organisms tolerated the addition of 1–6% phosphate. Foster and Mead reported in 1976 that Salmonella growth in minced chicken breast meat was inhibited by using 0.35% polyphosphate solution followed by storage at −2° and −20° C. The term “shelf-life” usually refers to the period of quality deterioration through decreasing nutritional value, color changes, development of off-flavors, and/or textural changes occurring during storage. Microbial spoilage that results in physical and chemical changes is one of the principal factors responsible for the relatively short shelf-life of muscle foods.
Said prior art, however, says nothing about the findings which constitute the basis for the present invention, namely that a combination of hydrogen peroxide and an antimicrobial agent selected from benzoic acid and phosphoric acid, and combinations thereof, at low concentration effectively reduces the microbial count on foodstuff, especially fresh muscle foods.

DESCRIPTION OF THE INVENTION

The present inventor has surprisingly found a synergistic antimicrobial effect when hydrogen peroxide is used in combination with antimicrobial compounds selected from the group consisting of benzoic acid and phosphoric acid and combinations thereof. Said findings permit the use of low concentrations of hydrogen peroxide and the selected antimicrobial compounds and, due to the strong germicidal synergistic effect obtained, allow the application dosage to be cut down drastically. The consequences of the reduced dosages are e.g. a better effect, a better acceptance of hydrogen peroxide as sanitizing agent as well as decreased costs for preservation. The antimicrobial combination according to the invention can be applied to the foodstuff by spraying, dipping, painting or in any other way known to the man skilled in the art. The antimicrobial composition is a mixture obtained by mixing the compounds as included. The temperature as used during the application process is preferably from +25° F. (−4° C.) to +90° F. (+32° C.), most preferably ice water temperature. The concentration of hydrogen peroxide in the composition is preferably between 0.001 and 0.1%, most preferably between 0.005 and 0.035%, and the concentration of the other anti-bacterial agent is preferably between 0.001 and 0.5%, most preferably between 0.005 and 0.1%. Phosphoric acid is the only inorganic acid that is widely used as a food acidulant, and it is a comparatively cheap food grade acidulant. It is also a strong acid giving a low pH. Phosphoric acid and its salts are categorized as GRAS (generally recognized as safe) compounds by the FDA. It has been indicated that enzymatic activity can be obstructed at pH values under 3.0 with the addition of acid. Phosphoric acid used in food manufacturing, such as in carbonated and non-carbonated drinks, can lower the pH values to 2.5–3.3. Phosphoric acid, besides being used in soft drinks, is also used in cheeses and brewing products to adjust the pH. Benzoic acid is an aromatic carboxylic acid with the formula C₆H₅COOH. Benzoic acid is known as a preservative of foods. By the present invention it has, however, surprisingly been shown experimentally that the combination of hydrogen peroxide and phosphoric acid or benzoic acid gives superior disinfection of food compared to combinations of hydrogen peroxide with phosphate or pyrophosphate. When applied to chicken parts or poultry meat, a hydrogen peroxide and phosphoric acid combination as well as a hydrogen peroxide and benzoic acid combination demonstrated superior antimicrobial effects compared with combinations comprising hydrogen peroxide in combination with L-ascorbic acid, sodium pyrophosphate, sodium tripolyphosphate, or trisodium phosphate, as shown in Table 2. The synergistic antimicrobial effects of hydrogen peroxide and benzoic acid (Fisher Scientific Company, A-65), as well as hydrogen peroxide and phosphoric acid (85%, Fisher Scientific Company, A-242), on poultry chilling water microorganisms were tested.
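The working strengths above lie far below the stock strengths named in the description (35% hydrogen peroxide, 85% phosphoric acid), so preparing a treatment bath reduces to the standard dilution relation C₁V₁ = C₂V₂. The following is a minimal illustrative sketch, not part of the patent text: the helper name and the batch volume are invented for the example.

```python
# Dilution arithmetic for preparing the working solutions described above.
# Stock strengths (35% H2O2, 85% H3PO4) and target concentrations come from
# the description; the function name and batch size are illustrative only.

def stock_volume_ml(stock_pct: float, target_pct: float, batch_ml: float) -> float:
    """Volume of stock needed so that stock_pct * V1 = target_pct * V2."""
    if not 0 < target_pct < stock_pct:
        raise ValueError("target concentration must lie below stock strength")
    return target_pct * batch_ml / stock_pct

if __name__ == "__main__":
    batch = 10_000.0  # ml of treatment solution, illustrative batch size
    for name, stock, target in [
        ("hydrogen peroxide", 35.0, 0.035),
        ("phosphoric acid", 85.0, 0.085),
    ]:
        v = stock_volume_ml(stock, target, batch)
        print(f"{name}: {v:.1f} ml of {stock}% stock per {batch:.0f} ml batch")
```

For instance, a 0.035% working solution of hydrogen peroxide takes 10 ml of the 35% stock per 10 litres of bath.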
The poultry meat microbial suspensions containing approximately $10^4$ CFU were prepared by mixing 1 ml of the poultry wash water with a form of nutrient agar to allow bacteria colonies to form on the plates for later "colony forming unit" (CFU) counts. In addition, a factor of ten (10) serial dilutions was also made for each test in the event that bacterial formation might be "too numerous to count" (TNTC). Lower log numbers of bacteria found by the plate counting method generally indicate a greater degree of food disinfection, and a higher potential for increased shelf life due to the reduction of slime-forming microorganisms. Although there is no current industry standard as to an acceptable log number of bacteria found on food for human consumption, such results are useful in the analysis of food disinfection data for comparison purposes. The bacterial reduction effects of hydrogen peroxide and the two other anti-microbial agents were studied, both individually and in combination. The antimicrobial agents used in this study were selected due to their status as accepted GRAS compounds from the Food Additives Handbook published by CRC. Hydrogen peroxide (35%) and the selected additives were used individually or in combinations. Microbial suspensions (50 ml) were randomly assigned to one of the following treatments: (1) non-treated controls; (2) hydrogen peroxide at 0.035%; (3) hydrogen peroxide combined with selected additives. The length of the treatment time was 30 min. The final concentration of the benzoic acid in the microflora suspensions was 0.1% and the concentration of the phosphoric acid was 0.085%. Total plate counts were conducted immediately after each treatment. Serial dilutions of the mixture were plated and incubated at 30° C. for 48 hrs. The results of the tests are presented in the attached data tables. Our research data indicated that there is a synergistic effect between low concentrations of hydrogen peroxide and the antimicrobial compounds as used according to the invention. This strong germicidal synergistic effect on food microorganisms could drastically cut the antimicrobial compound application dosage, as well as the cost of preservation. The germicidal effect of hydrogen peroxide has been well recognized. Our data has indicated that the germicidal effect of hydrogen peroxide can be improved by the synergistic effect with the antimicrobial compounds according to the invention. Therefore, the acceptance of hydrogen peroxide as a sanitizing agent by consumers could be enhanced.

The synergistic effect of hydrogen peroxide and phosphoric acid on shelf-life of chicken breast fillets and leg quarters

Commercial type broiler carcasses cut into quarters were obtained from the Mississippi State University poultry processing plant. The cut-up parts were randomly assigned to one of two treatments as follows: (1) ice water and broiler parts only (4:1 ratio) as a control group, (2) 0.17% phosphoric acid and 0.07% hydrogen peroxide added to broiler parts (4:1 ratio). Each treatment consisted of two replications. Each replication contained 5 leg quarters and 5 breast fillets. The amount of solution used was calculated on a weight basis as four times that of the chicken parts. Samples were kept submerged in the well mixed solutions for 30 min. Treated samples were packed in plastic poultry bags and placed in a 40° F. refrigerator. Total plate counts were conducted at day 0 and repeated every day through 20 days of storage.
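The plate counts reported in the tables below are log₁₀ CFU values derived from such ten-fold serial dilutions: a count of c colonies on the 10⁻ⁿ plate back-calculates to c·10ⁿ CFU per ml plated. As a rough illustration of that bookkeeping (not taken from the patent: the plated volume and the countable-colony window are conventional assumptions), consider the following sketch.

```python
# Plate-count bookkeeping: colonies counted at a ten-fold serial dilution
# are converted to log10 CFU/ml, and "too numerous to count" (TNTC) plates
# are skipped in favour of a higher dilution. Plate volume and the
# countable window are assumptions, not values from the patent.

import math

PLATE_ML = 1.0          # volume plated, assumed
COUNTABLE = (25, 250)   # conventional countable window, assumed

def log_cfu_per_ml(counts_by_dilution: dict[int, int | None]) -> float:
    """Map dilution exponent n (meaning 10**-n) to colony count, or None
    when the plate was TNTC; return log10 CFU/ml from the first usable plate."""
    for n in sorted(counts_by_dilution):
        c = counts_by_dilution[n]
        if c is None:               # TNTC: move to the next dilution
            continue
        if COUNTABLE[0] <= c <= COUNTABLE[1]:
            return math.log10(c * 10**n / PLATE_ML)
    raise ValueError("no plate within the countable range")

# e.g. 10^-2 plate TNTC, 10^-3 plate with 130 colonies -> ~5.11 log CFU/ml
print(round(log_cfu_per_ml({2: None, 3: 130}), 2))
```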
Measurement and Analysis - Total Plate Counts (TPC)

A. For poultry carcasses: Equal quantities, by weight, of sterile 0.1% peptone solution and poultry carcasses were placed in a sterile plastic bag and shaken vigorously for 1 min. Using a "pour-plate" method (APHA, 1976), serial dilutions of the samples were plated onto standard plate count agar (Difco) and incubated at 30° C. for 48 hr. The colony numbers were averaged from duplicate plates and reported as log CFU/g sample. The results of the tests are presented in the attached data tables.

### TABLE 1

*Synergistic Effects of Hydrogen Peroxide and the Antimicrobial Compounds According to the Invention on TPC of Poultry Meat Wash Water*

| Treatment | R1 | R2 | R3 | R4 | R5 | Overall Mean |
|----------------------------|------|------|------|------|------|--------------|
| Control | 4.59 | 3.78 | 3.85 | 4.22 | 4.62 | 4.21 B |
| H₂O₂ | 3.30 | 2.80 | 2.82 | 2.77 | 2.98 | 2.93 A |
| H₂O₂ | 3.30 | 2.80 | 2.82 | 2.77 | 2.98 | 2.93 B |
| Benzoic acid | 1.70 | 2.34 | 2.65 | 1.97 | 3.61 | 2.45 B |
| Benzoic acid + H₂O₂ | ND | −0.30 | ND | 0.18 | −0.02 | A |
| H₂O₂ | 3.30 | 2.80 | 2.82 | 2.77 | 2.98 | 2.93 C |
| Phosphoric acid | 2.24 | 2.08 | 1.79 | 1.51 | 2.33 | 1.99 B |
| Phosphoric acid + H₂O₂ | ND | ND | 0.30 | ND | 0.18 | 0.10 A |

1. The chicken wash water has been kept in a refrigerator for more than 30 days.
2. Each mean represents the mean of 2 observations.
3. A–C: means in the same column not followed by the same letter are significantly different (P < .05).
4. ND = non-detectable.

### TABLE 2

**COMPARATIVE TESTS**

*Effects of Hydrogen Peroxide and Commonly Used Antimicrobial Compounds on TPC of Poultry Meat Wash Water*

| Treatment | R1 | R2 | R3 | R4 | R5 | Overall Mean |
|------------------------------------------------|------|------|------|------|------|--------------|
| Control | 4.59 | 3.78 | 3.85 | 4.22 | 4.62 | 4.21 B |
| H₂O₂ | 3.30 | 2.80 | 2.82 | 2.77 | 2.98 | 2.93 A |
| L-ascorbic acid | 3.49 | 3.37 | 3.84 | 3.62 | 4.58 | 3.78 B |
| L-ascorbic acid + H₂O₂ | 1.89 | 0.48 | 1.02 | −0.30 | 3.76 | 1.37 B |
| Sodium pyrophosphate | 3.30 | 2.80 | 2.82 | 2.77 | 2.98 | 2.93 B |
| Sodium pyrophosphate + H₂O₂ | 3.43 | 3.14 | 3.67 | ND | 4.55 | 2.96 B |
| Sodium pyrophosphate + H₂O₂ | 0.65 | −0.30 | 0.60 | 0.30 | 1.86 | 0.62 A |
| Sodium tripolyphosphate | 2.10² | 1.60² | 2.27² | 2.00² | 3.37² | 2.27 B |
| Sodium tripolyphosphate | 3.30 | 2.80 | 2.82 | 2.77 | 2.98 | 2.93 BC |
| Sodium tripolyphosphate | 3.61 | 3.33 | 3.76 | 3.25 | 4.63 | 3.72 C |
| Sodium tripolyphosphate + H₂O₂ | ND | 0.18 | 0.40 | 0.48 | 2.26 | 0.72 A |
| Trisodium phosphate | ND | 2.11² | 2.53² | 2.35² | 3.86² | 2.17 B |
| Trisodium phosphate | 3.30 | 2.80 | 2.82 | 2.77 | 2.98 | 2.93 B |
| Trisodium phosphate + H₂O₂ | 2.86 | 2.83 | 3.35 | 2.71 | 4.23 | 3.20 B |
| Trisodium phosphate + H₂O₂ | 1.86 | 0.65 | 0.54 | −0.30 | 1.41 | 0.83 A |

1. The chicken wash water has been kept in a refrigerator for more than 30 days.
2. When diluted to lower concentration (1:10), sodium pyrophosphate + H₂O₂ and sodium tripolyphosphate have higher numbers of bacteria.
3. Each mean represents the mean of 2 observations.
4. A–C: means in the same column not followed by the same letter are significantly different (P < .05).
5. ND = non-detectable.

I claim:

1.
A method of disinfection of foodstuff comprising treating a foodstuff with a microbial count reducing amount of a composition comprising hydrogen peroxide in combination with an anti-microbial agent selected from the group consisting of benzoic acid, and phosphoric acid, wherein the concentration of hydrogen peroxide in the composition is from 0.005 to 0.035%, and the concentration of the other anti-microbial agent is from 0.005 to 0.1%. 2. A method as claimed in claim 1 comprising the use of a composition consisting essentially of hydrogen peroxide and phosphoric acid. 3. A method as claimed in claim 1 comprising the use of a composition consisting essentially of hydrogen peroxide and benzoic acid. 4. A method as claimed in claim 1 wherein the foodstuff treated is a fresh muscle food of poultry, fish, or other seafood products. 5. A method as claimed in claim 1 wherein the concentration of hydrogen peroxide is 0.035%. 6. A method as claimed in claim 1 wherein the concentration of the other anti-microbial agent(s) is 0.1%.
THE SYLLABLE STRUCTURE IN EUROPEAN PORTUGUESE*
(A Estrutura da Sílaba em Português Europeu)

Maria Helena MATEUS (Universidade de Lisboa - Faculdade de Letras / ILTEC)
Ernesto D'ANDRADE (Universidade de Lisboa - Faculdade de Letras)

ABSTRACT: The goal of this paper is to discuss the internal structure of the syllable in European Portuguese and to propose an algorithm for base syllabification. Due to the analysis of consonant clusters in onset position and the occurrence of epenthetic vowels, and considering the variation of the vowels in word initial position that occupy the syllable nucleus without an onset at the phonetic level, we assume that, in European Portuguese, the syllable is always constituted by an onset and a rhyme, even though one of these constituents (but not both) may be empty, that is, one of them may have no phonetic realisation.

RESUMO: O objetivo deste artigo é o de discutir a estrutura interna da sílaba em Português Europeu e o de propor um algoritmo para a silabificação de base. Tendo em conta a análise dos grupos de consoantes que ocupam o lugar de ataque e a possibilidade de existência de vogais epentéticas que desfazem alguns desses grupos, e considerando, ainda, a variação de vogais em posição inicial de palavra que constituem núcleo de sílaba sem ataque no nível fonético, apresenta-se a hipótese de que a sílaba, em Português Europeu, é sempre constituída por um ataque e por uma rima, mesmo que um desses constituintes (mas não os dois) seja vazio. Ou seja, um dos dois constituintes pode não ter realização fonética.

Key Words: Syllable; Onset; Empty nucleus; Base syllabification; Consonant cluster.
Palavras-Chave: Sílaba; Ataque; Núcleo vazio; Silabificação de base; Grupo de consoantes.

* This paper has been presented in the colloquium organised by The Oxford University Press about The Phonology of the World's Languages: The Syllable (Pézénas, France, June 1996).

1. Data

1.1. Consonant clusters

In European Portuguese (henceforth, EP), we find many sequences of consonants in word-initial and word-internal position. Examples are in (1)-(3).

(1) (a)
[pn] - pneu 'tyre'
[gn] - gnomo 'gnome'
[ps] - psicologia 'psychology'
(b)
[bn] - obnóxio 'obnoxious'
[bs] - absurdo 'absurd'
[dm] - admirar 'to admire'
[bv] - óbvio 'obvious'
[tm] - ritmo 'rhythm'
[bʒ] - abjurar 'to abjure'
[gm] - estigma 'stigma'
[tz] - quartzo 'quartz'
[tn] - étnico 'ethnic'
[ks] - axioma 'axiom'
[pt] - captar 'to capture'
[dv] - advertir 'to warn'
[kt] - pacto 'pact'
[bt] - obter 'to obtain'
[mn] - amnésia 'amnesia'
[dk] - adquirir 'to acquire'
[ft] - afta 'thrush'

(2) (a)
[pr]¹ - prato 'dish'
[br] - branco 'white'
[tr] - trapo 'rag'
[dr] - droga 'drug'
[kr] - cravo 'carnation'
[gr] - graça 'grace'
[pl] - plano 'plan'
[bl] - ablução 'ablution'
[tl] - atleta 'athlete'
[kl] - claro 'bright'
[gl] - glande 'glans'
(b)
[fr] - frito 'fried'
[vr] - palavra 'word'
[fl] - flor 'flower'

¹ Traditional representation of the tap in Portuguese is [r]. We use the IPA [ɾ] that corresponds to the word-internal and word-final single r.

[i] deletion², which frequently occurs in colloquial EP in unstressed position, gives rise to other consonant sequences (see (3)).
(3)
[st] - estar 'to be'
[spr] - esperar 'to wait'
[ds] - decifrar 'to decipher'
[sp] - separar 'to separate'
[dvd] - devedor 'debtor'
[mrs] - merecer 'to deserve'
[dʃpg] - despegar 'to take away'
[dʃprz] - desprezar 'to despise'

The examples given in (3), caused by the deletion of [i] in colloquial EP, show sequences of three consonants in word-initial position (e.g. devedor [dvdór] - plosive + fricative + plosive), four consonants (e.g. despegar [dʃpgár] - plosive + fricative + plosive + plosive) and five consonants (e.g. desprezar [dʃprzár]): sequences of different consonants are thus very frequent in EP at the phonetic level. Unlike those of (2a) and (2b), which are allowed onset clusters, the sequences of consonants exemplified in (1) do not belong to the same syllable. This statement is justified by empirical arguments. For instance, speakers have difficulties in assigning the consonants in (1), either one or both of them, to the coda of the first syllable or to the onset of the second one. This is true when naïve speakers have to break a word into syllables (see Andrade & Viana, 1993b), as for instance when they hesitate between *ad*-mirar and *a*-dmirar.

² The traditional representation of this neuter vowel is [ə], like the French schwa. However, contemporary studies in Portuguese phonetics and phonology show that [i] is a more adequate representation, either because of its phonetic characteristics (it is a high vowel) or because of phonological processes in Portuguese grammar (see A. Andrade (1992) Reflexões sobre o 'e mudo' em Português europeu. Unpublished. Lisboa: CLUL).

Furthermore, child productions during language acquisition and misspellings show an inserted vowel between the consonants (e.g. [pinéw] for *pneu* [pnéw] 'tyre' or [áfite] for *afta* [áfte] 'thrush'). Moreover, in child language we often find deletion of the second consonant in allowed onset clusters (e.g. [pátu] for *prato* 'dish' or [bɐ̃ku] for *branco* 'white') but we never find deletion of the second element in disallowed sequences like those included in (1); in other languages, on the contrary, we find the loss of the first segment in this last kind of sequence, as in *neumático* (Spanish 'tyre') or in the pronunciation of *psychology* in English. Finally, an argument that reinforces our statement that the consonant clusters in (1) do not belong to the same syllable is the fact that, in most dialects of Brazilian Portuguese (henceforth BP), they constitute two syllables due to the insertion of an epenthetic vowel, mostly [i], as exemplified in (4).

(4)
pneu [pi]neu
gnomo [gi]nomo
psicologia [pi]sicologia
absurdo a[bi]surdo
pacto pa[ki]to
afta a[fi]ta

Notice that the consonant clusters in (2), which are allowed onset clusters in Portuguese, never show this inserted vowel in BP. So, for instance, *[pi]rato, *[bi]ranco, *pala[vi]ra are unacceptable (needless to say, the consonant sequences of the words in (3) do not occur in BP as the vowel [i] does not exist in this variety). All these sequences of consonants are specific to EP and are due to phonological processes that do not apply in BP. The differences observed at the phonetic level between EP and BP caused by the existence of these consonant clusters are certainly at the origin of the distinct rhythms of the two varieties.
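As a toy illustration of the BP repair pattern in (4) (not the authors' formalism: the orthography-like symbol strings and the use of 'k' for the [k] of pacto are simplifications), one can insert [i] after any consonant that fails to combine with the next one into one of the licit onsets listed in (2):

```python
# Toy rendering of the BP epenthesis in (4): an [i] is inserted after a
# consonant that cannot form a licit onset with the following consonant.
# The onset inventory paraphrases (2); strings are orthography-like.

LICIT_ONSETS = {"pr", "br", "tr", "dr", "kr", "gr",
                "pl", "bl", "tl", "kl", "gl",
                "fr", "vr", "fl"}
VOWELS = set("aeiou")

def bp_epenthesis(word: str) -> str:
    out = []
    for ch, nxt in zip(word, word[1:] + " "):
        out.append(ch)
        if ch not in VOWELS and nxt != " " and nxt not in VOWELS \
                and ch + nxt not in LICIT_ONSETS:
            out.append("i")
    return "".join(out)

for w in ["pneu", "psikologia", "pakto", "afta", "prato"]:
    print(w, "->", bp_epenthesis(w))
# pneu -> pineu, pakto -> pakito, afta -> afita; prato is left intact.
```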
Concerning the examples in (2), the consonant sequences - plosive plus liquid and fricative plus liquid - are typical syllable onsets in Portuguese, as in the majority of Romance languages, even though clusters with a plosive are much more frequent than those with a fricative, and the same holds for sequences ending in a tap versus those ending in a lateral. These clusters are in accordance with the Sonority Principle, which states that the sonority of the segments that constitute the syllable increases from the beginning to the nucleus and decreases to the end. The proposals about the hierarchy of the segments that constitute the sonority scale are broadly consensual in establishing the following decreasing sonority: vowels (low, medium, high) - glides - liquids - nasals - fricatives - plosives. It is worth noting, however, that the definition of this principle and its relation with the sonority scale is not sufficient to establish the possible sequences for Portuguese syllable onsets. Restrictions on the occurrence of some consonant clusters in onset position occur in all languages: they are language-specific and they are also related to the distance between the members of the sonority scale. This assumption constitutes the basis for the Dissimilarity Condition, which states that it is necessary to postulate, for each language, the value of the permitted sonority difference between two segments in a sequence belonging to the same syllable. Quantifying this difference implies indexation of the sonority scale (as, for instance, that proposed by Selkirk, 1984). A tentative indexation for Portuguese has been presented by Vigário & Falé (1993), who also suggested that, in Portuguese, sequential segments in the same syllable must keep a certain minimal difference in sonority (see the sketch below). Concerning consonant clusters, only plosives or fricatives + liquids have the allowed distance. Thus, adjacent members on the sonority scale can never constitute an onset cluster. According to Harris (1983), the non-adjacency requirement of the two segments represents the universally unmarked case for syllable constituency, and thus Portuguese grammar has no costs in this specific case. It is necessary to recall that the Sonority Principle and the Dissimilarity Condition are intended primarily as applying to base syllabification, as shown by many violations of these principles at the phonetic level in different languages. To explain this apparent violation of the Sonority Principle and the Dissimilarity Condition, we hypothesise, then, the existence of an empty nucleus between the consonants belonging to the words in (1), and we propose that this nucleus is not filled at the phonetic level in EP. This means that, in base syllabification, all consonant clusters are licensed as syllable onsets (in the sense of Goldsmith's (1990) syllable licensing).
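A minimal sketch of how such an indexed scale licenses onsets follows. The integer values and the threshold are illustrative assumptions chosen to reproduce the pattern described above (plosive or fricative + liquid passes; anything closer on the scale fails); they are not Vigário & Falé's actual indexation.

```python
# Illustrative sonority indexation and Dissimilarity check for
# two-consonant onsets. Values are assumptions tuned to the pattern in
# the text, not the published indexation.

SONORITY = {**dict.fromkeys("pbtdkg", 1),   # plosives
            **dict.fromkeys("fvsz", 2),     # fricatives
            **dict.fromkeys("mn", 3),       # nasals
            **dict.fromkeys("lr", 5),       # liquids
            **dict.fromkeys("jw", 6),       # glides
            **dict.fromkeys("aeiou", 7)}    # vowels

MIN_DISTANCE = 3  # assumed threshold of the Dissimilarity Condition

def licit_onset(c1: str, c2: str) -> bool:
    """Sonority must rise by at least MIN_DISTANCE within the onset."""
    return SONORITY[c2] - SONORITY[c1] >= MIN_DISTANCE

for cluster in ["pr", "fl", "vr", "pn", "ft", "mn"]:
    print(cluster, "licit onset:", licit_onset(*cluster))
# pr, fl, vr pass; pn, ft, mn (the (1)-type clusters) fail.
```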
1.2. Vowels and diphthongs

In Portuguese there are no syllabic consonants. The rhymes of Portuguese syllables always have a nuclear vowel which may be followed by a glide at the phonetic level, thus constituting a falling diphthong. Falling diphthongs may occur in stressed, pre-stressed and post-stressed syllables.

(5) (a) Stressed
[éj] - queixa 'complaint'
[ɛ́j] - papéis 'papers'
[áj] - pai 'father'
[ɔ́j] - herói 'hero'
[ój] - boi 'ox'
[új] - azuis 'blue (pl.)'
[íw] - viu '(s/he) saw'
[éw] - deus 'god'
[ɛ́w] - véu 'veil'
[áw] - pauta 'register'
(b) Pre-stressed
[ɛj] - queixume 'complaint'
[aj] - ensaiar 'to essay'
[oj] - boiada 'drove'
[uj] - cuidado 'care'
[ew] - endeusar 'to divinise'
[aw] - pautar 'to rule'
(c) Post-stressed
[ɛj] - fáceis 'easy (pl.)'

Nasal diphthongs are quite frequent in Portuguese due to the fact, among others, that they appear in every third person plural of verb forms. Nevertheless, they only occur in word-final syllables, either stressed or post-stressed.³

³ There is a small number of words in Portuguese having a diphthong in the penultimate stressed syllable: cãibra [kãjbra] 'cramp' and dialectal cãibo, cãibas, cãibro 'different pieces of the oxen-cart'. Because of their exceptionality, cãibra is often pronounced as [kãbra], without the diphthong, and the others have alternating forms without the glide. The word muito [mũjtu] is the only one that presents the [ũj] diphthong, and that is the reason why it is included in (6). Also, some words that can be reanalysed by speakers as compounds (like bendito [bẽj+ditu] 'benedict' or Benfica [bẽj-fika]) and the very frequent word também can be pronounced with a diphthong in the penultimate syllable.

(6) (a) Stressed
[ɐ̃j] - mãe 'mother'
[ẽj] - refém 'hostage'
[õj] - compões '(you) compose'
[ũj] - muito 'much'
[ɐ̃w] - mão 'hand'
(b) Post-stressed
[ẽj] - prendem '(they) arrest'
[ɐ̃w] - falam '(they) talk'
[ẽj] - homem 'man'
[ɐ̃w] - sótãos 'garrets'

In most of the falling diphthongs, the phonetic glide is, phonologically, an underspecified vowel that has to be lexically marked as a trough (see Andrade & Laks, 1991). Both elements of these diphthongs - either oral or nasal - belong to the syllable nucleus. An argument to sustain this statement is the fact that, in nasal diphthongs, both segments are nasalised by the projection of the nasal autosegment to the nucleus. The underspecified fricative /S/ is the only consonant that can belong to a rhyme having a diphthong. In (7) we see the syllabic representation of the words má [má] 'bad (fem.)', pai [páj] 'father' and mãe [mɐ̃j] 'mother'.

(7) [syllable trees for má, pai and mãe; diagrams not reproduced]

1.3. Sequences of glides + vowel at the phonetic level

Sequences of glide and vowel at the phonetic level are included in (8):

(8) (a) Stressed
[jé] - frieza 'coldness'
[jɛ́] - viés 'bias'
[já] - real 'royal/real'
[jɐ̃] - criança 'kid'
[jɔ́] - pior 'worst'
[jó] - mioma 'myoma'
[jú] - miúdo 'kid'
[wí] - suíno 'pig'
[wé] - roer 'to gnaw'
[wɛ́] - cuecas 'pants'
[wá] - voar 'to fly'
[wó] - suor 'sweat'
[wó] - voou 's/he flew'
[wẽ] - coentros 'coriander'
(b) Unstressed
[je] - realeza 'royalty'
[jɐ̃] - adiantar 'to advance'
[ju] - miudeza 'minuteness'
[wi] - suinicultura 'pig breeding'
[wɐ] - voador 'flyer'

The same glides can precede diphthongs:

(9)
[jáj] - criais '(you) create'
[jɛ́j] - fiéis 'faithful (pl.)'
[jɐ̃w] - leão 'lion'
[wáj] - recuai 'put back (imperat.)'
[wɛ́j] - cruéis 'cruel (pl.)'
[wéj] - voei '(I) flew'

Phonetic glides preceding vowels raise more problems even for the phonetic description.
When we spell out words like viés 'bias', suor 'perspiration', farmácia 'pharmacy' (see (8)), the [+high] segment preceding a [-high] vowel, either stressed or unstressed, is perceived by Portuguese speakers as syllabic, that is, as a vowel and not a glide. This is confirmed, for instance, by the traditional classification of the word farmácia as a proparoxytone, which indicates that two syllables are counted after the stressed one. Within a structuralist approach, these segments are analysed as vowels (e.g. p[ia]r 'to cheep' / p[í]o 'cheep', s[ua]r 'to sweat' / s[ú]o 'I sweat'). In the SPE framework, these segments are underlying vowels (cf. Mateus, 1975). In colloquial Portuguese, however, these two vowels, /i/ and /u/, when unstressed and before a vowel, have a reduced duration and intensity, and they can be perceived by the speakers as belonging to the same syllable as the following vowel. This variation is common to a large number of languages. Consequently, in casual speech glides may be followed by any vowel (with some phonetic restrictions). The examples in (8) and (9) show that, when these phonetic glides occur before either a nasal vowel or a nasal diphthong, they are not nasalised (cf. [jɐ̃] - criança 'kid' and [jɐ̃w] - leão)⁴. This is enough evidence to consider them as independent of the syllabic rhyme (see Andrade & Viana, 1993a, and also Mateus, 1993), and to allow us to interpret them as vowels. Thus, even if they are perceived at the phonetic level as glides by the speakers and constitute a rising diphthong, they are syllable nuclei at the base level. These sequences of glide and vowel at the phonetic level are thus very different from the true rising diphthongs existing in other languages, where glides are associated with the following vowel and integrate the rhyme (see for instance Harris, 1983, for Spanish).

1.4. Codas

The consonants /R/, /L/ and /S/⁵ are usually considered the only ones that can occur in the Portuguese syllable coda. They are underspecified autosegments with different realisations. Examples are in (10a) and (10b).

(10) (a)
par /paR/ [pár] 'pair'
mal /maL/ [mál] 'evil'
más /maS/ [máʃ] 'bad (fem. pl.)'

⁴ According to Luís Carlos Cagliari, in BP the glide preceding a nasal vowel is nasalised in many cases and dialects.
⁵ We use capital letters to indicate underspecified segments.

(b)
parte /paRte/ [párti] 'part'
falta /faLta/ [fálte] 'fault'
peste /peSte/ [péʃti] 'plague'
mesmo /meSmo/ [méʒmu] 'same'

There is enough evidence to consider these three segments as the only ones that can occur in syllable coda:
- [ɾ] is not allowed word-initially; [l] never begins a word if followed by another consonant;
- [ʃ] or [ʒ], resulting from the phonetic realisation of /S/ followed by another consonant, show voicing assimilation; they may also be placed at the beginning of the word without being preceded by any vowel at the phonetic level (cf. esvaído and esperado in (11)).

(11)
esvaído [ʒveídu] 'fainted'
esperado [spirádu] 'expected'
inesperado [iníspirádu] 'unexpected'
feliz [filíʃ] 'happy'
infeliz [ĩ-filíʃ] 'unhappy'

In this case, however, /S/ is preceded by an underlying vowel, and the existence of this vowel is attested by words like inesperado (resulting from syllabification of the word esperado when the prefix /iN/ is added): the underlying vowel is the nucleus of the first syllable; the nasal autosegment of the prefix /iN/ fills the onset of this syllable and is phonetically manifested as a nasal consonant. On the other hand, if the word begins with a consonant (like feliz, see (11)), the nasal autosegment of the prefix will be associated with its nucleus, as happens in infeliz [ĩ-filíʃ], and the nasality will spread over the vowel. See the representations in (12) and (13). In sum, the three segments /R/, /L/ and /S/ are the only licensed consonants in Portuguese codas. As in most languages (cf. Goldsmith, 1990), the consonants licensed in coda position are fewer than those that can occur in the first half of the syllable; in Portuguese their number is reduced to 3. The realisation of these underspecified segments is the result of a phonological process sensitive to the phonetic context.
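The context-sensitive realisation of coda /S/ just described can be stated as a tiny rule. A sketch under stated assumptions: the segment classes are rough orthography-like sets, and the prevocalic [z] case (standard EP sandhi, as in as aves) is added for completeness although it is not discussed above.

```python
# Realisation of the underspecified coda /S/: voicing assimilation to a
# following consonant ([ʃ] before voiceless, [ʒ] before voiced), [ʃ]
# word-finally, and [z] before a vowel (where /S/ resyllabifies as an
# onset). Segment classes are simplified assumptions.

VOICED_C = set("bdgvzmnlr")
VOWELS = set("aeiou")

def realise_S(following: str | None) -> str:
    if following is None:        # word-final: feliz [filíʃ]
        return "ʃ"
    if following in VOWELS:      # prevocalic sandhi, assumed here
        return "z"
    if following in VOICED_C:    # mesmo [méʒmu], esvaído [ʒveídu]
        return "ʒ"
    return "ʃ"                   # voiceless consonant: peste [péʃti]

for ctx in ["t", "m", "a", None]:
    print(f"/S/ before {ctx!r} -> [{realise_S(ctx)}]")
```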
1.5. Alternations (oral and nasal diphthong / single vowel)

The syllabic hierarchical organisation at the base level raises the problem, among others, of whether all segments of the phonetic level are associated with a skeletal position. Let us look at further data about diphthongs. In Portuguese there is no difference between long and short vowels. Diphthongs, however, seem to have different weights, and this difference has consequences for the number of skeletal positions they occupy. We think that the constraints on the occurrence of diphthongs should be analysed in relation to the stressed syllable in order to establish their different 'weights', if there are any. This is what we do now. We observed above that, in Portuguese, there are strong restrictions on the occurrence of diphthongs in post-stressed position (see examples in (14)).

(14)
[ɛj] - fáceis 'easy (pl.)'
[ɐ̃w] - sótão 'garret'
[ẽj] - homem 'man'
[ẽj] - prendem '(they) arrest'
[ɐ̃w] - falaram '(they) have talked'
[ɐ̃w] - pairam '(they) soar'

If the penultimate syllable is stressed and has a diphthong, the restrictions are stronger and the only diphthong that can occur in post-stressed position is a nasal one. This only happens in verb forms, and the diphthong is the realisation of the third person plural suffix (e.g. pairam, cf. (15)). In fact, the glide of final unstressed diphthongs, either in verbal endings or in words like homem, is epenthetic and is not, as in sótão, the phonetic realisation of a class marker. In this case, diphthongs are light in Portuguese and they occupy one position in the skeleton. In (15) we show the syllabic representation of pairam, and of sótão in (16). The difference between the two representations lies in the number of skeletal positions for the diphthong in the last syllable. There is another kind of diphthong that can also be viewed as light. See in (17) the morphological alternations between the lexical representations of *passear* /pase+ar/ pas[i]ar/pas[j]ar 'to walk' and *passeio* /pase+o/ pass[éj]o 'walk', or between *areal* /are+al/ ar[i]al/ar[j]al 'beach' and *areia* /are+a/ ar[éj]a 'sand'.

(17)
/pase+ar/ pas[i]ar / pas[j]ar 'to walk'
/pase+o/ pass[éj]o 'walk'
/are+al/ ar[i]al / ar[j]al 'beach'
/are+a/ ar[éj]a 'sand'

As we see in (17), Portuguese, similarly to other languages cited above, shows the same alternation light diphthong / single vowel related to morphological alternation (e.g. French *voir* / verrons or Spanish *poder* 'to can' / puedo 'I can'): the glide is introduced in the segmental tier as a consequence of word-formation with the addition of the morphemic vowel. In this case, the resulting diphthong occupies a single position in the skeleton.

(18)
1.6. Empty onset positions

As there are segments that do not have a proper position in the skeleton, there are also positions that are not associated with any segment. This statement allowed us to assume the existence of empty syllable nuclei. We also propose that, in Portuguese, any syllable is obligatorily constituted by an onset and a rhyme. If a position corresponding to a constituent is not filled, this fact can have consequences at the phonetic level. It is generally recognised that syllables always possess a rhyme (with its nucleus). Concerning the onset, we propose that its presence in Portuguese is also obligatory, that is, every base syllable in Portuguese consists of an O and an R, even though either one of them (but not both) may be empty. There is interesting evidence that supports our proposal about empty onset positions.

(19) (a)
Elvira [ɛ]lvira 'Elvire'
elefante [i]lefante 'elephant'
ermida [i]/[e]rmida 'hermitage'
esperado [Ø]sperado 'expected'
(b)
olhar [o]/[ɔ]lhar 'to look'
ornar [o]/[ɔ]rnar 'to adorn'

Unstressed underlying vowels /e/ and /ɛ/ are phonetically [i] in EP in word-final and word-internal position. However, in word-initial position, [i] does not exist. Underlying /e/ and /ɛ/ occur: a) as [ɛ] when the coda is /L/ (see Elvira); b) as [i] when the rhyme has no coda (see elefante); c) with some variation between [i], [e] and [ɛ] when the coda is an /R/ (see ermida); d) deleted when the coda is an /S/ (see esperado). Examples are in (19a). According to our proposal, this exceptional behaviour is due to the fact that these word-initial syllables have an empty onset: the empty position does not allow the presence of an [i]. The same happens with unstressed underlying /o/ and /ɔ/, which are [u] in every context except word-initially, where there is variation between [o] and [ɔ] (examples are in (19b)). In the representation of ermida in (20) we can see the empty onset position.

(20) [syllable trees for ermida: an empty onset before the first rhyme (nucleus e plus coda R), followed by the syllables mi and da; diagrams not reproduced]

2. Base syllabification: conventions

The most adequate way to build up syllable structure in Portuguese is the usually so-called 'all nuclei first' approach, starting with the construction of the rhymes in accordance with the restrictions of the language (see Goldsmith (1990) about different proposals for base syllabification). This means that we consider rule-based algorithms more adequate than template-matching algorithms (see Blevins, 1995). It is necessary to formulate an algorithm that associates every X assigned to a [-cons] segment with a nucleus (N). Association with a nucleus automatically builds up the rhyme (R). It is worth recalling that the phonetic glides of the falling diphthongs are [-cons] segments and are lexically marked as troughs.

(21) **Nucleus Association Convention**
(a) Adjoin to a N(ucleus) all [-consonant] X as long as they are not lexical troughs preceded by another [-cons].

The remaining fully specified consonants that are not integrated in the syllabic structure, either word-initially (as /p/ in *pneu*) or word-internally (as /f/ in *afta*), after the application of (23), will not be associated with any constituent of the syllable. The existence of a 'non-associated' consonant gives rise to the introduction of an empty nucleus position.
(25) **Empty Nuclei Creation Convention** Insert a N, with the corresponding skeletal position, to the right of a non-associated segment if it is a fully specified consonant, and to its left if it is an underspecified segment: (26) [syllable-structure diagrams showing the inserted empty nuclei: to the right of /p/ in *pneu* and of /f/ in *afta*, and to the left of a word-initial underspecified /S/] The non-associated consonants can now associate with an onset, as they are followed by an (empty) nucleus, by the re-application of (23). When there is a diphthong followed by a vowel (e.g. *areia*, see (18), or *saia* [sáje] ‘skirt’), the glide can associate with the onset of the following syllable (an empty onset) and it then becomes ambisyllabic. See the representation of *areia* ‘sand’ in (27) and of *saia* in (28). (27) [syllable-structure diagram of *areia*, with the glide [j] associated both with the second nucleus and with the following empty onset] If the consonants are underspecified, that is, /R/, /L/ or /S/ (those that can occur in Portuguese codas), they remain non-associated and become floating segments. At the end of base syllabification, these floating segments are assigned to the codas of the preceding rhyme. (29) Coda-Association Convention Assign the floating X [+cons] to the coda of the preceding rhyme. Thus, base syllables in Portuguese are CV syllables, despite apparent violations at the phonetic level in EP. It is worth noting, as a consequence of the statements made above, that what is traditionally considered a ‘hiatus’ (two adjacent vowels as, for instance, in *boa* [bóa] ‘good (fem.)’) is in fact a sequence of two vowels separated by an empty onset at the base level. We consider that our approach, involving rules of syllabification that apply in an ordered fashion, is better than other approaches so far developed for the syllable in Portuguese. It is clearly empirically adequate, as it accounts for the oral and nasal falling diphthongs and the consonant clusters in European Portuguese. Moreover, it is in accordance with our proposal of floating codas. (Received 15/01/97. Accepted 05/03/97) References ANDRADE, E. d’ (1977) *Aspects de la Phonologie (Générative) du Portugais*. Lisbon: INIC. ___________ & A. KIHM (1986) Fonologia Auto-Segmental e Nasais em Português. *Actas do 3º Encontro da Associação Portuguesa de Lingüística*. Lisbon (1987). ___________ & B. LAKS (1991) Na crista da onda: o acento de palavra em Português. *Actas do 7º Encontro da Associação Portuguesa de Lingüística*. Lisbon (1992): 15-26. ___________ & M.C. VIANA (1993a) Sinérese, diérese e estrutura silábica. *Actas do 9º Encontro da Associação Portuguesa de Lingüística*. Coimbra (1994): 31-42. ___________ & M.C. VIANA (1993b) *As sobras da translineação*. EPLP. Lisbon: Fundação Calouste Gulbenkian: 209-14. BARBOSA, J. Morais (1965) *Études de phonologie portugaise*. Lisbon: Junta de Investigações Científicas do Ultramar (2nd ed., Universidade de Évora, 1983). BASBØLL, H. (1988) Phonological Theory. In: F. NEWMEYER (ed.), Vol. 1: 192-215. BLEVINS, J. (1995) The Syllable in Phonological Theory. In: J. GOLDSMITH (ed.): 206-44. BISOL, L. (1989) O ditongo na perspectiva da fonologia atual. *D.E.L.T.A.* (São Paulo, Brasil), Vol. 5, nº 2: 185-224. _________ (1994) Ditongos derivados. *D.E.L.T.A.* (São Paulo, Brasil), Vol. 10, nº 2: 123-40. DELGADO-MARTINS, M. R. (1994) Relação Fonética/Fonologia: A propósito do sistema vocálico do Português. *Actas do Congresso Internacional sobre o Português*, Vol. I: 311-25. Lisbon (1996). GOLDSMITH, J.
(1990) *Autosegmental and Metrical Phonology*. Oxford: Basil Blackwell. _________ (ed.) (1995) *The Handbook of Phonological Theory*. Cambridge, Mass.: Basil Blackwell. HARRIS, J.W. (1983) *Syllable Structure and Stress in Spanish*. Cambridge, Mass.: The MIT Press. MATEUS, M.H.M. (1975) *Aspectos da Fonologia Portuguesa*. Lisbon: Centro de Estudos Filológicos (2nd ed. revised, Lisbon: INIC, Textos de Lingüística, 6, 1982). _______________ (1993) Onset of Portuguese Syllables and Rising Diphthongs. *Proceedings of the Workshop on Phonology*. Coimbra. _______________ (1994) A Silabificação de Base em Português. *Actas do 10º Encontro da Associação Portuguesa de Lingüística*. Évora (1995): 289-300. McCARTHY, J. & A. PRINCE (1995) Prosodic Morphology. In: J. GOLDSMITH (ed.): 318-66. MORS, Ch. de (1985) Empty V-Nodes and their role in the Klamath Vowel Alternations. In: H. VAN DER HULST & N. SMITH (eds.) *Advances in Nonlinear Phonology*. Dordrecht: Foris. NEWMEYER, F. (ed.) (1988) *Linguistics: the Cambridge Survey*. Vol. I. Cambridge: Cambridge University Press. RUBACH, J. & G. BOOIJ (1990) Edge of constituent effects in Polish. *Natural Language and Linguistic Theory* 8, nº 3: 427-63. RUBACH, J. (1995) Representations and the Organization of Rules in Slavic Phonology. In: J. GOLDSMITH (ed.): 848-66. SELKIRK, E. (1984) On the major class features and syllable theory. In: M. ARONOFF & R. OEHRLE (eds.) *Language Sound Structure*. Cambridge, Mass.: MIT Press. VIGÁRIO, M. & I. FALÉ (1993) A sílaba do Português Fundamental: uma descrição e algumas considerações de ordem teórica. *Actas do 9º Encontro da Associação Portuguesa de Lingüística*. Coimbra (1994): 465-78.
ENROLLMENT(S) COUNCIL OF THE DISTRICT OF COLUMBIA NOTICE D.C. LAW 8-234 "District of Columbia Consumer Protection Procedures Act Amendment Act of 1990". Pursuant to Section 412 of the District of Columbia Self-Government and Governmental Reorganization Act, P. L. 93-198, "the Act", the Council of the District of Columbia adopted Bill No. 8-111 on first and second readings, December 4, 1990, and December 18, 1990, respectively. Following the signature of the Mayor on December 27, 1990, this legislation was assigned Act No. 8-317, published in the January 11, 1991, edition of the D.C. Register, (Vol. 38 page 296) and transmitted to Congress on January 15, 1991 for a 30-day review, in accordance with Section 602(c)(1) of the Act. The Council of the District of Columbia hereby gives notice that the 30-day Congressional Review Period has expired, and therefore, cites this enactment as D.C. Law 8-234, effective March 8, 1991. JOHN A. WILSON Chairman of the Council Dates Counted During the 30-day Congressional Review Period: January 15, 16, 17, 18, 22, 23, 24, 25, 28, 29, 30, 31 February 1, 4, 5, 6, 7, 19, 20, 21, 22, 25, 26, 27, 28 March 1, 4, 5, 6, 7 AN ACT D.C. ACT 8-317 IN THE COUNCIL OF THE DISTRICT OF COLUMBIA DEC. 27, 1990 To amend title 28 of the District of Columbia Code to make the sale or lease of real estate a consumer transaction; to give the Department of Consumer and Regulatory Affairs substantive rulemaking authority in the area of unlawful trade practices; to toll the statute of limitations for filing a civil action in the District of Columbia Superior Court if the civil action involves a matter before the Department of Consumer and Regulatory Affairs; to provide the Corporation Counsel the ability to seek damages and injunctive relief for the consumer; and to make technical amendments to reflect the organizational structure of the Department of Consumer and Regulatory Affairs. BE IT ENACTED BY THE COUNCIL OF THE DISTRICT OF COLUMBIA, That this act may be cited as the "District of Columbia Consumer Protection Procedures Act Amendment Act of 1990". Sec. 2. Title 28 of the District of Columbia Code is amended as follows: (a) The table of contents is amended by adding the following phrase to read as follows: "28-3909. Restraining prohibited acts." (b) Section 28-3901(a) is amended as follows: (1) Paragraph (4) is amended by striking the word "Office" and inserting the word "Department" in its place. (2) Paragraph (7) is amended by adding the phrase "real estate transactions," after the phrase "business opportunities". (3) Paragraph (8) is amended to read as follows: "(8) "Department" means the Department of Consumer and Regulatory Affairs;" (4) The following new paragraphs are added to read as follows: "(9) "Director" means the Director of the Department of Consumer and Regulatory Affairs;" "(10) "Chief of the Office of Compliance" means the senior administrative officer of the Department's Office of Compliance who is delegated the responsibility of carrying out certain duties specified under section 28-3905; "(11) "Office of Adjudication" means the Department's Office of Adjudication which is responsible for carrying out certain duties specified under section 28-3905; "(12) "Office of Consumer Education and Information" means the Department's Office of Consumer Education and Information which is responsible for carrying out the statutory requirements set forth in D.C. 
Code, section 28-3906; and "(13) "Committee" means the Advisory Committee on Consumer Protection which is responsible for carrying out the statutory requirements set forth in section 28-3907." (c) Sections 28-3902 is amended as follows: (1) By striking the phrase "Office of Consumer Protection" wherever it appears and inserting the word "Department" in its place. (2) By striking the word "Office" wherever it appears and inserting the word "Department" in its place. (3) By amending subsection (a) to read as follows: "(a) The Department of Consumer and Regulatory Affairs shall be the principal consumer protection agency of the District of Columbia government and shall carry out the purposes of this chapter." (4) By repealing subsection (b). (5) By amending subsection (c) to read as follows: "(c) The Director of the Department of Consumer and Regulatory Affairs shall exercise the powers set forth in section 28-3905 through the Office of Compliance, and shall appoint a Chief of the Office of Compliance from among active members of the unified District of Columbia Bar. The Chief of the Office of Compliance may carry out investigative, conciliatory, and other duties assigned by the Director." (6) By repealing subsection (d). (7) By amending subsection (e) by striking the phrase "Section of Hearings" and inserting the phrase "Office of Adjudication" in its place. (8) By repealing subsections (f) and (g). (d) Section 28-3903 is amended as follows: (1) By striking the word "Office" wherever it appears and inserting the word "Department" in its place. (2) By amending subsection (a) by adding a new paragraph (15) to read as follows: "(15) issue rules that interpret, define, state general policy, or prescribe requirements to prevent unfair, deceptive, and unlawful trade practices as set forth in section 28-3904." (3) By amending subsection (c)(2)(C) by striking the phrase "practitioners of the healing arts". (e) Section 28-3904 is amended as follows: (1) By amending subsection (aa) by striking the phrase "sections 5, 6, 7, and 8 of the Employment Services Licensing and Regulation Act of 1984." and inserting the phrase "sections 36-1004, 36-1005, 36-1006, and 36-1007;" in its place. (2) By amending the 2nd subsection (aa) as follows: (A) By redesignating the subsection as subsection (bb); and (B) By striking the period and inserting a semicolon in its place. (3) By amending subsection (cc) by striking the period and inserting the phrase "; and" in its place. (4) By adding a new subsection (dd) to read as follows: "(dd) violate any provision of title 16 of the District of Columbia Municipal Regulations." (f) Section 28-3905 is amended as follows: (1) By striking the word "Office" wherever it appears and inserting the word "Department" in its place. (2) By striking the phrase "Section of Hearings" wherever it appears and inserting the phrase "Office of Adjudication" in its place. (3) By striking the phrase "General Counsel" wherever it appears and inserting the phrase "Chief of the Office of Compliance" in its place. (4) By amending subsection (a) by adding the following sentence at the end to read as follows: "The filing of a complaint with the Department shall toll the periods for limitation of time for bringing an action as set out in section 12-301 until the complaint has been resolved through an administrative order, consent decree, or dismissal in accordance with section 28-3905 or until opportunity to arbitrate has been provided in Chapter 13 of Title 40." 
(5) By amending subsection (e) to read as follows: "(e) The Director shall attempt to settle, in accordance with subsection (h) of this section, each case for which reasonable grounds are found in accordance with subsection (d) of this section. Within 180 days of the Director's determination as to whether the complaint is within the Department's jurisdiction, in accordance with subsection (d) of this section, the Director shall, absent good cause for delay as determined by the Office of Adjudication: "(1) effect a consent decree; "(2) dismiss the case in accordance with paragraph (2) of this subsection; "(3) through the Chief of the Office of Compliance present to the Office of Adjudication, with copies to all parties, a brief and plain statement of each trade practice that occurred in violation of District law, the law the trade practice violates, and the relief sought from the Office of Adjudication for violation; or "(4) notify all parties of another action taken, with the reasons therefor stated in detail and supported by fact. Reasons may include: "(A) any reason listed in subsections (d)(1) through (d)(6) of this section; and "(B) that the presentation of a charge to the Office of Adjudication would not serve the purposes of this chapter. "(5) Repealed." (6) By amending subsection (f) by striking the number "30" and inserting the number "15" in its place. (7) By amending subsection (g)(5) by adding the phrase, "including punitive damages, treble damages, or reasonable attorney's fees," after the word "remedies". (8) By amending subsection (h)(1)(A) by striking the phrase "paragraphs (2) through (5) of subsection (g)" and inserting the phrase "subsection (g)(2) through (g)(6)" in its place. (9) By amending subsection (i)(3) as follows: (A) Subparagraph (B) is amended by adding the following language at the end to read as follows: "The Court may set aside the final order if the Court determines that the Department of Consumer and Regulatory Affairs lacked jurisdiction over the respondent or that the complaint was frivolous. If, after considering an application to set aside an order of the Department of Consumer and Regulatory Affairs, the Court determines that the application was frivolous or that the Department of Consumer and Regulatory Affairs lacked jurisdiction, the Court shall award reasonable attorney's fees." (B) A new subparagraph (C) is added to read as follows: "(C) Application to the Court to enforce an order shall be made at no cost to the District of Columbia or the complainant." (g) Section 28-3906 is amended as follows: (1) Subsection (a) is amended as follows: (A) By striking the phrase "Section of Consumer Education" and inserting the phrase "Office of Consumer Education and Information" in its place. (B) By amending paragraph (3) by striking the word "Department" and inserting the word "Office" in its place. (2) Subsection (b) is amended by striking the phrase "Section Chief" and inserting the phrase "Chief of the Office of Consumer Education and Information" in its place. (h) A new section 28-3909 is added to read as follows: "28-3909. Restraining prohibited acts.
Notwithstanding any provision of law to the contrary, if the Corporation Counsel has reason to believe that any person is using or intends to use any method, act, or practice in violation of section 28-3803, 28-3805, 28-3807, 28-3810, 28-3811, 28-3812, 28-3814, 28-3817, 28-3818, 28-3819, or 28-3904, and if it is in the public interest, the Corporation Counsel, in the name of the District of Columbia, may petition the Superior Court of the District of Columbia to issue a temporary or permanent injunction against the use of the method, act, or practice. In any action under this section, the Corporation Counsel shall not be required to prove damages and the injunction shall be issued without bond. The Corporation Counsel, on behalf of any identifiable person, may recover restitution for property lost or damages suffered as a consequence of the unlawful act or practice." Sec. 3. This act shall take effect after a 30-day period of Congressional review following approval by the Mayor (or in the event of veto by the Mayor, action by the Council of the District of Columbia to override the veto) as provided in section 602(c)(1) of the District of Columbia Self-Government and Governmental Reorganization Act, approved December 24, 1973 (87 Stat. 813; D.C. Code, sec. 1-233(c)(1)), and publication in either the District of Columbia Register, the District of Columbia Statutes-at-Large, or the District of Columbia Municipal Regulations. [Signature] Chairman Council of the District of Columbia [Signature] Mayor District of Columbia APPROVED: December 27, 1990 COUNCIL OF THE DISTRICT OF COLUMBIA Council Period Eight RECORD OF OFFICIAL COUNCIL VOTE DOCKET NO: B8-111 ☒ Item on Consent Calendar ☒ ACTION & DATE: Adopted First Reading, 12-04-90 ☒ VOICE VOTE: Approved Recorded vote on request Absent: all present ☐ ROLL CALL VOTE: — RESULT | COUNCIL MEMBER | AYE | NAY | N.V. | A.B. | |----------------|-----|-----|------|------| | CHMN. CLARKE | | | | | | CRAWFORD | | | | | | JARVIS | | | | | | KANE | | | | | | LIGHTFOOT | | | | | | MASON | | | | | | NATHANSON | | | | | | RAY | | | | | | ROLARK | | | | | | SMITH, JR. | | | | | | THOMAS, SR. | | | | | | WILSON | | | | | | WINTER | | | | | X — Indicates Vote A.B. — Absent N.V. — Present, not voting CERTIFICATION RECORD Russell G. Smith Secretary to the Council 21 December 1990 ☒ Item on Consent Calendar ☒ ACTION & DATE: Adopted Final Reading, 12-18-90 ☒ VOICE VOTE: Approved Recorded vote on request Absent: Wilson ☐ ROLL CALL VOTE: — RESULT | COUNCIL MEMBER | AYE | NAY | N.V. | A.B. | |----------------|-----|-----|------|------| | CHMN. CLARKE | | | | | | CRAWFORD | | | | | | JARVIS | | | | | | KANE | | | | | | LIGHTFOOT | | | | | | MASON | | | | | | NATHANSON | | | | | | RAY | | | | | | ROLARK | | | | | | SMITH, JR. | | | | | | THOMAS, SR. | | | | | | WILSON | | | | | | WINTER | | | | | X — Indicates Vote A.B. — Absent N.V. — Present, not voting CERTIFICATION RECORD Russell G. Smith Secretary to the Council 21 December 1990
APPLICATION TO HOST A TOURNAMENT OR GAMES Name of Tournament or Games: Virginia Shamrock Super Cup Hosting Organization: Norfolk United Soccer Club Website: http://www.newportnewsfc.com/tournaments.htm Designated Official of Hosting Organization: Ian Holder Title: President Address: 7316 Colony Point Road, Norfolk VA 23505 Telephone: 757-639-6859 (H) Email: firstname.lastname@example.org State Association or Affiliate: Virginia Youth Soccer Association Location of Tournament or Games: Yorktown Date(s) of Tournament or Games: 3/19/2016 - 3/20/2016 Team Entry Deadline: 2/29/2016 Estimated Number of Teams: 200 Address of Field (Tournament Headquarters): Courtyard by Marriott 470 McLaura Circle, Williamsburg VA 29183 Tournament or Games Director or Contact Person: Susan Smith Address: 7316 Colony Point Road, Norfolk VA 23505 Telephone: 757-639-6859 (H) 757-639-6859 (W) 757-639-6859 (FAX) Email: email@example.com | Age Groups Accepted | Type(s) of Team Accepted | Gender | Roster Size | # Guest Players Allowed | Length of Games (min) | Ball Size | Awards | Min # of Games | Entry Fee | Bond | |---------------------|--------------------------|--------|-------------|------------------------|----------------|-----------|--------|----------------|----------|------| | U09 | J | F M | 12 | 5 | 50 | 4 | 1st & 2nd | 3 | 400 | | | U10 | J | F M | 12 | 5 | 50 | 4 | 1st & 2nd | 3 | 400 | | | U11 | J | F M | 12 | 5 | 60 | 4 | 1st & 2nd | 3 | 400 | | | U12 | J | F M | 14 | 5 | 60 | 4 | 1st & 2nd | 3 | 500 | | | U13 | J | F M | 18 | 5 | 60 | 5 | 1st & 2nd | 3 | 625 | | | U14 | J | F M | 22 | 5 | 60 | 5 | 1st & 2nd | 3 | 625 | | | U15 | J | F M | 22 | 5 | 70 | 5 | 1st & 2nd | 3 | 625 | | | U16 | J | F M | 22 | 5 | 70 | 5 | 1st & 2nd | 3 | 625 | | | U17 | J | F M | 22 | 5 | 70 | 5 | 1st & 2nd | 3 | 625 | | | U18 | J | F M | 22 | 5 | 70 | 5 | 1st & 2nd | 3 | 625 | | | U19 | J | F M | 22 | 5 | 70 | 5 | 1st & 2nd | 3 | 625 | | Teams will be invited from: All US Youth Soccer State Associations, Other US Soccer Member Organizations (List Below) **Foreign Teams/State Associations/Affiliates/Other US Soccer Members: US Soccer, AYSO Designated Official of Hosting Organization: Ian Holder Date: 9/14/15 APPROVAL (For Official Use Only) STATE ASSOCIATION: VYSA Date: 9/28/15 OR AFFILIATE: By: Title: Executive Dir. In consideration of permission being granted to **Hampton Roads Strikers**, to hold a tournament or games at **Yorktown**, on the dates of **3/19/2016** through **3/20/2016**, we agree to the following conditions: 1. **ABIDE BY RULES**: We shall abide by all statements made in our approved US Youth Soccer Application to Host A Tournament or Games, in our tournament invitation, in our tournament rules, in the US Youth Soccer Travel and Tournament Policy and in this US Youth Soccer Tournament or Games Hosting Agreement. We agree that all decisions regarding acceptance of teams into a tournament shall be fairly and impartially made and shall not be based upon race, creed, color or national origin. 2. **INVITATIONS**: The tournament or games approval form shall accompany all tournament or games invitations distributed by us. 3. **PROCURING LIABILITY INSURANCE**: We have procured liability insurance coverage for the tournament or games with limits of not less than $1,000,000/$2,000,000 which names the State Association or Affiliate with which the Hosting Organization is a member, US Youth Soccer and their officers and directors as additional insureds.
A copy of the certificate of insurance, issued by [BOLLINGER, INC], is attached. 4. **REQUIRING MEDICAL AUTHORIZATIONS**: We shall require all teams participating in the tournament or games to provide medical authorizations for each player in a form adequate for use at the site of the tournament or games. These authorizations shall be presented to the Hosting Organization at registration and kept at the field available for use by the team. 5. **ADVANCE PUBLICATION OF RULES**: We agree that our tournament or games rules shall be included with the invitation sent to each team and shall, again, be published to all teams accepted prior to the start of the tournament/games. 6. **CREDENTIALS CHECKS**: We agree that we shall conduct credentials checks [at registration] to ensure that all players are registered with US Youth Soccer or US Soccer, properly rostered with their team, and participating in accordance with representations set forth on the US Youth Soccer Application to Host a Tournament or Games. 7. **USE OF US SOCCER REGISTERED REFEREES**: We agree that we shall, in accordance with US Soccer Bylaw 532, use for all games only US Soccer registered referees who are in good standing (unless US Soccer has granted a waiver to allow the use of authorized referees from another country), and shall use a one- or three-referee system. We intend to use a three-referee system for the following age groups: [12,13,14,15,16,17,18,19]. There will be an adequate number of US Soccer registered referees available in the area during the tournament or game dates to cover the scheduled games. We have selected the following assignor to assign referees for the tournament or games *(NOTE: Effective September 1, 2001, ONLY US Soccer certified assignors may be used.)*: **Antonio Araiza** E-mail: firstname.lastname@example.org Telephone: 757-810-4250 8. **USE OF FIELD MARSHALS - FIELD INSPECTION**: We agree that during the tournament or games each game field will have a field marshal assigned to it at all times; that the field marshal will be readily available and identifiable; and that prior to the commencement of every game the field marshal will inspect the field to be sure that it is free from objects or conditions that may cause injury. If any condition exists which cannot be immediately corrected, it shall be brought to the attention of the referee and the tournament/games director. The Director of Field Marshals is: **Susan Smith** 9. **USE OF SPECTATOR LINES**: We agree to take appropriate steps including, where feasible, the use of spectator lines on each field to keep the spectators off the touch line. 10. **PROVISION OF ADEQUATE TOURNAMENT COMMUNICATION**: We agree to provide adequate communication by means of [Cell Phone] between the game fields and the tournament/games headquarters. The Tournament Communications Director is: Susan Smith 11. **AVAILABILITY OF POLICE AND RESCUE SERVICE**: We have notified the local police, ambulance, and emergency rescue services of the date of the tournament or games and the times and fields which will be used for games, and have been advised by them that they will be available to render assistance if needed. 12. **TOURNAMENT OR GAME RULES: BEHAVIOR**: We agree that our tournament or game rules contain provisions ensuring that the behavior of teams, players, coaches, and spectators is appropriately controlled, including specific provisions that: a. spell out the disciplinary measures to be imposed for the issuance of red and yellow cards or other improper conduct; b.
indicate what procedures will be followed regarding protests and appeals; c. indicate that all disciplinary measures imposed by hosting organizations shall be limited to placing restrictions upon an individual's group participation in the tournament/games; d. record the issuance of all red and yellow cards and other matters involving the conduct of a team, its players, coaches, and supporters and also report them immediately to the home State Association and the home club/league of the team; and e. state that the home State Association or Affiliate and the home club or league shall, except in the case of referee assault or abuse, have the responsibility for imposing, should circumstances warrant, additional penalties within their respective jurisdictions with regard to any matters arising from the tournament or games. 13. **TOURNAMENT CANCELLATION**: We agree that our tournament or game rules shall state what refunds, if any, shall be made to participating teams if all or a portion of the tournament or games is cancelled by the hosting organization for any reason. 14. **POST TOURNAMENT OR GAMES REPORT**: We agree that we shall file a Post Tournament or Games Report with the State Association or Affiliate granting us permission to host this tournament or games within 30 days after the conclusion of the tournament or games. We understand that failure to file the report shall preclude the tournament/games host from receiving approval for any tournament/games for the following seasonal years until the report is filed. The Post Tournament or Games Report shall include the following information: a. the number of teams participating in each age group (boys and girls); b. if a champion is determined, the name of the champion for each group; c. the number of teams from each State Association, Affiliate, other Organization Member, or foreign country; d. if "Sportsmanship Awards" are given, the criteria for the award and to whom awards were given; e. the number of fields used for the tournament/games; f. the name of the sponsor, if any; and g. the names and teams of all players issued red and yellow cards, and details of any other matters involving the improper or unsportsmanlike conduct of a team, its players, coaches or supporters. NOTE: Any incident of referee assault or referee abuse by a player, coach, manager, club official, or game official, or other incidents of a serious nature, must be reported to the alleged offender's club or league and home State Association, Affiliate, or other Organization Member immediately, but in no event later than 48 hours after an incident of referee assault or abuse. Signature: ___________________________ Date: 9/15/15 Hosting Organization President or Chief Officer Signature: ___________________________ Date: 9/15/15 Tournament or Games Director Hosting Organization Newport News FC P.O. Box 2249 Newport News, VA 23505 E-mail: Telephone: 757-753-3968 Fax: Tournament or Games Headquarters Courtyard by Marriott 470 McLaura Circle Williamsburgh, VA 29183 E-mail: Telephone: Fax: TEAM TYPES FORM The Virginia Youth Soccer Association, Inc. (VYSA) has chosen to use only one of the team types recommended in the US Youth Soccer Travel and Tournament Policy Manual. VYSA uses US Youth Soccer Team Type J, and the VYSA team types listed below. When completing an Application to Host a Tournament or Games, put J in the accepted team types box on the application form and check the appropriate type(s) of teams below that are eligible to play. 
When completing an Application to Travel, put J in the Type of Team box on the application form and check the team type below that best matches your team. __________________________ VIRGINIA SHAMROCK SUPER CUP NAME OF TOURNAMENT OR NAME OF LEAGUE OR CLUB HOSTING FRIENDLY GAMES __________________________ NAME OF TEAM TRAVELING TO TOURNAMENT OR FRIENDLY GAMES _____ CLUB TEAM – A travel or competitive team composed of players who are listed on the team’s VYSA league roster. J CLUB TEAM WITH GUEST PLAYERS – A travel or competitive team composed of players who are listed on the team’s VYSA league roster plus… 5 Number of guest players that can be either Club or Recreational players. Recreational players must be allowed as club guest players. J LEAGUE SELECT TEAM – The official “Select Team” of a league whose players are chosen on a league-wide basis from club teams. (NOTE: A VYSA League Select Team roster must be approved by the VYSA State or Regional Registrar for each tournament.) _____ RECREATIONAL TEAM – A team that participates in a recreational, house, or intramural program for a club, league or association. _____ RECREATIONAL TEAM WITH GUEST PLAYERS – A team that participates in a recreational, house or intramural program for a club, league or association plus… _____ Number of guest players that can only be Recreational players. _____ RECREATIONAL ALL-STAR TEAM – A team composed of players selected from more than one team that participates in a recreational, house, or intramural program for a club, league, or association. _____ STATE SELECT TEAM – The official “Select Team” of a State Association whose players are chosen on a statewide basis. In VYSA, these are the State Olympic Development Program (ODP) teams, which include District ODP. 2016 VIRGINIA SHAMROCK SUPER CUP TOURNAMENT RULES The Virginia Super Cup will be played in accordance with the FIFA Laws of the Game, except as modified below. TOURNAMENT HEADQUARTERS Tournament Headquarters will be located at Courtyard by Marriott, 470 McLaura Circle, Williamsburg, VA 29183; Tournament Director: Susan Smith; 757-639-6859 TEAM ELIGIBILITY AND REGISTRATION Participation is open to U9-U19 Boys and Girls travel teams registered with a National State Association affiliated with USYSA/USSF (or national equivalent); each team must present a valid State or Provincial roster for the 2015-2016 year. Teams that are members of organizations of the United States Soccer Federation but not members of US Youth Soccer (such as AYSO, SAY or US Club Soccer) DO NOT have to have a US Youth Soccer Application to Travel form (although that team’s organization may require that the team have permission). Such a team’s roster does need to be provided to tournament officials, however, along with current passes from its organization. Up to 5 carded guest players are permitted for U-9 through U-18; a team using guest players must still not exceed the roster sizes below. Roster Sizes: · U9-U10 7v7 teams are limited to 12 players identified at Registration. · U11 8v8 teams are limited to 14 players identified at Registration. · U12 8v8 teams are limited to 14 players identified at Registration. · U13 teams are limited to 18 players identified at Registration. · U14-U18 teams can have 22 players on their rosters turned in at Registration; however, teams must identify 18 players before each game with the referees.
Only 18 players are allowed to play in each game. REGISTRATION Team Check-in / Registration Requirements, US Teams: • Original 2015/2016 Player Passes; either USYS State Association or US Club Soccer passes • Original 2015/2016 Official Roster; either USYS State Association or US Club Soccer • One Copy of Official Roster • Guest Players identified: written on the front or back of COPIES of rosters • Player Passes for Guest Players • Individual Player Medical Release Forms • Permission to Travel Paperwork: Not required for US Club Soccer teams. USYS State Association teams from Region I do not need Permission to Travel paperwork. USYS State Association teams from all other Regions need Permission to Travel paperwork. All registration will take place online, March 10-13, 2016. WITHDRAWAL POLICY Applied Teams Withdrawing before Acceptance: Teams withdrawing from the event before acceptances are issued via e-mail will be issued a full tournament refund. The tournament is not responsible for any hotel or additional costs incurred by the team. Accepted Teams Withdrawing after Acceptance: Teams withdrawing from the event after being accepted will not be refunded. The Tournament Committee reserves the right to combine age groups if necessary. GENERAL All tournament matches will be played in accordance with “The Laws of the Game” as issued by FIFA, except as modified in the “Tournament Rules.” All decisions of the referee(s) are final and binding. There are no protests regarding the outcome of a match or sanctions. Inclement Weather and Tournament Cancellation Policy: Regardless of weather conditions, coaches and their teams must appear at their respective field site, ready to play as scheduled. Failure to appear will result in forfeiture of the match. Only the Tournament Director may cancel or postpone a match. Referees may suspend a match in case of severe weather; at his/her discretion, the Tournament Director(s) may cancel any and all games. In case of severe weather that occurs after the beginning of play, the Tournament Director may reduce the length of the match and may discontinue or cancel the game. Should a match be terminated due to weather conditions after 20 minutes of play, the match will be considered official and the score at the time will stand. If a match is terminated prior to 20 minutes of play, every attempt will be made to complete the match; however, if necessary, other means, determined by the tournament committee, may be used to determine a winner. Referees and field marshals will not consider beginning or continuing matches when a lightning storm exists. If games are canceled once the tournament has commenced, there will be no reimbursement. There will be no refunds given if the tournament is cancelled because of weather. Under no circumstances whatsoever will the Tournament Committee, any official sponsor, or VYSA be responsible for expenses (including the tournament entry fee) incurred by any team. This includes a situation where the tournament or any game(s) is/are cancelled in whole or in part. TOURNAMENT CANCELLATION POLICY The tournament will use its best efforts to schedule a minimum of three (3) games for each team. However, in the event that at least three (3) games cannot be scheduled or played, there shall be no liability upon the tournament and no refund is guaranteed. It is possible because of weather or other issues (such as a late drop-out by a team) that at least three (3) games cannot be played. Home/Team Field Positions: The Home team is listed first in the schedule.
Each team’s players will take a position on one side of the field, opposite the spectators. In the event a uniform conflict occurs, the HOME team will change their colors. We request that all sideline trash on the player side and spectator side be picked up and removed. The Home team is responsible for turning in the game report to the main tent within ten minutes of the match. INCLEMENT WEATHER In the event of inclement weather forcing play to be halted and preventing the match from completing during the scheduled time, the score shall stand if at least one-half of the game has been played. In the case of matches halted prior to the completion of at least one-half, the Tournament Director reserves the right to declare the match final, thereby counting the score at the time the match was halted. Regardless of weather conditions, teams and coaches must be at the game site and ready to play at the scheduled time. Failure to appear will result in forfeiture of the game. As stated in the application, no refunds will occur as a result of inclement weather. REFEREES All referees will be USSF certified. A one-man system will be used for all U-10 and U-11 games, and a three-man system will be used for U-12 and above, in addition to all semi-final and final matches. SIDELINES Coaches and players will share the same side of the field as designated by the field marshal. All spectators will take up a position on the opposite side of the field during the time that the match is in progress. Coaches, players and spectators for all participating teams must remain on their respective sides of the field during the time of the match. Behavior of spectators associated with the team remains the responsibility of the coach. The referee and field marshal are authorized to remove any spectator whose behavior, in their opinion, interferes with the play of the game. No coaching shall be permitted within 18 yards of the goal line or behind the goal line. Coaches, players or spectators are not permitted to stand behind the goal line at any time while the match is in progress. Alcoholic beverages will not be permitted at any tournament site; violators will be subject to criminal prosecution. **START OF PLAY** Any U-13 through U-19 team(s) that cannot field seven (7) players at the scheduled start time of a match shall forfeit the match. Any U09 through U12 team which cannot field five (5) players at the scheduled start time of a match shall forfeit the match. **Any team forfeiting the match shall be declared the loser by a score of three (3) goals to none (0).** If there is **no** referee present within ten minutes of the scheduled start time, the match shall be rescheduled unless both teams’ coaches agree to proceed. If the match proceeds, the score shall stand as played. If there is only one referee present for a preliminary match (U-12 and up) at the scheduled start time, the referee present shall commence the match using volunteer linespersons. Should the second/third referee arrive at the field, he should enter the match at the appropriate break in the play, and the volunteer linespersons shall be relieved of their duties. It is the duty of the coaches to ensure that players report to the field **15 minutes** prior to the start time of each match for possible verification of rosters and player passes. Player passes and the roster must be present at the field for the duration of every game. The home team is responsible for providing the game ball, unless the ball is provided by the tournament.
**PROTESTS** NO PROTESTS WILL BE PERMITTED. **PLAYER EQUIPMENT** Shoes must meet FIFA specifications. All players must wear shin-guards. Padded casts will be allowed ONLY under these conditions: 1. They are well padded in foam or other protective material; AND 2. The player with the cast does not attempt to use it to an advantage or to harm other players; AND 3. The referee approves the cast. Such approval will not be unreasonably withheld. In the event of a uniform conflict, the HOME team (listed first on the schedule) must change. **BALL SIZE** | Age Group | Size | |-----------------|-------| | U-12 and younger| Size 4| | U-13 and older | Size 5| **DURATION OF MATCHES AND FORMAT** **ALL MATCHES WILL START ON TIME.** The duration of games is listed below. The interval between halves shall be five (5) minutes. The referee is the official timekeeper of the match and reserves the right to shorten the interval between halves if necessary. During the preliminary matches there will be no injury time allowed. All age groups will play with goalkeepers. Five guest players per team are permitted for all age groups. US Club Soccer teams may only take guest players registered under US Club Soccer, and USYSA teams may only take guest players registered with properly stamped USYSA player pass cards. **Players can only play for 1 team throughout the tournament.** A. **Ball Size:** - Size 5: U13 - U19 - Size 4: U10 - U12 B. **Game Duration** (halftime 5 min. in all cases): - U15, U16, U17, U18, U19: 70 min. - U13, U14: 60 min. - U11 & U12 (8v8): 60 min. - U09 & U10 (7v7): 50 min. **SUBSTITUTIONS** Each team will have unlimited substitutions, subject to the following conditions and upon approval of the referee: 1. Prior to a throw-in by the team in possession. 2. Prior to a goal kick by either team. 3. At any time approval is granted by the referee (as in the case of injury, for the injured player only). **Note:** When this occurs, the opponent may substitute a like number. 4. For a player who has been cautioned. **Note:** When this occurs, the opponent may substitute a like number. **DIVISION STANDINGS** Division standings will be decided by the following point system: | Result | Points | |----------------|--------| | Win or forfeit | 3 | | Tie | 1 | | Loss | 0 | **TOURNAMENT TIE BREAKERS** *(Determination of wild cards and first and second place winners)* Total-points ties within divisions and/or brackets will be broken by the following tiebreakers, in order: 1. Head-to-head competition during the tournament between the two tied teams. *(Disregard if more than two teams are tied; never revert back to this tiebreaker if more than two teams are tied.)* 2. Least goals allowed. 3. Team with the highest goal differential. A maximum differential of three (3) goals **per match** will be counted; positive only. 4. Most wins. 5. Total goals scored. 6. If still tied after steps 1 through 5, both teams will proceed to an available field, at a time and place directed by the Field Marshal, and take penalty kicks in accordance with FIFA tiebreaker rules. **TOURNAMENT OVERTIME** All preliminary and consolation matches may end in a tie. In **semi-final** and **final** matches, if regulation play ends in a tie score, the following steps will be taken: 1. Both teams will be given a five-minute rest period before overtime periods begin. 2.
The teams will play two five-minute overtime periods to completion, with substitutions allowed, changing goals after each five-minute period, with a one-minute break between periods. 3. If still tied after two overtime periods, the tie will be broken by penalty kicks in accordance with FIFA tiebreaker rules: 1. Only the players on the field at the end of regulation will be eligible to participate in the penalty kick procedure. 2. The players from each team must stay on the field of play and will meet at the center at the end of the game. 3. Captains will meet for the coin toss to decide who kicks first. 4. The referee decides which goal will be used. **WARNINGS & EJECTIONS** If a player is ejected from a match (red card issued by the referee), the player must sit out the remainder of that match plus his team’s next tournament match. No substitution will be made for the ejected player during the match in which the red card was issued. Two yellow cards to the same player in the same match equal a red card and will result in immediate ejection from that match. Coaches in receipt of a red card, or two yellow cards in the same match, are subject to the same penalties as outlined above for players. A coach ejected from a match will not be allowed in the vicinity of the field for the remainder of that match and for his team’s next match. Coaches are responsible for their players, parents and guests on the sideline. No team or club official may enter the field of play, regardless of the circumstances, unless that person has been given permission to enter the field of play by the referee. Because of the seriousness of such instances, red cards issued after the end of regulation play or as a result of physical assault are subject to review by the Tournament Committee, and a more severe penalty, which is not subject to appeal, may be imposed. **PROBLEMS/QUESTIONS** In the event of a problem, or if a team has a question about the tournament, they should first check with the Site Coordinator at their respective field location. The Site Coordinator will check periodically with the Tournament Headquarters and can handle most problems. The Tournament Director will ultimately decide all issues not resolved at the field locations. All decisions of the Tournament Director are final. **REGION 1 POLICY REGARDING APPLICATION TO HOST A TOURNAMENT** Teams that are members of organizations of the United States Soccer Federation but not members of US Youth Soccer (such as AYSO, SAY, US Club Soccer or Super Y Leagues) DO NOT have to have a US Youth Soccer Application To Travel form (although that team's organization may require that the team have permission). An approved team roster does need to be provided to tournament officials, along with current player passes from its organization. Region 1 has established the following policy concerning permission to travel when attending US Youth Soccer sanctioned tournaments in Region 1. The purpose of this policy is to make it as simple as possible for US Youth Soccer Region 1 teams to travel to tournaments within Region 1. **NATIONAL STATE ASSOCIATIONS IN REGION 1** Connecticut Jr. Soccer Assn. Delaware Youth Soccer Assn. Eastern New York Youth Soccer Assn. Eastern Pennsylvania Youth Soccer Assn. Soccer Maine Maryland Youth Soccer Assn. Massachusetts Youth Soccer Assn. New Hampshire Soccer Assn. New Jersey Youth Soccer Assn. New York State West Youth Soccer Assn. Pennsylvania West State Soccer Assn. Soccer Rhode Island Vermont Soccer Assn. Virginia Youth Soccer Assn.
West Virginia Soccer Assn. **TEAM CONTACT** At registration, each team is required to provide local contact information, such as the name of the hotel where the team is staying (if applicable) and cell phone numbers for the coach and team manager. These contacts must be available at all times during the tournament. Also at registration, you will sign for a copy of the official tournament schedule for your team. Please review this schedule as changes may have occurred since the schedule was first released. **DISCLAIMER** No requests for application fee refunds after acceptance will be considered. **Important Notes:** 1. Have all player passes, official roster, and medical release forms with you at the fields during the tournament. Although only verified at check-in the tournament director reserves the right to check credentials during the event. US Club Soccer teams must have player cards and official roster and be in good standing. 2. The tournament is dedicated to the development of all the players participating, good sportsmanship, and the “good of the game”. The Tournament Director may suspend, without recourse or appeal, any players, coaches, or spectators who demonstrate anything less. **Advancement in tournament play:** a. Division with 4 teams only will play each other and winner and finalist will be determined by points - no final match. b. Division with 5 teams only will play each other and winner and finalist will be determined by points - no final match. c. Division with six teams: Each team will play two preliminary matches. At the conclusion of all preliminary round matches, all teams will be ranked in their respective divisions with the top two teams in each division advancing to semi-final matches. The third team in each bracket will play a consolation match. d. Divisions with eight teams (two brackets of four). Each team will play three preliminary round matches. At the conclusion of preliminary round matches, teams will be ranked in their respective brackets with the top two teams advancing to a final match. Thank you for your recent Tournament Application for the Virginia Shamrock Super Cup. Your tournament has been APPROVED and is now listed on the VYSA Tournament website at VYSA.com. Please take a moment to review it. If you notice anything listed incorrectly or have a tournament link you would like us to link for you, please email Melissa Graham at email@example.com. **IMPORTANT CHANGES** All Applications to Host MUST be posted on your tournament website. IMPORTANT - VYSA DOES ALLOW on-line tournament check-in. You will need to collect the rosters, player passes, medical releases, and permission to travel if required, but all can be done online. ALL tournaments must post their sanctioning document (Application to Host) on their website. ALL personnel working with a tournament MUST complete the VYSA Kidsafe application on-line. All members should select “Tournament Personnel” for your club when entering the online registration system, NOT VYSA. If they have already completed Kidsafe this year, they must log back into it, and select ADD A POSITION. They may then add the new position to their profile. You have sanctioned your tournament as: _____ Restricted Tournament – Only teams affiliated with US Youth Soccer may attend. ___ Unrestricted Tournament – Teams affiliated with US Youth Soccer and the US Soccer Federation may attend. (Examples: US Club, AYSO, SAY) If you have any changes to the Tournament Dates, Age Groups, etc. 
please make sure you notify the State Office of those changes, as they have to be re-approved. If the tournament is CANCELLED, please contact us immediately. **Post Tournament Report** Once your tournament is completed, you need to submit a tournament report to the VYSA office within 30 days that includes: - A list of the winners and runner-ups - A list of red and yellow cards issued at the tournament (to whom and reason) - The total number of teams that attended the tournament and what state they’re from - How many fields were used during the tournament - Where the fields were located (the name of a soccer complex, name of schools, etc.) - Any other reports of serious injuries or violence that occurred - Copy of tournament program, patch, pin, etc *Future tournaments will not be sanctioned until outstanding reports are received.* VYSA Sanctioned Tournaments are NOT permitted to use the information gathered during a tournament or any information that is on a team’s roster submitted to the tournament for ANY purpose other than the team registration at the tournament. *No matter what any vendor may tell you, this is a strict violation of VYSA rules and can result in the tournament not being sanctioned by VYSA in the future.* Thank you, Melissa Graham VYSA Manager of Member Services IMPORTANT TOURNAMENT CHECK-IN INFORMATION 1. All player passes must match the affiliation of the roster. For example: If the team roster is US Youth Soccer, then ALL players on that team must have US Youth Passes. A player may not guest play on a US Youth roster with a US Club player pass, and vice versa. 2. All player passes MUST be signed by a registrar (for travel passes) or club designee (for recreational passes) 3. Every player guest playing from one state with another must have an approved US Youth Interstate Permission Form signed by both state associations 4. Teams affiliated with AAU are not members of US Youth Soccer or US Soccer Federation and may NOT participate in your tournaments. 5. Attached is a copy of a Virginia Recreational Player Card and this is the ONLY recreational card that should be accepted for Virginia teams and other states will have something similar. If you have questions please call the State office at 540-693-1430 for assistance.
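For organizers who want to sanity-check brackets before publishing results, the point system and tiebreakers described in the rules above can be computed mechanically. The sketch below (Python; the match-record format and team names are hypothetical) applies the 3/1/0 point system and tiebreakers 2 through 5; head-to-head (step 1) and kicks from the mark (step 6) are left out of this sketch.

```
# A sketch of the standings computation from the rules above: 3 points
# for a win or forfeit, 1 for a tie, 0 for a loss, with goal
# differential capped at +3 per match (positive only).

def standings(matches):
    """matches: list of (home, away, home_goals, away_goals) tuples."""
    table = {}
    for home, away, hg, ag in matches:
        for team in (home, away):
            table.setdefault(team, {"pts": 0, "ga": 0, "gd": 0, "gf": 0, "wins": 0})
        for team, mine, theirs in ((home, hg, ag), (away, ag, hg)):
            row = table[team]
            row["gf"] += mine
            row["ga"] += theirs
            # Maximum differential of three (3) goals per match, positive only.
            row["gd"] += max(min(mine - theirs, 3), 0)
            if mine > theirs:
                row["pts"] += 3
                row["wins"] += 1
            elif mine == theirs:
                row["pts"] += 1

    # Points first; then tiebreakers 2-5: fewest goals allowed, highest
    # capped goal differential, most wins, total goals scored.
    return sorted(table.items(),
                  key=lambda kv: (-kv[1]["pts"], kv[1]["ga"],
                                  -kv[1]["gd"], -kv[1]["wins"], -kv[1]["gf"]))

results = [("Lions", "Bears", 4, 0), ("Lions", "Wolves", 1, 1), ("Bears", "Wolves", 2, 1)]
for team, row in standings(results):
    print(team, row["pts"])
```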
THE FOWLER 'JUNIOR' CALCULATOR and CIRCULAR SLIDE RULE. This combined Calculator and Slide Rule has been put on the market with a view to enabling the Junior Student or Engineer to acquire at a reasonable price an instrument which, in one form or another, has now become almost an essential in industry. It is unique in the sense that it combines in one instrument a calculator of the single dial type with a Slide Rule in circular form. It is thus possible for the user to become proficient in the operation of either. It consists, as will be seen, of a double-faced white disc carrying printed dials which can be revolved with the finger or thumb by means of serrations around its periphery. On one side is printed a replica of our well-known "Universal" Calculator dial, and over this is a transparent disc carrying an indicating line or cursor. This, too, is revolved, independently of the dial, by means of serrations around its edge. A fixed transparent disc on which is engraved a red indicating line or datum completes this side of the instrument. On the opposite side of the disc is printed a "circular slide rule," which functions in precisely the same way as the ordinary straight rule. A revolving cursor is also fitted here, but no datum, as one is not necessary. Each side possesses its own special advantages, and the choice of which one to use is left to the manipulator. For example, by means of the Calculator side a "Long Scale" may be used when it is desired to obtain an answer to a greater number of significant figures than is possible on the Slide Rule side; and, vice versa, the Slide Rule side will be found advantageous when percentage and ratio problems are being dealt with. The two sides will be dealt with individually, firstly by a description of the Scales, and secondly by numerous examples worked out upon them. How To Use The "Junior" & Circ. Slide Rule THE CIRCULAR SLIDE RULE Description of Scales, reading inwards from the outer circle. Nos. 1 and 2, marked A and B respectively, are for multiplication and division, and correspond exactly to the C and D scales of the ordinary straight slide rule, B being an exact replica of A and revolving, whilst A remains fixed. No. 3 is a Scale of Logarithms. No. 4 is a Scale of Square Roots (extending over two circles). No. 5 is a Scale of Log. sines of angles from 6° to 90°. No. 6 is a Scale of Log. tangents of angles from 6° to 45°. The two outer Scales A and B have a number of special marks (gauge points) upon them, viz.: marks for square and cube roots; a log$_e$ mark; $\pi$; $\pi/4$; $g$ (gravity, English units); E.H.P. (electrical horse power); and $gF$ (gravity, French units). All the scales and sub-divisions are logarithmic; the distances between the figures 1, 2, 3, 4, . . . 10 gradually diminish, and so does the room for subdivision. This explains why a long scale permits of greater accuracy of reading than a short one. Any value may be assigned to the figures of the scales, but the same value must be adhered to in the sub-divisions. Thus 6 may stand also for ·6, ·06, ·006, 60, 600, etc., but if taken, say, to represent 60, the subdivisions would represent 61, 62, etc., and proportionally all other values would be multiplied by the same factor. Final readings, as in the case of any scale, have to be made by judgment when they come between the lines, and depend largely on the accuracy of the observer.
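For the modern reader, the logarithmic spacing just described is easy to verify numerically. In the sketch below (Python; the single-turn, 360-degree layout is an assumption for illustration, since the dial's actual graduation is not specified here), a number's position depends only on the fractional part of its common logarithm, which is why 6, 60 and 600 all fall on the same graduation and why the intervals 1-2, 2-3, ... 9-10 shrink steadily.

```
import math

def scale_angle(x):
    """Angle (degrees past the index '1') at which x sits on a circular
    logarithmic scale laid out over one full turn (an assumed layout)."""
    return (math.log10(x) % 1.0) * 360.0

# The same graduation serves 6, 60, 600, ...: only the decimal point moves.
print(scale_angle(6), scale_angle(60), scale_angle(600))

# The space available for subdividing each interval shrinks steadily:
for n in range(1, 10):
    width = (math.log10(n + 1) - math.log10(n)) * 360.0
    print(f"{n}-{n + 1}: {width:.1f} degrees")
```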
It will be found, however, that results, when they cannot be read on a line of the scale with strict accuracy, can be estimated within a fraction of 1 per cent. Multiplication.—Example: $a \times b \times c$, etc. Set arrow on B to first factor (a) on A. Opposite second factor (b) on B read product (a × b) on A. (It will be noted that these two scales A and B form a complete multiplication table, of which the figure on scale A to which the arrow on B is set is one factor.) Proceeding with this example, set cursor to the product (a × b) and turn the dial until the arrow on B coincides with the cursor. Read on A, opposite the factor (c) on B, the product $a \times b \times c$; and so on for any number of factors. When multiplying factors containing decimals, the position of the decimal point in the answer is usually best determined by a rough mental calculation. But the following rule is useful: Rule.—The number of figures in the product of any two factors equals the number of figures in the two factors together if the product falls between the two arrows to the right of A, or one less if it falls to the left of A. Division.—$m \div n$. Set divisor (n) on B opposite dividend (m) on A. Read result $m \div n$ on A opposite the arrow on B. Fractions.—Complex fractions are but a series of multiplications and divisions. Example: $$\frac{68 \times 9 \times 32 \times 17}{15 \times 12}$$ Here one may proceed by multiplying all the figures in the numerator and dividing this result by those in the denominator, or by taking 68, dividing by 15, then multiplying by 9 and dividing by 12, and afterwards multiplying by 32 and 17, as the judgment of the operator suggests. Taking the first course: Set arrow on B opposite 68 on A. Opposite 9 on B read $68 \times 9$ on A. Without noting its value, set cursor to this product. Turn arrow on B to the cursor and opposite 32 on B read the product $68 \times 9 \times 32$ on A. Set cursor to this and turn arrow on B to cursor. Then opposite 17 on B read the product $68 \times 9 \times 32 \times 17$ on A. Now, without troubling about its value, turn the cursor to this product and set the first divisor in the denominator (15) on B to the cursor. The arrow on B then gives the result of this division on A; without noting it, turn the cursor to it, and then set the second divisor (12) on B to the cursor. Then on A opposite the arrow on B read the result of the division (1844). Rule.—The number of figures in the quotient equals the number of figures in the dividend less the sum of those in the divisors if the last divisor comes between the two arrows to the right of A, or one more if it comes to the left of A. This is the converse of the rule for multiplication. In the present example the last divisor falls between the two arrows to the left of A, and as the dividend contains 7 figures and the divisors 4 figures, we add 1 to the difference between the two, i.e., $7 - 4 = 3$ and $3 + 1 = 4$ figures; that is, the quotient has 4 figures (1844). The above operations take less time to perform than to describe. The example has been taken in detail, for illustration; once the operations of multiplication and division are grasped, many short cuts in manipulation will be found. **Squares and Square Roots.**—In fixing the magnitude of the square of a number, remember that the square of any number between 1 and 10 lies between 1 and 100, and the square of any number between 10 and 100 lies between 100 and 10,000. The square of any number less than unity is less than the number.
The square root of any number less than unity is greater than the number.

$$\left(\tfrac{1}{10}\right)^2 = \tfrac{1}{100}, \quad \left(\tfrac{3}{10}\right)^2 = \tfrac{9}{100}, \quad \sqrt{64} = 8, \quad \sqrt{\tfrac{1}{100}} = \tfrac{1}{10}$$

The numbers on Scale B are the squares of those on the Square Root Scale, and figures on one are compared with those on the other by means of the cursor. The square root scale extends round two circles, but it is only one scale in reality, as will be seen by following round the numbers from 1 to 10.

**Rule.**—If the number has an odd number of digits, read the square root on the smaller circle; if it has an even number of digits, read the root on the larger circle of the square root scale.

**Example:** Find the square of 21·5. Set cursor over 21·5 on the Square Root Scale. Read 462·25 on Scale 2.

**Example:** Find the square root of 2365. Set cursor over 2365 on Scale 2. Read the square root, 48·6, on the Square Root Scale.

**Cubes.**—$a^3$. Use Scales 1, 2 and 4. Set arrows on Scales 1 and 2 in line. Set cursor to (a) on Scale 4. Read $a^2$ on Scale 2 under the cursor. Set arrow of Scale 2 under the cursor. Read $a^3$ on Scale 1 opposite (a) on Scale 2.

**Cube Roots.**—$\sqrt[3]{a}$. Use Scales 1, 2 and 4. Set cursor to (a) on Scale 1. Turn dial till the number on Scale 4 under the cursor is the same as that on Scale 1 opposite the arrow on Scale 2. This number is the cube root of (a).

**Circumference of a Circle.**—Use Scales 1 and 2. Set arrow of Scale 2 opposite the diameter on Scale 1. Read the circumference on Scale 1 opposite $\pi$ on Scale 2.

**Area of a Circle.**—Use Scales 1, 2 and 4. Set arrows of Scales 1 and 2 in line. Set cursor to the diameter on Scale 4. Set arrow of Scale 2 under the cursor. Read the area on Scale 1 opposite $\pi/4$ on Scale 2.

**Reciprocals.**—Values of expressions such as $\frac{1}{a}$, $\frac{1}{a^2}$, $\frac{1}{\sqrt{a}}$, $\frac{1}{\sin a}$, etc., are easily obtained from Scales 1 and 2. Whatever the position of the dial, the number on Scale 2 opposite the arrow on Scale 1 is the reciprocal of the number on Scale 1 opposite the arrow on Scale 2.

**Common Logarithms.**—Set cursor to the number on Scale 2 whose log. is required. Read the mantissa of the log. on log. Scale No. 3. The characteristic of the log., if positive, is one less than the number of figures to the left of the decimal point; if negative, one more than the number of cyphers to the right of the decimal point.

**Example:** Find log. 2675. Set cursor to 2675 on Scale 2. Read the mantissa of the log., 427, on the log. scale. There are four figures to the left of the decimal point in 2675, therefore the characteristic is 3 and the complete log. of 2675 is 3·427.

**Example:** Find log. of 50·75. Set cursor to 50·75 on Scale 2. Read the mantissa of the log., 7055, on the log. scale. As the characteristic is 1, log. 50·75 = 1·7055.

**Hyperbolic Logarithms.**—These equal the common logs. multiplied by $\log_e 10$ (2·3026), marked as a gauge point on Scales A and B.

**Nth Powers and Roots.**—These are got from the following relationship. If $A$ is a number and $x = A^n$, then

$$\log x = n \log A$$

and the log. values are obtained as explained above.

**Sines and Log. Sines.**—For angles 6° to 90°. Example: Find the values of the sine of 37°. Set cursor to 37° on the Sine Scale. Read the natural sine, ·602, on Scale B. Read the log. sine (mantissa ·778) on the Scale of Logs.

**Natural or Log. Cosines.**—Deduce from: cosine of angle = sine of complement. Example: $\cos 60° = \sin 30°$.

**Natural or Log. Tangents.**—6° to 45°. Set cursor to the angle on the Scale of Log. Tangents. Read the natural tangent value on B and the log. tan value on the Log. Scale.
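The way one decade of numbers wraps round a circle, and the way the square-root scale spreads it over two circles, can be made concrete in a few lines of Python (again an editorial aside, not from the manual; `scale_B` and `sqrt_scale` are our names):

```python
import math

def scale_B(theta):
    """Value shown on the one-turn logarithmic scale at angle theta (deg)."""
    return 10 ** (theta / 360)

def sqrt_scale(theta, circle):
    """Value on the two-circle square-root scale at the same angle:
    circle 0 runs 1..sqrt(10), circle 1 runs sqrt(10)..10."""
    return 10 ** ((theta + 360 * circle) / 720)

theta = 200.0
x = scale_B(theta)
# The cursor lines up x on Scale B with sqrt(x) on one circle and
# sqrt(10 x) on the other -- hence the odd/even digit rule above.
print(x, sqrt_scale(theta, 0) ** 2, sqrt_scale(theta, 1) ** 2)
```

Multiplication on such a dial is simply the addition of angles, which is what the arrow-and-cursor drill above performs mechanically.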
**Tangents and Cotangents** can be deduced from

$$\tan = \frac{\sin}{\cos}, \qquad \cot = \frac{\cos}{\sin} = \frac{1}{\tan}$$

**Fractions to Decimals.**—Set the numerator on Scale 1 to the denominator on Scale 2. Read the decimal value on Scale 1 opposite the arrow on Scale 2.

**Example:** $\frac{15}{16}$. Set 15 on Scale 1 to 16 on Scale 2. Read ·9375 on Scale 1 opposite the arrow on Scale 2.

**Decimals to Fractions.**—Set the arrow on Scale 2 to the decimal value on Scale 1. Read any fraction that coincides on Scales 1 and 2.

**Example:** ·875. Set arrow on Scale 2 to ·875 on Scale 1. Read on Scales 1 and 2 $\frac{7}{8}$ or any other equivalent fraction such as $\frac{14}{16}$, $\frac{21}{24}$, $\frac{35}{40}$, etc.

**Proportion.**—Set the question in fractional form $\frac{a}{b} = \frac{x}{c}$, where $x$ is the unknown and may be the numerator or denominator as convenient. Then by cross multiplication $a \times c = b \times x$ and $x = \frac{a \times c}{b}$.

**Example:** 15 men do a task in 28 days. In how many days should 21 men do it? Obviously more men will do it in less time, and we get the proportion

$$\frac{21}{15} = \frac{28}{x}, \qquad x = \frac{28 \times 15}{21} = 20 \text{ days}$$

Set arrow on Scale 2 opposite 28 on Scale 1. Turn cursor to 15 on Scale 2. Turn dial till 21 on Scale 2 comes under the cursor. Read the answer, 20 days, on Scale 1 over the arrow on Scale 2.

**Percentages.**—Example: A man receives £2 9s. per annum on £17. What is the rate per cent.? Reducing to shillings we get 49s. on 340s. If $x$ = rate of interest, then

$$\frac{49}{340} = \frac{x}{100}, \qquad x = \frac{100 \times 49}{340} = 14.4 \text{ per cent.}$$

This is worked on the slide rule as the previous example.

**Example 2.** A machine A does work in 43 minutes which occupies a machine B 54 minutes. How much per cent. more efficient is A? The answer is found simply from the times occupied; taking B to represent 100,

$$\frac{\text{efficiency of A}}{100} = \frac{54}{43}, \qquad x = 100 \times \frac{54}{43} = 125.6,$$

so A is 25·6 per cent. more efficient, worked as above.

**Example 3.** Certain parcels contain respectively 8, 9, 24, and 32 articles. Express these as percentages of the whole. $8 + 9 + 24 + 32 = 73$. Set 73 on B under 100 on A. Opposite 8, 9, 24, and 32 on B read 10·9%, 12·3%, 32·9%, and 43·8% respectively.

Note.—The percentages of any number of groups can be read off in this way in one setting.

**Gauge Points.**—The gauge points, e.g., $\pi$, $\pi/4$, etc., round the circumference of the Scales can be used as factors in any multiplication or division relating to areas or circumferences of circles, etc., and by means of the cursor can be combined with squares, sines, cosines and logs. in almost any desired calculation.

**THE CIRCULAR CALCULATOR**

This side of the instrument consists of a dial and a cursor, rotated in each case by their serrated edges. A fixed red datum line is also provided. This side of the instrument is operated in a somewhat different manner to the slide rule, but will present no difficulty to the user after a little practice; indeed, it is quite possible that he will find it easier at first. Advantage may also be taken of the "Long Scale" No. 4 for multiplication and division. Simplicity of operation, and of reading, is also the keynote of the Calculator, and to make this effective it should always be borne in mind that:

- The Red Datum line is only used for the first multiplier and the final answer.
- The Dial is turned only for multipliers.
- The Cursor is turned only for divisors.
The scales, reading inwards from the outer edge, are as follows:

1. The outer multiplying and dividing scale.
2. A scale of reciprocals of the numbers on Scale 1.
3. A scale of logarithms of the numbers on Scale 1.
4. Made up of 3 circles, which give the cube roots of numbers on Scale 1, and which may also be used as a "Long Scale" for multiplication and division.
5. A scale of sines of angles, graduated round the inner and then continued round the outer circumference of a common circle. The scale ranges from 35 minutes to 90 degrees.
6. A scale of tangents of angles from 5 deg. 45 mins. to 45 degs.

Scale No. 1 is 3 1/16 ins. diameter, and thus has a circumference of 9·62 ins., or practically the equivalent of a 10-inch straight rule. Scale No. 4 has a total length of nearly 20 inches.

Multiplication of Two Factors on the Short Scale No. 1.

Example 1: Multiply ·0347 by 2·8. Set 347 on Scale 1 under the red datum line. This lies between the 30 and 35, the exact point being the 9th division past the 30 (to make the 345) and two-fifths of the next division (to make the 347). Set cursor to 1. Set dial till 28 comes under the cursor. Read the answer (just over 97) on Scale 1 under the datum. By visual inspection it will be seen that the answer must be in the neighbourhood of ·098. Therefore we write the answer as given by the Calculator as ·097. By actual multiplication the correct answer is ·09716, showing how close is the approximation by the instrument.

Multiplication of Two Factors on the Long Scale (No. 4).

Example 2: Multiply 12·8 by 5·62. Set 128 under the datum. Set cursor to 1. Set dial till 562 comes under the cursor (this is the first small division after the 56 on the Long Scale). Read the answer, just over 71·9, under the datum.

Multiplication of Three Factors on the Short or Long Scale.

Example 3: The method is precisely the same whichever scale is used, so it will be described only for the Short Scale. Find the product of ·0347 × 2·8 × 63·5. Set 347 on Scale 1 (or Scale 4 if using the Long Scale) under the datum. Set cursor to 1. Set dial till 2·8 comes under the cursor. All the above settings are shown in Example 1. Set cursor to 1. Set dial till 63·5 comes under the cursor. This is the 7th division past the 60. Read the answer, 6·17, under the datum. The position of the decimal point is judged by inspection. By actual multiplication the correct answer is 6·16966, showing a close approximation by the instrument.

Multiplication of Four or more Factors on the Short or Long Scale.

Example 4: Find the product of ·0347 × 2·8 × 63·5 × 4·9. Proceed exactly as shown in Example 3 above to find the product of ·0347 × 2·8 × 63·5, and then again: Set cursor to 1. Set dial till 4·9 comes under the cursor. Read the product under the datum. This, if using the Short Scale, comes just short of midway between the 30 and the first division past the 30, and we should estimate the answer at 30·23 (midway being 30·25). The position of the decimal point is mentally estimated as follows: 2·8 is roughly 3, and 3 × ·0347 is roughly ·1; ·1 × 63·5 is 6·35, and 6·35 × 4·9 is roughly 31. We know, therefore, that the product will have two whole numbers and will come in the neighbourhood of 30. If the Long Scale, No. 4, is used instead of the Short Scale, No. 1, the succession of operations is precisely the same, but the setting calls for a little more care, as the factors are spread over a scale extending round three circles, and the answer may fall on any one of them.
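The datum/dial/cursor drill is a small state machine, and can be mimicked in Python to check any of the worked examples. This is an editorial sketch, not from the manual; the class and method names are ours:

```python
import math

A = lambda v: (math.log10(v) % 1) * 360   # angle of v round the dial face

class Calculator:
    """Toy model of the Calculator side: a rotating dial, a fixed red
    datum at 0 degrees, and a movable cursor (angles in degrees)."""
    def __init__(self):
        self.dial = 0.0
        self.cursor = 0.0

    def set_under_datum(self, v):     # first multiplier only
        self.dial = -A(v)

    def cursor_to(self, v):           # divisors (including the 1 between multipliers)
        self.cursor = self.dial + A(v)

    def dial_to_cursor(self, v):      # multipliers
        self.dial = self.cursor - A(v)

    def read_datum(self):             # significant figures; decimal point by judgment
        return 10 ** (((-self.dial) % 360) / 360)

c = Calculator()
c.set_under_datum(0.0347)   # Example 1: .0347 x 2.8
c.cursor_to(1)
c.dial_to_cursor(2.8)
print(c.read_datum())        # ~9.716 -> written as .09716 by inspection
```

Each extra factor is one more `cursor_to(1)` / `dial_to_cursor(f)` pair, exactly as in Examples 3 and 4.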
The position of the decimal point is fixed by mental calculation as explained above, and in the problem above the answer (30·23) falls on the middle one of the 3 circles.

Multiplication of an odd number of factors using Scales 1 and 2 in conjunction.

Example 5: Find the product of 8·42 × 16·16 × ·422 (3 factors). Set 842 on Scale 1 under the datum. Set cursor to 1616 on Scale 2. Set ·422 on Scale 1 under the cursor. Read the answer, 57·4, on Scale 1 under the datum. By actual multiplication the correct answer is 57·42036. The decimal point is fixed mentally in this way: ·422 is roughly ·45; ·45 × 8·42 is roughly 4, and 4 × 16·16 is roughly 65. Therefore there are two whole numbers in the answer.

If we wish to find the product of an even number of factors we proceed as in the above example, but make it into an odd number by the addition of 1 as a factor, which does not alter the final result. Thus ·354 × 29·4 × 63·6 × ·862 should be worked as ·354 × 29·4 × 63·6 × ·862 × 1.

Division on the Short Scale.—Divide 7,256 by 13·85. Set 7256 on Scale 1 under the datum. Set cursor to 13·85. Set 1 to the cursor. Read the answer, 524, under the datum. It is obvious by inspection that the answer will have three whole numbers, and so we fix the decimal point after the 4. The correct answer is 523·9, and when the example was worked out on the Long Scale this answer was obtained.

Fractions.—Consider first a fraction with two factors in the numerator and one in the denominator, worked out on the Short Scale.

Example 6: Solve

$$\frac{676.9 \times 364}{114.2}$$

Set dial till 6769 comes under the datum. Set cursor to 1142. Set dial till 364 comes under the cursor. Read the answer, 2158, under the datum. The correct answer is 2157·5, and when worked out on the Long Scale the answer came barely 2158.

Consider now fractions with several factors in numerator and denominator.

Example 7: Solve

$$\frac{19.5 \times 66.6 \times .0042}{8.9}$$

Work this as

$$\frac{19.5}{8.9} \times 66.6 \times .0042 \times 1$$

Set 19·5 under the datum. Set cursor to 8·9. Set 66·6 to the cursor. Set cursor to 1. Set ·0042 to the cursor. Read the answer, ·613, under the datum, the decimal point being fixed by a rough calculation as previously described. (The correct answer is ·61287.)

Example 8: Solve

$$\frac{13.8 \times 723.6}{15.8 \times 176 \times 2.42}$$

This would be worked as

$$\frac{13.8 \times 723.6 \times 1}{15.8 \times 176 \times 2.42}$$

and, as in Example 7, taking the factors alternately from the numerator and the denominator. Answer by Calculator 1·487. Correct answer 1·484 (a close approximation).

Rapid Action with the Calculator.—As there are several ways of working a problem with arithmetic, so there are several ways of using the Fowler Calculator, and movements may be curtailed by using the reciprocal Scale No. 2 in conjunction with the primary Scale No. 1, as shown in the following examples.

Example 9: Solve

$$\frac{6734}{9.6 \times 142.5}$$

where there is an even number of factors in the denominator. Set 6734 on Scale 1 under the datum. Set cursor to 96 on Scale 1. Set 1425 on Scale 2 under the cursor. Read the answer, 4·92, on Scale 1 under the datum (3 movements). (The position of the decimal point is fixed mentally. The correct answer by multiplication and division is 4·923.)

Example 10: Solve

$$\frac{4276}{3.42 \times 18.7 \times 32.02}$$

Here the artifice of inserting the factor 1 is adopted to make the denominator contain an even number of factors, thus:—

$$\frac{4276}{3.42 \times 18.7 \times 32.02 \times 1}$$

Set 4276 on Scale 1 under the datum. Set cursor to 342 on Scale 2. Set 187 on Scale 2 under the cursor.
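The saving in Example 9 comes from the identity $\frac{1}{b \times c} = \frac{1}{b} \times \frac{1}{c}$: a factor set on the reciprocal Scale 2 enters the chain as its reciprocal, so a division becomes a multiplication and two movements disappear. A self-contained Python check (editorial, not from the manual):

```python
import math

A = lambda v: (math.log10(v) % 1) * 360      # angle of v round the dial

# Example 9 in three movements: 6734 / (9.6 x 142.5)
dial = -A(6734)                               # 6734 under the datum
cursor = dial + A(9.6)                        # cursor to the divisor 9.6
dial = cursor - A(1 / 142.5)                  # 142.5 on Scale 2 = 1/142.5 on Scale 1
print(10 ** (((-dial) % 360) / 360))          # ~4.923; decimal point fixed mentally
```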
Set cursor to 3202 on Scale 1. Set 1 under the cursor. Read the answer, 2·088, on Scale 1 under the datum (5 movements).

Exercises with the Reciprocal Scale No. 2.

Example 11: Find the decimal equivalent of $\frac{1}{6.46}$. Set cursor over 646 on Scale 1. Read 1548 on Scale 2 under the cursor. From inspection of the fraction it is obviously between one-sixth and one-seventh, and without hesitation we therefore write down its value as 0·1548.

Example 12: Find the decimal value of $\frac{1}{3475}$. Set cursor over 3475 on Scale 1. Read 2878 on Scale 2. The fraction is manifestly less than $\frac{1}{3000}$ and therefore will require 3 cyphers after the decimal point, so we write it as 0·0002878. In setting 3475 under the cursor we note it falls between 34 and 35, and that between 34 and 35 there are two graduations, each advancing 5, thus: 340, 345, 350. About half-way between 345 and 350 is 347, and a shade past this is 3475. Reading on Scale No. 2, the cursor is just short of the value 288, and so we should estimate it as 2878, with the answer as above.

Note that in reading decimal values of fractions less than one-tenth there will be one cypher placed after the decimal point and preceding the number as read from the reciprocal scale. With values less than one-hundredth and greater than one-thousandth, 2 cyphers, and so on.

Example 13: Find the decimal value of $\frac{5}{7}$. Set 5 on Scale 1 under the datum. Set cursor to 7 on Scale 1. Set dial till 1 comes under the cursor. Read 714 under the datum.

Example 14: Find the fractional value of ·1428. Set (anti-clockwise) 1428 on Scale 2 under the datum. Read 7 on Scale 1 under the datum. The fractional value is therefore $\frac{1}{7}$.

Example 15: Find the fractional value of ·00653. Set 653 on Scale 2 under the datum. Read 153 on Scale 1 under the datum. The fractional value is therefore $\frac{1}{153}$. Note.—The number of figures before the decimal point in the answer is one more than the number of cyphers following the decimal point in the given number.

Use of the Logarithmic Scale No. 3.—

Example 16: Find the log. of 2675. Set cursor over 2675 on Scale 1. Read the mantissa on Scale 3. As there are 4 figures in the number, all to the left of the decimal point, the characteristic is positive and its value is 3. The complete log. is therefore 3·427.

Example 17: Find the log. of 0·024076. Set cursor over 24076 on Scale 1. This is about one-third of the way between 24 (which represents 240) and the first graduation after it, which represents 242. Read the mantissa, 3815, on Scale 3. As the number is less than unity the characteristic is negative, and as there is one cypher to the right of the decimal point its value is 2. Therefore the log. of 0·024076 is $\bar{2}$·3815.

Hyperbolic Logarithms.—These equal the common logarithms multiplied by $\log_e 10$, marked as a gauge point round the outer circle. The complete common log. of the number is first obtained using Scales 1 and 3, and after insertion of the characteristic it is multiplied by the factor $\log_e 10$ (2·3026) in the manner already described.

Examples of Powers and Roots.—

Example 18: Find the value of (36·7)². This can be done by multiplying 36·7 by 36·7, or by the method below. Set 36·7 on Scale 1 under the datum. Set cursor over 36·7 on Scale 2. Turn dial till 1 comes under the cursor. Read 1347 under the datum on Scale 1.

Example 19: Find the value of (16·4)³. This can be done by multiplying 16·4 × 16·4 × 16·4, or by first finding the square as in Example 18 and then multiplying this result on Scale 1 by 16·4.
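The characteristic rule used in Examples 16 and 17 is easy to encode. The sketch below (editorial, names ours) returns the characteristic and the Scale 3 mantissa for any input:

```python
import math

def common_log(x):
    """Characteristic by the manual's rule plus the mantissa read on
    Scale 3: positive -> figures left of the point, less one;
    below unity -> minus (cyphers after the point, plus one)."""
    mantissa = math.log10(x) % 1
    if x >= 1:
        characteristic = math.floor(math.log10(x))
    else:
        cyphers = 0
        while x < 0.1:
            x *= 10
            cyphers += 1
        characteristic = -(cyphers + 1)
    return characteristic, round(mantissa, 4)

print(common_log(2675))      # (3, 0.4273)  -> 3.427
print(common_log(0.024076))  # (-2, 0.3816) -> read as bar-2.3815 on the dial
```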
Finding Nth Powers and Mth Roots of Numbers with Logarithms.—Let A be a number and suppose $x = A^n$, where n may be a whole number or a fraction. Then log. x = n log. A.

Example 20: Find the 5th root of 51·53, i.e., find $(51·53)^{1/5}$. Here n = 1/5 and A = 51·53. Set cursor over 51·53 on Scale 1. This is between the 3rd and 4th graduations after 50. Read the mantissa of the log. on Scale 3, viz., ·713. The number is greater than unity, therefore the log. is positive. There are two figures to the left of the decimal point, therefore the value of the characteristic is 1, and log. 51·53 = 1·713. One-fifth of log. 51·53 is 0·3426. Set cursor over 3426 on Scale 3 and read the fifth root of 51·53 on Scale 1, viz., 2·2.

Example 21: Find the value of (2·8)⁴, i.e., $a = 2·8 × 2·8 × 2·8 × 2·8$.

First method, by taking logs. Set 28 on Scale 1 under the cursor. Read the mantissa of the log. on Scale 3, viz., ·447. As there is only one figure to the left of the decimal point in 2·8, there will be no characteristic. Multiply ·447 by 4 mentally to get 4 log. 2·8. This equals 1·788. ·788 is therefore the mantissa of log. (2·8)⁴. Set 788 on Log. Scale 3 under the cursor. Read 614 under the cursor. The answer will have two whole numbers in front of the decimal point. Therefore (2·8)⁴ = 61·4.

Second method. Multiply out 2·8 × 2·8 × 2·8 × 2·8 on Scale 1.

Third method. Multiply out 2·8 × 2·8 × 2·8 × 2·8 × 1, using Scales 1 and 2 in conjunction as previously described.

Square Roots.—

Example 22: Find the square root of 1849. Set 1849 on Scale 1 under the datum. Set cursor to 1. Turn the dial until the same number comes simultaneously under the datum on Scale 1 and the cursor on Scale 2. This number, 43, is the square root of 1849.

It will be observed that two answers could be obtained when setting in this manner: we get 43 coming on Scales 1 and 2 when the unity line on the dial comes opposite the mid-point between the datum and cursor on one side; or we could get 13·6 when the unity line falls midway between them on the other side. This second value, 13·6, is the square root of the original number 1849 divided by the square root of 10. Thus 13·6 = √1849 ÷ √10.

If we had to find the fourth root of a number, we should first find its square root, as in the example above, and then find the square root of this square root.

Cube Roots.—These can be read directly from Scale 1 on one of the three circles which comprise the Long Scale, No. 4.

Example 23: Find the cube root of 964. Set 964 on Scale 1 under the datum. Read 9876 on the outer of the 3 circles of Scale 4. The cube root is therefore 9·876.

Example 24: Find the cube root of 1430. Set 1430 on Scale 1 under the datum. Read 11278 on the inner of the 3 circles of Scale 4. It is obvious that the cube root lies between 10 and 20, and therefore must be read on the inner circle.

Other roots can be obtained by taking logarithms as in Example 20; or, if a sixth root were required, it could be obtained by taking the square root of the cube root of the number.

Sines, Tangents, etc.—The values of sines, tangents, etc., are read from the scales of angles, No. 5 and No. 6, by means of the cursor: natural sin. or natural tan. on Scale 1; log. sin. or log. tan. on Scale 3. Cosine, cotangent, secant, and cosecant are deduced from the following relationships:

$$\cos A = \sin (90^\circ - A); \quad \cot A = \frac{1}{\tan A}; \quad \sec A = \frac{1}{\cos A}; \quad \csc A = \frac{1}{\sin A}$$

The Scale of Sines, No. 5, extends twice round the circumference of the circle: the inner gives angles between 35 mins. and 5 degs. 45 mins., with values increasing from 0·01 to 0·1, and the outer gives angles between 5 degs. 45 mins. and 90 degs., with values increasing from 0·1 to 1·0.
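Example 20's procedure — read the log, divide by n, read the antilog — is the whole algorithm. A two-line Python check (editorial, not from the manual):

```python
import math

def nth_root_by_logs(a, n):
    """Example 20's drill: take the common log, divide by n, antilog."""
    return 10 ** (math.log10(a) / n)     # log 51.53 = 1.7121; /5 = .3424

print(nth_root_by_logs(51.53, 5))        # ~2.20, the manual's reading of 2.2
```

With the manual's rounded mantissa (log 51·53 taken as 1·713) one-fifth is ·3426, which still reads 2·2 on Scale 1.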
Example 25: Find the value of the natural sin. of 4° 40'. Set cursor over 4° 40' on Scale 5. Read natural sin. 0·0813 on Scale 1. The reading lies on the inner scale at 813, but as the sines of all angles on the inner circle are between 0·01 and 0·1 we write down the value as 0·0813.

Example 26: Find the value of the natural sin. of 20° 30'. Set cursor over 20° 30' on Scale 5. Read sin. 0·3502 on Scale 1. The angle being on the outer circle of Scale 5 and exceeding 5° 45', its value lies between 0·1 and 1·0. Between 20° and 25° the scale is graduated at intervals of 20', so that 20° 30' falls midway in the second interval following 20°.

Example 27: Find the value of cosecant 20° 30'. Set as above and read the value of $\frac{1}{\sin 20° 30'}$, which is cosec 20° 30', on Scale 2. This value, reading anti-clockwise, is 2·855.

Example 28: Find the value of cosine 48°. This equals sin (90° − 48°) = sin 42°. Set 42° on Scale 5 under the cursor. Read cosine 48° on Scale 1 under the cursor = 0·669. On the reciprocal Scale No. 2 is shown the value of the secant of 48°, which equals 1·494.

Example 29: Find the values of tan 25° and cotan 25°. Set 25° on Scale 6 under the cursor. Read tan 25° = ·466 on Scale 1 and cotan 25° = 2·144 on Scale 2 under the cursor.

Note: If log. values of the functions are required they must be read on the Log. Scale No. 3. In reading the values of log. sines of angles, the characteristic of the logs. for all angles between 35 mins. and 5 degs. 45 mins. is 8, and for all angles between 5 degs. 45 mins. and 90 degs. is 9. The mantissa only of the log. is read on Scale 3.

Example 30: Find the value of log. sin 27° 20'. Set cursor over 27° 20' on Scale 5. Read the mantissa, ·662, on Scale 3. The complete log. is therefore 9·662.

Example 31: Find the value of log. sin 4° 25'. Set cursor over 4° 25' on Scale 5. Read the mantissa, ·8865, on Scale 3. The complete log. is therefore 8·8865.

Measurement of Circles.—

Example 32: Find the area of a circle 3½ ins. diam. Area = $d^2 \times \frac{\pi}{4}$ = 3·5 × 3·5 × ·7854. Set 3·5 on Scale 1 under the datum. Set cursor to 3·5 on Scale 2. Turn dial till the gauge point π/4 on the outer circle comes under the cursor. Read the area, 9·62 square inches, on Scale 1 under the datum.

Example 33: Find the circumference of a circle 9·3 ins. in diameter. Set 93 on Scale 1 under the datum. Set cursor to 1. Turn dial till π (the gauge point on the outer circle) comes under the cursor. Read the circumference, 29·2, under the datum on Scale 1. It can of course be worked out on the Long Scale if greater accuracy is required, when 29·22 will be obtained.

Example 34: Find the diameter of a circle of area 227 square inches. Diameter = $\sqrt{\text{area}} \times C$, where C = 1·1284 $\left(= \sqrt{4/\pi}\right)$ and is marked as a gauge point on the outer circle. Set 227 on Scale 1 under the datum. Set cursor to 1. Turn dial till the same number comes under the datum on Scale 1 as under the cursor on Scale 2. This is the square root of 227. Turn cursor to 1. Turn dial till C comes under the cursor. Read the answer, 17, under the datum on Scale 1.

Discount.—

Example 35: What is the wholesale price of an article subject to a discount of 12½ per cent., the retail price of which is 52/6? Set 52·5 (representing 52/6) on Scale 1 under the datum. Set cursor to 100. Turn dial till 87·5 (100 less 12½ per cent.) comes under the cursor. Read nearly 46 under the datum, which we should write as 45/11.
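The trigonometric readings above are quickly verified. The following Python lines (an editorial check, not part of the manual) reproduce Examples 26–29, with the reciprocal Scale 2 supplying the cosecant, secant and cotangent:

```python
import math

deg = lambda d, m=0: math.radians(d + m / 60)

s = math.sin(deg(20, 30))
print(round(s, 4), round(1 / s, 3))   # 0.3502 on Scale 1; cosec 2.855 on Scale 2

c = math.sin(deg(90 - 48))            # cosine from the sine of the complement
print(round(c, 3), round(1 / c, 3))   # cos 48 = 0.669; sec 48 = 1.494

t = math.tan(deg(25))
print(round(t, 3), round(1 / t, 4))   # tan 25 = 0.466; cotan 2.1445, read 2.144
```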
Bioavailability of Beclomethasone From Two HFA-BDP Formulations With a Spacer Amira SA Said, Salahdein AbuRuz, and Henry Chrystyn **BACKGROUND:** The drug delivery characteristics of each inhaler/spacer combination are unique. The spacer size as well as the presence of electrostatic charge greatly influence the inhaler dose emission and in vivo delivery. Using a previously developed urinary pharmacokinetic method, we have measured the relative lung and systemic bioavailability of beclomethasone dipropionate (BDP) after inhalation from 2 hydrofluoroalkane-beclomethasone dipropionate (HFA-BDP) formulations when used with a spacer. **METHODS:** 12 healthy volunteers received 8 randomized doses, separated by 7 d, of inhaled BDP from either the Clenil pressurized metered-dose inhaler (pMDI; 250 μg) or the breath-actuated Qvar Easi-Breathe inhaler (100 μg), each used alone or with a spacer. The amounts of BDP excreted in urine and retained in the spacer were assayed using a liquid chromatographic mass spectrometric method. The spacer was assessed after washing with a detergent solution that was either rinsed or not rinsed with water. In addition, the aerodynamic characteristics of each inhaler/spacer combination were assessed using the Andersen Cascade Impactor operated at 28 L/min with a 4-L inhalation volume. The amounts of BDP deposited in the induction port, the spacer, and the various Andersen Cascade Impactor stages were determined. **RESULTS:** The in vivo 30-min urinary excretion and the in vitro fine particle dose results were only slightly affected by adding the spacer to the Clenil pMDI or the Qvar Easi-Breathe inhaler. However, the spacer significantly reduced drug particle impaction in the oropharynx and minimized deposition in the gastrointestinal tract. Therefore, using spacers with BDP inhalers is associated with a more favorable therapeutic ratio because they have little effect on lung dose while significantly reducing throat deposition. Improved lung deposition was achieved with unrinsed spacers compared to spacers rinsed with water. **CONCLUSION:** The difference in the BDP particle size between formulations as well as the spacer size greatly affected drug deposition in different regions of the respiratory tract. *Key words:* beclomethasone dipropionate; urinary excretion; inhalation; spacers; relative lung bioavailability. [Respir Care 2019;64(10):1222–1230. © 2019 Daedalus Enterprises] **Introduction** Inhaled corticosteroids (ICS) have long been recognized as the cornerstone anti-inflammatory agent for asthma management in both adults and children, as recommended by the British Guideline on the Management of Asthma.\(^1\) ICS can improve lung function, control symptoms, increase exercise capacity, and reduce disease flare-ups. Yet many factors can influence the effectiveness of ICS, such as the aerosol-generating system, the particle size distribution of the inhaled aerosol, and the patient's inhalation pattern. Drs Said and AbuRuz are affiliated with the Clinical Pharmacy Department, College of Pharmacy, Al Ain University of Science and Technology, Al Ain, United Arab Emirates. Prof Chrystyn is affiliated with Inhalation Consultancy Limited, Leeds, United Kingdom. Dr AbuRuz is also affiliated with the Department of Clinical Pharmacy, Faculty of Pharmacy, University of Jordan, Amman. The authors have disclosed no conflicts of interest.
Correspondence: Amira SA Said PhD, Department of Clinical Pharmacy, College of Pharmacy, Al Ain University of Science and Technology, Al Ain, United Arab Emirates. E-mail: firstname.lastname@example.org. DOI: 10.4187/respcare.06689 Despite the fact that most patients cannot demonstrate a correct inhalation technique, the pressurized metered-dose inhaler (pMDI) is still the most commonly prescribed inhaler device in clinical practice.\textsuperscript{2} Patients frequently fail to synchronize aerosol actuation with inhalation or to inhale slowly after actuation of the inhaler. Traditional pMDIs can deliver less than one third of the emitted dose to the lung, with the rest of the medication being deposited in the oropharynx.\textsuperscript{2} The development of spacers was an important addition to pMDIs because larger drug particles are retained on spacer walls by impaction, thus reducing oropharyngeal deposition. As a result, patients may experience fewer local side effects from steroid aerosols, such as oral thrush, voice hoarseness, coughing, and throat discomfort.\textsuperscript{3,4} For inhaled beclomethasone therapy, reducing oropharyngeal deposition is of critical importance because this drug has low first-pass metabolism compared to other ICSs. Thus, high oropharyngeal deposition can contribute to systemic side effects without any increase in clinical benefit. In addition, spacers increase the time available for propellant evaporation and reduce both the size and speed of the aerosol particles. Spacers reduce the need for patient coordination between actuation and inhalation of the aerosol.\textsuperscript{5} However, spacers can improve lung drug delivery only in patients with poor inhalation techniques; no additional benefits were observed in patients with good inhalation techniques.\textsuperscript{6} Different spacer/inhaler combinations will have different drug-delivery characteristics. Therefore, for optimal device selection, the delivery characteristics of each of these combinations should be fully assessed. Currently, 2 brands of hydrofluoroalkane-beclomethasone dipropionate (HFA-BDP) pMDIs are available in the United Kingdom: Clenil Modulite (Chiesi Limited, Manchester, United Kingdom) and QVAR (Teva Pharmaceutical Industries, Petah Tikva, Israel). Because these aerosols are not equipotent, the Medicines and Healthcare Products Regulatory Agency advised that HFA-BDP pMDIs should be prescribed by brand name to limit confusion and avoid errors in prescribing. On the other hand, Clenil Modulite is equivalent to Becotide (GlaxoSmithKline, Brentford, United Kingdom), a chlorofluorocarbon (CFC)-BDP innovator product, so a straightforward substitution of doses can be performed.\textsuperscript{7} The incorporation of BDP in solution form in the QVAR inhaler allowed the efficient delivery of extra-fine particles, which resulted in a 2–2.5-fold increase in efficacy compared to other BDP pMDI brands.\textsuperscript{8} Formulations rich in extra-fine particles such as QVAR (1.1 µm) would be expected to provide higher lung deposition and less oropharyngeal impaction. Indeed, improved penetration of these small particles into both large and small airways would offer better relief of bronchoconstriction and control of inflammation throughout the respiratory system.
High lung-deposition values of > 50% became possible only with the introduction of HFA-solution technology; dose emission from spacers is mainly dependent on the drug,\textsuperscript{9} the formulation,\textsuperscript{2,10} the spacer size,\textsuperscript{2,11} and its level of electrostatic charge.\textsuperscript{12–14} In this study, we compared the relative lung bioavailability of beclomethasone from the Clenil pMDI (250 µg, 2.9 µm) and the Qvar Easi-Breathe (100 µg, 1.1 µm) when used with a spacer. The spacer is a plastic tube, 2.5 × 3.5 cm in cross-section, with an overall length of 10 cm and a volume of 50 mL (Norton Healthcare, Harlow, United Kingdom, and GlaxoSmithKline). The relative lung and systemic bioavailability of beclomethasone after inhalation, as measured with a urinary pharmacokinetic model, has been previously reported.\textsuperscript{15} Based on this model, 3 indices can be used to describe the relative amounts of BDP deposited in the lung: the 30-min urinary excretion of BDP, beclomethasone, or beclomethasone 17-monopropionate. The 24-h urinary excretion of BDP and its metabolites allows an estimate of the total systemic bioavailability after inhalation. **Methods** **Washing of Spacers** All methods were performed in accordance with relevant regulations and guidelines. To study the effect of electrostatic charges that build up inside the spacer, the spacer was evaluated after washing with a detergent solution (Fairy Liquid, Procter & Gamble, London, United Kingdom) and then either subsequently rinsed or not rinsed with water. The spacer was left to dry at room temperature before each study. **In Vitro** According to the method described in the British Pharmacopoeia (2005),\textsuperscript{16} the Andersen Cascade Impactor (Copley Scientific Ltd, United Kingdom) operating at 28 L/min with a 4-L inhalation volume was used to characterize the emitted dose from the Clenil and Qvar aerosols. Two actuations from the 250-\(\mu\)g Clenil pMDI or 4 actuations from the 100-\(\mu\)g Qvar Easi-Breathe were introduced into the impactor for each inhaler or inhaler/spacer combination. For each inhalation method, 5 separate determinations were made. The amounts of BDP deposited in the spacer, the induction port, and the different stages of the Andersen Cascade Impactor were measured using a previously validated liquid chromatographic mass spectrometric method.\textsuperscript{15} The mass median aerodynamic diameter (MMAD), fine particle dose (FPD), and total emitted dose (TED) were calculated for each inhaler with and without the spacer using CITDAS software (Copley Scientific, Colwick, United Kingdom). The TED is the total amount of drug collected from the mouthpiece, and it is expressed with respect to the nominal dose. The FPD is the cumulative amount of drug in particles with aerodynamic diameter < 5 \(\mu\)m. The MMAD is the particle size corresponding to 50% of the dose deposited in the Andersen Cascade Impactor. **In Vivo Study** Ethical approval for the in vivo study was granted by the ethics committee at the University of Huddersfield, Huddersfield, United Kingdom. Twelve healthy, nonsmoking adults (6 male) age \(\geq 18\) years with an average FEV\(_1\) > 90% predicted consented to participate in the study. In an open-label study design, subjects were randomly assigned to different treatment categories by utilizing a table of random numbers to reduce potential bias.
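Returning to the in vitro metrics defined above: the MMAD is read off the cumulative mass distribution at 50%, and the FPD is the cumulative mass below 5 µm. The sketch below shows one common way to compute both, by log-linear interpolation between stage cut-offs. It is an illustration only — the stage cut-off diameters and deposited masses are invented for the example and are not the study's data, and the simplified FPD here is taken relative to the total recovered mass:

```python
import math

# Hypothetical Andersen stage cut-off diameters (um) at 28 L/min and
# illustrative BDP masses (ug) per stage, coarsest first -- NOT study data.
cutoff = [9.0, 5.8, 4.7, 3.3, 2.1, 1.1, 0.65, 0.43]
mass   = [8.0, 10.0, 15.0, 25.0, 20.0, 12.0, 6.0, 4.0]
filter_mass = 2.0                    # backup filter collects everything finer

total = sum(mass) + filter_mass
# Cumulative % of mass in particles finer than each stage's cut-off:
pct_finer = [100 * (sum(mass[i + 1:]) + filter_mass) / total
             for i in range(len(mass))]

def pct_finer_at(d):
    """Cumulative %-finer at diameter d, log-linear between cut-offs."""
    for i in range(1, len(cutoff)):
        if cutoff[i] <= d <= cutoff[i - 1]:
            f = ((math.log10(d) - math.log10(cutoff[i]))
                 / (math.log10(cutoff[i - 1]) - math.log10(cutoff[i])))
            return pct_finer[i] + f * (pct_finer[i - 1] - pct_finer[i])

def diameter_at(pct):
    """Inverse interpolation: diameter at a cumulative %-finer (MMAD at 50)."""
    for i in range(1, len(cutoff)):
        if pct_finer[i] <= pct <= pct_finer[i - 1]:
            f = (pct - pct_finer[i]) / (pct_finer[i - 1] - pct_finer[i])
            return 10 ** (math.log10(cutoff[i])
                          + f * (math.log10(cutoff[i - 1]) - math.log10(cutoff[i])))

mmad = diameter_at(50)                  # size below which 50% of the dose lies
fpd = total * pct_finer_at(5.0) / 100   # mass in particles < 5 um
print(f"MMAD ~ {mmad:.2f} um, FPD ~ {fpd:.1f} ug of {total:.0f} ug deposited")
```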
It was previously reported that randomization minimizes bias to a greater extent than blinding in studies of inhaled medications.\textsuperscript{17} On separate days, each subject inhaled 8 doses of BDP from either a 250-\(\mu\)g Clenil Modulite pMDI or a 100-\(\mu\)g Qvar Easi-Breathe, used alone or when attached to a spacer. The spacer arm was further divided into spacers rinsed or not rinsed with water after washing with detergent. A randomized order of inhalation doses was administered with a 7-d washout period between each study inhalation. All participants were trained in the correct inhalation technique as recommended by the manufacturer. For the Clenil pMDI, the participants were instructed to breathe out as far as comfortable and then, with the inhaler placed between the lips, to actuate the inhaler and breathe in at the same time for the full inhalation. Last, the inhaler was removed and participants held their breath for at least 10 s, followed by slow exhalation. The same inhalation procedure was repeated for the Easi-Breathe device, except that subjects were instructed to skip the coordination step between actuation and inhalation because the inhaled dose was automatically delivered during inspiration with the breath-actuated inhaler. This slow inhalation continued over 3–5 s until total lung capacity was reached. Different checkpoints were monitored to ensure that the breath-actuation step occurred, by checking sound and taste and by witnessing the movement of the device’s external lever with the dose release. Subjects were instructed to hold their breath for 10 s after inhalation, and the next dose was inhaled 30 s later.\textsuperscript{18} For inhaler use with the spacer, all participants were trained to successfully master the inhalation technique with the spacer per the manufacturer’s instructions. In summary, participants were instructed to exhale as much as possible, then to actuate the dose into the spacer followed by slow and deep inhalation for about 3–5 s, and finally to hold their breath for about 10 s. Repeated doses were separated by 30 s. All subjects were instructed to empty their bladder before each study. Urine samples were collected at 30 min after inhalation, and then cumulatively for 24 h after inhalation. All collected urine samples were frozen at \(-20^\circ\)C for subsequent analysis. The amounts of BDP and its metabolites excreted in the urine, as well as the drug amounts retained in each spacer, were determined using a previously validated liquid chromatographic mass spectrometric method.\textsuperscript{15} According to pre-study calculations, the sample size required in each study group to obtain 80% power to detect a 40% difference in lung dose was 12 subjects. Statistical analyses of the 30-min and cumulative 24-h urinary excretion of BDP inhaled from each inhaler or inhaler/spacer combination were performed using a 2-way analysis of variance test in SPSS V17.0 (SPSS, Chicago, Illinois). In addition, a 1-way analysis of variance with Bonferroni correction was used to compare the urinary excretions of the different inhaler combinations. Equivalence between different inhalation methods was identified by normalizing the 30-min and cumulative 24-h urinary excretions for the nominal dose and then log-transforming the values. From the mean square error of the analysis of variance, using subjects and inhalation method as the main factors, the mean ratio (90% CI) was calculated.
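For two inhalation methods, the ANOVA-based mean ratio described above reduces to a paired analysis of the log-transformed, dose-normalized excretions; the sketch below uses that simplification in place of the full subjects-and-method ANOVA, and the numbers are illustrative values, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical 30-min urinary BDP, % of nominal dose, for the same 12
# subjects under two inhalation methods -- illustrative values only.
method_a = np.array([1.5, 1.3, 1.6, 1.4, 1.2, 1.7, 1.5, 1.4, 1.6, 1.3, 1.5, 1.4])
method_b = np.array([1.4, 1.2, 1.5, 1.5, 1.1, 1.6, 1.4, 1.3, 1.5, 1.2, 1.4, 1.3])

# Log-transform the dose-normalized excretions (as in the Methods); with
# only two methods the subject+method ANOVA reduces to a paired analysis.
diff = np.log(method_a) - np.log(method_b)
n = len(diff)
se = diff.std(ddof=1) / np.sqrt(n)       # standard error of the mean log-ratio
t90 = stats.t.ppf(0.95, df=n - 1)        # critical value for a two-sided 90% CI

ratio = float(np.exp(diff.mean())) * 100
lo = float(np.exp(diff.mean() - t90 * se)) * 100
hi = float(np.exp(diff.mean() + t90 * se)) * 100
print(f"mean ratio {ratio:.1f}% (90% CI {lo:.1f}-{hi:.1f}%)")
# Equivalence is concluded when this 90% CI lies inside the preset limits.
```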
In line with FDA guidance, equivalence was concluded when the 90% CI of the mean ratio fell within the range of 80–120%.

**Results**

As presented in Table 1, using the spacer significantly reduced ($P < .001$) systemic delivery with both inhalers. The 24-h urinary BDP significantly decreased ($P < .001$) from 30.2 (6.6) with the Clenil pMDI alone to 17.4 (2.3) and 14.7 (1.8) with Clenil + unrinsed spacer and Clenil + rinsed spacer, respectively. With the Qvar Easi-Breathe, the 24-h urinary BDP amount was significantly reduced ($P < .001$) from 23.4 (3.9) to 15.3 (3.5) and 11.0 (2.5) with Qvar + unrinsed spacer and Qvar + rinsed spacer, respectively. All values are expressed in $\mu g$.

Similarly, in vitro data showed significant reductions ($P < .001$) in TED from 381.8 (6.3) for the Clenil pMDI alone to 163.4 (15.2) and 112.5 (8.0) for Clenil + unrinsed spacer and Clenil + rinsed spacer, respectively, and from 372.6 (27.1) for the Qvar Easi-Breathe alone to 207.5 (9.6) and 138.8 (16.5) for Qvar + unrinsed spacer and Qvar + rinsed spacer, respectively. All values are expressed in $\mu g$.

The data showed that greater in vitro TED and in vivo 24-h urinary drug amounts were obtained with the unrinsed spacers than with the spacers rinsed with water after detergent use. This corresponds with the significantly greater in vitro and in vivo drug amounts retained in the rinsed spacer compared to the unrinsed one.

On the other hand, the 30-min urinary drug amounts ($P < .05$) and the in vitro FPD were reduced when using the spacer with Clenil. However, greater decreases in lung deposition were encountered with the rinsed spacers compared to unrinsed ones. The mean (SD) in vitro FPD values were 97.6 (20.8), 93.3 (17.6), and 62.7 (8.2), and the mean (SD) 30-min urinary BDP values were 3.7 (0.6), 3.6 (0.6), and 3.3 (0.6) for the Clenil pMDI, Clenil + unrinsed spacer, and Clenil + rinsed spacer, respectively. All values are expressed in $\mu g$.

The values of 30-min urinary BDP excreted and FPD after inhalation of the Qvar Easi-Breathe study doses were similar to those with Qvar + unrinsed spacer, and significantly higher than those with Qvar + rinsed spacer. The mean (SD) FPD value of 218.0 (29.1) for the Qvar Easi-Breathe study doses was similar to that for Qvar + unrinsed spacer at 179.6 (15.1) but significantly higher than that for Qvar + rinsed spacer at 121.9 (20.9). In the same manner, the mean (SD) 30-min urinary BDP value of 3.5 (0.5) for the Qvar Easi-Breathe was similar to that for Qvar + unrinsed spacer at 3.4 (0.8) but significantly higher than that for Qvar + rinsed spacer at 3.0 (0.6). All values are expressed in $\mu g$.

The statistical comparison of the results is shown in Table 2, which presents the mean difference (95% CI) for the percent of nominal dose of BDP excreted at 30 min and 24 h after study doses with and without a spacer. Table 3 presents a summary of the mean ratio (90% CI) of BDP amounts between the 2 inhalers with and without the spacer with respect to the nominal dose.
Table 1. In Vivo and In Vitro Data After Inhalation of 8 Doses of BDP With and Without Spacer

| | Clenil pMDI (250 μg), none | Clenil + unrinsed | Clenil + rinsed | Qvar EB (100 μg), none | Qvar EB + unrinsed | Qvar EB + rinsed |
|---|---|---|---|---|---|---|
| **In vitro study** | | | | | | |
| Induction port | 251.3 (22.0) | 28.7 (7.1) | 24.3 (6.7) | 121.8 (15.5) | 7.5 (3.6) | 3.6 (1.0) |
| Spacer deposition | NA | 240.9 (26.6) | 305.5 (33.9) | NA | 126.4 (8.1) | 191.2 (22.9) |
| TED | 381.8 (6.3) | 163.4 (15.2) | 112.5 (8.0) | 372.6 (27.1) | 207.5 (9.6) | 138.8 (16.5) |
| FPD | 97.6 (20.8) | 93.3 (17.6) | 62.7 (8.2) | 218.0 (29.1) | 179.6 (15.1) | 121.9 (20.9) |
| MMAD | 2.8 (0.4) | 3.1 (0.2) | 3.3 (0.3) | 1.2 (0.2) | 1.0 (0.2) | 1.1 (0.2) |
| **In vivo study** | | | | | | |
| 30-min urinary BDP | 3.7 (0.6) | 3.6 (0.6) | 3.3 (0.6) | 3.5 (0.5) | 3.4 (0.8) | 3.0 (0.6) |
| 24-h urinary BDP | 30.2 (6.6) | 17.4 (2.3) | 14.7 (1.8) | 23.4 (3.9) | 15.3 (3.5) | 11.0 (2.5) |

Data are presented as mean (SD). Values are quoted in μg except MMAD (μm). In the in vitro study, 5 trials were performed on the Andersen Cascade Impactor. In the in vivo study, 12 healthy subjects participated.
BDP = beclomethasone dipropionate
pMDI = pressurized metered-dose inhaler
Qvar EB = Qvar Easi-Breathe
MMAD = mass median aerodynamic diameter
TED = total emitted dose
FPD = fine particle dose

Table 2. Mean Difference for the Percent of Nominal Dose of BDP Excreted After Study Doses With and Without Spacer

| Comparator | BDP 30 min After Study Doses | BDP 24 h After Study Doses |
|---|---|---|
| Qvar EB vs Clenil pMDI | 0.3 (0.2 to 0.3)* | 1.4 (1.2 to 1.7)* |
| Clenil-unrinsed vs Qvar EB-unrinsed | −0.2 (−0.3 to −0.2)‡ | −1.0 (−1.3 to −0.8)‡ |
| Clenil-rinsed vs Qvar EB-rinsed | −0.2 (−0.3 to −0.2)‡ | −0.6 (−0.9 to −0.4)‡ |
| Qvar EB vs Qvar EB-unrinsed | 0.1 (−0.2 to 0.5)§ | 8.1 (6.0 to 10.3)‡ |
| Qvar EB vs Qvar EB-rinsed | 0.5 (0.2 to 0.8)‡ | 12.4 (10.3 to 14.6)‡ |
| Clenil vs Clenil-unrinsed | 0.1 (−0.2 to 0.3)§ | 12.8 (10.6 to 15.1)‡ |
| Clenil vs Clenil-rinsed | 0.4 (0.1 to 0.7)† | 15.5 (13.3 to 17.7)‡ |

Data are presented as mean difference (95% CI).
* P < .05. † P < .01. ‡ P < .001. § No significant difference.
BDP = beclomethasone dipropionate
Qvar EB = Qvar Easi-Breathe
pMDI = pressurized metered-dose inhaler

Table 3. Mean Ratio of BDP Excreted With or Without Spacer (Normalized for the Nominal Dose)

| Cumulative Urinary Excretion | BDP 30 min After Study Doses | BDP 24 h After Study Doses |
|---|---|---|
| Qvar EB vs Clenil | 242.5 (212.5–276.8) | 196.0 (171.8–223.7) |
| Qvar EB spacer vs Clenil spacer | 231.9 (205.0–262.5) | 216.4 (189.5–246.9) |
| Qvar EB vs Qvar EB spacer | 105.9 (96.2–116.6) | 155.0 (136.3–176.1) |
| Clenil vs Clenil spacer | 101.2 (95.3–107.6) | 171.1 (154.8–188.9) |

Data are presented as mean ratio, % (90% CI).
BDP = beclomethasone dipropionate
Qvar EB = Qvar Easi-Breathe

**Discussion**

The results of this study have demonstrated appreciable differences in urinary drug excretion and aerodynamic particle size distribution between different HFA formulations of the same drug when used with the same spacer.
The difference in the particle size of these formulations (Qvar Easi-Breathe, 1.1 μm vs Clenil pMDI, 2.9 μm) and the size of the spacer used greatly affected drug deposition in different regions of the respiratory tract. In this study, both the in vitro and in vivo results showed that inhaling BDP through the small volume spacer with either the Clenil pMDI or the Qvar Easi-Breathe significantly reduced total systemic drug delivery. Moreover, addition of the spacer significantly reduced lung deposition with the Clenil pMDI, while it did not affect lung deposition with the Qvar Easi-Breathe.

Indeed, one of the most critical factors that affect the efficiency of asthma inhalation therapy is the inhaler device's ability to target the drug to the lung with minimal deposition at unwanted sites. Therefore, using spacer devices with asthma aerosols, especially ICS, is highly recommended to reduce oropharyngeal deposition, overcome the coordination problem between actuation and breathing, and improve overall lung drug delivery.\textsuperscript{6}

Using the small volume spacer significantly reduced oropharyngeal deposition with both the Clenil pMDI and the Qvar Easi-Breathe: most of the large, non-respirable steroid particles deposited on the spacer walls, leaving only small, fine particles to reach the lung. This finding is supported by 2 important markers: the lower 24-h urinary excretion of BDP ($P < .001$) and the lower amount of drug deposited in the induction port of the impactor ($P < .001$), which is considerably important because the induction port represents the oropharyngeal cavity of the patient. The decrease in systemic delivery is due to deposition of part of the dose on the walls of the spacer itself instead of in the mouth.\textsuperscript{21} Spacers trap large particles and allow smaller particles to pass through to the patient, thus depositing only a small fraction of the inhaled dose in the oropharynx. The spacer was therefore able to preserve the delivery of fine particles to the lung while preventing the travel of large particles to the oropharynx, which in turn results in lower systemic and local side effects of inhaled BDP.\textsuperscript{22}

Indeed, the higher in vitro TED for the Clenil pMDI and the Qvar Easi-Breathe, compared with that when the unrinsed spacer was attached, translated into higher in vivo systemic drug delivery to the main circulation. This finding agrees with many previous in vitro\textsuperscript{8,22} and in vivo\textsuperscript{23–25} studies reporting that the use of spacers with pMDIs reduces drug delivery to the systemic circulation.
The fact that the spacer decreased systemic delivery with either inhaler is of critical importance for ICS because it reduces the occurrence of local side effects in the upper respiratory tract, such as oral thrush and candidiasis, and it reduces the systemic side effects of ICS due to minimal oral absorption.\textsuperscript{26} However, both the in vitro and in vivo results revealed that using a spacer with the Clenil pMDI significantly reduced its lung deposition, whereas lung deposition with the Qvar Easi-Breathe inhaler was unaffected. This may be attributed to the differences in the emitted aerosol particle size of these 2 formulations. The Qvar Easi-Breathe inhaler has been designed to produce an aerosol with a smaller particle size (1.1 $\mu$m MMAD). On the other hand, the Clenil inhaler was originally designed to produce an aerosol particle size of 2.9 $\mu$m MMAD. This was achieved by adding a nonvolatile aerodynamic modulator to the HFA-BDP solution to increase the particle size.\textsuperscript{6} Adding a spacer to an aerosol with larger particles, such as that produced by the Clenil inhaler, would be more beneficial in allowing proper evaporation and thus may confer further particle-size reduction before inhalation. However, small volume spacers increase the likelihood of spacer-wall impaction due to the greater plume velocity. This may be more critical for the Clenil pMDI with its larger particle size, where the smaller size of the spacer may not be sufficient to allow complete evaporation of the aerosol propellant before the dose reaches the lung. Furthermore, with the smaller spacer, any delay in breath-actuation coordination can lead to greater losses to drug impaction on the spacer wall. Thus, the use of this spacer may actually make breath-actuation coordination more critical to lung delivery. In contrast, the Qvar Easi-Breathe is a breath-actuated device that has been devised with a flow-triggered system driven by a spring that automatically releases the dose with the patient’s inhalation.\textsuperscript{27,28} It was designed to overcome the problem of coordination between actuation and breathing. Actuation of the aerosol occurs at low inhalation flows of approximately 20 L/min. This low inspiratory flow is attainable by most patients, even those with obstructive air-flow diseases. Furthermore, the dose delivered by the Qvar Easi-Breathe is relatively stable regardless of increasing inspiratory effort.\textsuperscript{29,30} It was previously reported that good hand–breath coordination was only achievable with large volume spacers and not small volume spacers.\textsuperscript{31,32} Thus, using a small spacer with the Qvar Easi-Breathe, where such coordination is no longer a requirement, would be more convenient and appropriate. This device can easily maintain the extra-fine properties of these formulations, with little effect on lung deposition, while avoiding the inconvenience of large volume spacers. The above results mean that patients with asthma could achieve similar BDP lung deposition with the Qvar Easi-Breathe alone or via the unrinsed spacer, but with a spacer attachment they will receive the benefit of reduced total systemic ICS delivery.
This is in accordance with several previous studies reporting that using high-dose ICS in conjunction with a spacer will reduce the systemic side effects of the medication without affecting the beneficial effect of controlling asthma symptoms.\textsuperscript{33–35} Similarly, other studies reported that using HFA formulations with small tube spacers (50 mL) markedly reduced oropharyngeal deposition without affecting lung deposition\textsuperscript{35} or even while increasing lung deposition.\textsuperscript{36,37} Our findings suggest that, with an extra-fine aerosol formulation such as Qvar, there is no need to use a large volume spacer, because a small volume spacer maintains the extra-fine properties of the aerosol without the inconvenience of a large volume device. This implies that the optimal spacer effect is specific to a particular pMDI and cannot be extrapolated to other inhalers. Therefore, each pMDI formulation/spacer combination, even if it contains the same drug, needs to be fully evaluated to guide optimal device selection. The results of this study coincide with the British Thoracic Society recommendations for asthma management, which state that using spacers for delivering high doses of inhaled beclometasone is desirable because it significantly reduces the unwanted systemic effects of ICS without compromising efficacy.\textsuperscript{1} Currently, clinical guidelines for the management of asthma encourage the use of spacers with asthma aerosols, especially ICS.\textsuperscript{1} The incorporation of spacers in the management of asthma can improve patients’ outcomes, because spacers are easy to use, they reduce ICS systemic and local side effects, and they require less treatment time and cost. However, an inherent problem with plastic spacers is their dose inconsistency, which can arise from the tendency of the plastic material to variably accumulate electrostatic charge on its surfaces during handling. In addition, the newer HFA-containing formulations are more prone to develop electrostatic charges compared to aerosols containing CFCs.\textsuperscript{38–40} The interaction between such highly charged aerosol particles and the electrostatic charge inherent to the plastic spacer causes significant drug deposition on the spacer walls. Consequently, inhaled drugs will be markedly retained within these devices, causing a significant reduction of the respirable drug dose. However, the accumulation of electrostatic charges on spacer walls can be minimized by a few methods, such as washing the spacer with detergent solution without a final water rinse,\textsuperscript{13} using metal spacers,\textsuperscript{41} and actuating a few puffs into the spacer before use.\textsuperscript{42,43} Although metal spacers do not require washing with detergent and may resolve the problem of accumulation of electrostatic charges, plastic spacers are still the devices of choice because they cost less. In addition, it has been argued that the non-transparency of metal spacers and the inability to see the aerosol plume created might affect patient adherence to treatment.\textsuperscript{44} In addition, priming of plastic spacers with multiple actuations may minimize the accumulation of electrostatic charges, but only with formulations that contain surfactant.\textsuperscript{12,13,45} Therefore, detergent-coated spacers represent a simple, practical, and inexpensive method for effective electrostatic charge reduction.
Although some manufacturers and regulatory agencies have advocated subsequent rinsing of detergent-coated spacers with water to avoid contact dermatitis from the detergent, this rinsing unfortunately washes the detergent from the spacer walls and results in less protection against the development of electrostatic charges. As shown in our results, washing the spacer with detergent without a final rinse yielded higher values for TED, FPD, and 30-min urinary drug excretion, as well as less spacer deposition, than the rinsed spacer. Thus, our results support the superiority of the detergent-coating protocol over water rinsing in improving drug deposition in the lung, owing to its greater effectiveness in removing surface electrostatic charges and hence improving drug output from the spacer. Previous studies conducted with salbutamol showed a small increase in the output of the drug from both small and large volume spacers after washing the spacer with soapy water without subsequent rinsing with water.\textsuperscript{12,46,47} Previous reports indicated that the type\textsuperscript{12} and the concentration\textsuperscript{13} of detergent used to wash spacers have little influence on the protocol’s effectiveness in reducing electrostatic charges on spacer walls. The exact mechanism of action is not yet clear, but it is assumed that the hydrophilic part of the surface-active agent facilitates the conduction of surface charges away from the spacer walls. In patients with poor inhalation technique who use small volume spacers, such as the one used in this study, there is an increased risk of frictional contact during inhalation; in this scenario, minimizing electrostatic charge on the spacer walls is of great importance. Studies of the delivery of salbutamol into the lung through aerosols clearly indicated that salbutamol delivery was negatively affected by delayed inhalation and positively affected by washing the spacer with detergent.\textsuperscript{13,44} This further illustrates the electrostatic charge potential as a crucial player in determining aerosol drug delivery from a pMDI/spacer combination. However, it is still unknown whether these handling differences have any clinical importance. As shown in Table 3, the overall mean ratios of the 30-min and 24-h urinary BDP excretion values for the Qvar Easi-Breathe versus the Clenil pMDI were 242.5% (90% CI 212.5–276.8%) and 196.0% (90% CI 171.8–223.7%), respectively. This is consistent with our previous urinary pharmacokinetic study of BDP,\textsuperscript{15} in which we reported that the overall mean ratios between the Qvar Easi-Breathe and the Clenil pMDI, with respect to the nominal dose, for the 30-min and 24-h urinary excretion were 231.4% (90% CI 209.6–255.7%) and 204.6% (90% CI 189.6–220.6%), respectively. This important finding is in agreement with previous studies that also reported an approximately 2–2.5-fold greater potency of Qvar HFA-BDP compared to the same dose of other CFC-BDP MDIs.\textsuperscript{14,18–20} Observations from this study further indicate good \textit{in vitro}/\textit{in vivo} correlations, in agreement with previous suggestions.\textsuperscript{48–52} These results indicate that the in vitro FPD and TED parameters are the most decisive in predicting the in vivo urinary drug excretion at 30 min and 24 h, respectively.
Although our method cannot differentiate between drug distributions into different parts of the lungs, total deposition is more closely correlated with clinical outcomes than regional deposition.\textsuperscript{53} Indeed, the future of better respiratory disease control will focus more on improving drug delivery methods to the lung than on introducing new inhaled therapies. Despite the similar appearance of pMDI designs, variations in particle size, spacer size, and washing methods have the potential to influence drug delivery. It is clear that optimizing inhalation therapy would not only improve patients' therapeutic outcomes but also lead to more cost-effective health care. As previously published and further supported by this study, the finer details of adequate spacer handling can maximize drug delivery, improve asthma therapeutic responses, and reduce treatment costs. Therefore, determining the exact handling of various inhalers and spacers should significantly improve asthma management. It is inappropriate to combine any formulation with any spacer device just because it fits the mouthpiece adapter, without first considering the aerosol characteristics. Each asthma pMDI formulation/spacer combination is unique and needs to be fully evaluated, even if it contains the same drug, to guide optimal device selection. Further, considering the low therapeutic index and the high cost of ICS, it is safer and more cost-effective to optimize drug delivery to the respiratory tract.

**Limitations**

This study provides valuable insights into different factors that affect pulmonary drug deposition when using inhaler devices, such as drug formulation, particle size, spacer size, and the method of handling spacers. In this small study, however, we included only 12 healthy subjects; further studies are needed. Research with healthy volunteers is designed to develop new knowledge; to assure direct benefit to patients, this study should be repeated in subjects with asthma.

**Conclusion**

The in vivo and in vitro results of this study indicate that substantial differences between inhalation devices, such as drug particle size, impact of spacer use, and presence of electrostatic charge, greatly influence drug deposition in various regions of the respiratory tract, even when using different formulations of the same drug with the same spacer. Indeed, even with formulations rich in extra-fine particles, such as the Qvar Easi-Breathe, use of the more convenient small volume spacer was still beneficial in decreasing total systemic ICS delivery without affecting lung deposition. The Clenil pMDI, however, with its larger particle size, had lower total lung deposition with the small volume spacer. There is no general rule for which spacer best fits a given inhaler, and each pMDI/spacer combination needs to be fully evaluated for ideal device selection, even if it contains the same drug.

**REFERENCES**

1. BTS/SIGN. British Guideline on the Management of Asthma: A National Clinical Guideline. 2018. Available at: www.sign.ac.uk and www.brit-thoracic.org.uk. Accessed January 14, 2019.
2. Lavorini F. The challenge of delivering therapeutic aerosols to asthma patients. ISRN Allergy 2013;2013:102418.
3. Nikander K, Nicholls C, Denyer J, Pritchard J. The evolution of spacers and valved holding chambers. J Aerosol Med Pulm Drug Deliv 2014;27(1):S4–S23.
4. Terzano C, Mannino F. Aerosol characterization of three corticosteroid metered dose inhalers with Volumatic holding chambers and metered dose inhalers alone at two inspiratory flow rates. J Aerosol Med 1999;12(4):249–254.
5. Newman SP. Spacer devices for metered dose inhalers. Clin Pharmacokinet 2004;43(6):349–360.
6. Raissy HH, Kelly HW, Harkins M, Szefler SJ. Inhaled corticosteroids in lung diseases. Am J Respir Crit Care Med 2013;187(8):798–803.
7. Chaplin S, Head S. Clenil Modulite, a CFC-free MDI with no adjustment on switching. Prescriber 2007;18(13):43–46.
8. Leach CL, Davidson PJ, Hasselquist BE, Boudreau RJ. Lung deposition of hydrofluoroalkane-134a beclomethasone is greater than that of chlorofluorocarbon fluticasone and chlorofluorocarbon beclomethasone: a cross-over study in healthy volunteers. Chest 2002;122(2):510–516.
9. Smyth HDC, Beck VP, Williams D, Hickey AJ. The influence of formulation and spacer device on the in vitro performance of solution chlorofluorocarbon-free propellant-driven metered dose inhalers. AAPS PharmSciTech 2004;5(1):32.
10. Barry PW, O'Callaghan C. The optimum length and width for a spacer device. Pharm Pharmacol Commun 2000;6(1):1–5.
11. Tena AF, Clara PC. Deposition of inhaled particles in the lungs. Arch Bronconeumol 2012;48(7):240–246.
12. Wildhaber JH, Janssens HM, Piérart F, Dore ND, Devadason SG, LeSouëf PN. High-percentage lung delivery in children from detergent-treated spacers. Pediatr Pulmonol 2000;29(5):389–393.
13. Piérart F, Wildhaber JH, Vrancken I, Devadason SG, Le Souëf PN. Washing plastic spacers in household detergent reduces electrostatic charge and greatly improves delivery. Eur Respir J 1999;13(3):673–678.
14. Araujo FB, Amorim Correa R, Pereira LF, Silveira CD, Mancuso EV, Rezende NV. Spirometry with bronchodilator test: effect that the use of large-volume spacers with antistatic treatment has on test response. J Bras Pneumol 2011;37(6):752–758.
15. Said ASA, Harding L, Chrystyn H. Urinary pharmacokinetic methodology to determine the relative lung bioavailability of inhaled beclomethasone dipropionate. Br J Clin Pharmacol 2012;74(3):456–464.
16. British Pharmacopoeia. Preparations for inhalation: aerodynamic assessment of fine particles, fine particle dose and particle size distribution. London: Stationery Office; 2005:4:A277–A290.
17. Beeh KM, Beier J, Donohue JF. Clinical trial design in chronic obstructive pulmonary disease: current perspectives and considerations.
18. Hindle M, Newton DA, Chrystyn H. Investigations of an optimal inhaler technique with the use of urinary salbutamol excretion as a measure of relative bioavailability to the lung. Thorax 1993;48(6):607–610.
19. Menzies D, Nair A, Hopkinson P, McFarlane L, Lipworth J. Differential anti-inflammatory effects of large and small particle size inhaled corticosteroids in asthma. Allergy 2007;65:661–667.
20. Piccinno A, Poli G, Monro R, Goethals F, Nollevaux F, Acerbi D. Extrafine beclomethasone dipropionate and formoterol in single and separate inhalers. Clin Pharmacol Biopharm 2012;1:102.
21. Rogers SG, Anderson R, Main C, Thompson-Coon J, Hartwell D, Liu Z, et al. Systematic review and economic analysis of the comparative effectiveness of different inhaled corticosteroids and their usage with long-acting beta2 agonists for the treatment of chronic asthma in adults and children aged 12 years and over. Health Technol Assess 2008;12(19).
22. Battaglia S, Cardillo I, Lavorini F, Spatafora M, Scichilone N. Erratum to: safety considerations of inhaled corticosteroids in the elderly. Drugs Aging 2015;32(12):1067–1076.
23. Ruzycki CA, Golshahi L, Vehring R, Finlay WH. Comparison of in vitro deposition of pharmaceutical aerosols in an idealized child throat with in vivo deposition in the upper respiratory tract of children. Pharm Res 2014;31(6):1525–1535.
24. Yazdani A, Normandie M, Yousef M, Saidi MS, Ahmadi G. Transport and deposition of pharmaceutical particles in three commercial spacer-MDI combinations. Comput Biol Med 2015;4:145–155.
25. Reznik M, Silver EJ, Cao Y. Evaluation of MDI-spacer utilization and technique in caregivers of urban minority children with persistent asthma. J Asthma 2014;51(2):149–154.
26. Levy ML, Dekhuijzen PNR, Barnes PJ, Broeders M, Corrigan CJ, Chawes BL, et al. Inhaler technique: facts and fantasies. A view from the Aerosol Drug Management Improvement Team (ADMIT). NPJ Prim Care Respir Med 2016;26:16017.
27. Broeders ME, Sanchis J, Levy ML, Crompton GK, Dekhuijzen PN. The ADMIT series, issues in inhalation therapy: 2. Improving technique and clinical effectiveness. Prim Care Respir J 2009;18(2):76–82.
28. Eltezazi T, Davies MJ, Seton L, Morgan MN, Ross S, Martin GD, Hutchings IM. Optimizing the primary particle size distributions of pressurized metered dose inhalers by using inkjet spray drying for targeting desired regions of the lungs. Drug Dev Ind Pharm 2015;41(2):279–291.
29. Lavorini F, Fontana GA, Usmani OS. New inhaler devices: the good, the bad and the ugly. Respiration 2014;88(1):3–15.
30. Mitchell JP, Suggett J, Nagel M. Clinically relevant in vitro testing of orally inhaled products: bridging the gap between the lab and the patient. AAPS PharmSciTech 2016;17(4):787–804.
31. Wilkes W, Fink J, Dhand R. Selecting an accessory device with a metered-dose inhaler: variable influence of accessory devices on fine particle dose, throat deposition, and drug delivery with asynchronous actuation from a metered-dose inhaler. J Aerosol Med 2001;14:351–360.
32. Sanchis J, Corrigan C, Levy ML, Viejo JL. Inhaler devices: from theory to practice. Respir Med 2013;107(4):495–502.
33. Berger WE, Bensch GW, Weinstein SF, Skoner DP, Prenner BM, Shekar T, et al. Bronchodilation with mometasone furoate/formoterol fumarate administered by metered-dose inhaler with and without a spacer in children with persistent asthma. Pediatr Pulmonol 2014;49:441–450.
34. Gachelin E, Vecellio L, Dubus JC. Critical evaluation of inhalation spacer devices available in France. Rev Mal Respir 2014;32(7):672–681.
35. Jat KR, Singhal KK, Guglani V. Autohaler vs. metered-dose inhaler with spacer in children with asthma. Pediatr Allergy Immunol 2016;27(2):217–220.
36. Hardy JG, Jasuja AK, Frier M, Perkins AC. A small volume spacer for use with a breath-operated pressurized metered dose inhaler. Int J Pharm 1996;142(1):129–133.
37. Richards J, Hirst P, Pitcairn G, Mahashabde S, Abramowitz W, Nolting A, Newman SP. Deposition and pharmacokinetics of flunisolide delivered from pressurized inhalers containing non-CFC and CFC propellants. J Aerosol Med 2001;14:197–208.
38. Ditcham W, Murdzoska J, Zhang G, Roller C, Hollen D, Nikander K, Devadason SG. Lung deposition of 99mTc-radiolabelled albuterol delivered through a pressurized metered dose inhaler and spacer with face mask or mouthpiece in children with asthma. J Aerosol Med Pulm Drug Deliv 2014;27(1):S63–S75.
39. Nikander K, Nicholls C, Denyer J, Pritchard J. The evolution of spacers and valved holding chambers. J Aerosol Med Pulm Drug Deliv 2014;27(1):S4–S23.
40. Sanders M, Bruin R. A rationale for going back to the future: use of disposable spacers for pressurised metered dose inhalers. Pulm Med 2015;2015:176194.
41. Bisgaard H. A metal aerosol holding chamber devised for young children with asthma. Eur Respir J 1995;8(5):856–860.
42. Berg E, Madsen J, Bisgaard H. In vitro performance of three combinations of spacers and pressurized metered dose inhalers for treatment in children. Eur Respir J 1998;12(2):472–476.
43. Dewsbury NJ, Kenyon CJ, Newman SP. The effect of handling techniques on electrostatic charge on spacer devices: a correlation with in vitro particle size analysis. Int J Pharm 1996;137(2):261–264.
44. Mitchell JP, Coppolo DP, Nagel MW. Electrostatics and inhaled medications: influence on delivery via pressurized metered-dose inhalers and add-on devices. Respir Care 2007;52(3):283–300.
45. Rau JL. Practical problems with aerosol therapy in COPD. Respir Care 2006;51(2):158–172.
46. Newman SP. Principles of metered dose inhaler design. Respir Care 2005;50(9):1177–1190.
47. Lavorini F, Fontana GA. Targeting drugs to the airways: the role of spacer devices. Expert Opin Drug Deliv 2009;6(1):91–102.
48. Olsson B, Borgstrom L, Lundback H, Svensson M. Validation of a general in vitro approach for prediction of total lung deposition in healthy adults for pharmaceutical inhalation products. J Aerosol Med Pulm Drug Deliv 2013;26(6):355–369.
49. Ruzycki CA, Golshahi L, Vehring R, Finlay WH. Comparison of in vitro deposition of pharmaceutical aerosols in an idealized child throat with in vivo deposition in the upper respiratory tract of children. Pharm Res 2014;31(6):1525–1535.
50. Mazhar SH, Chrystyn H. Salbutamol relative lung and systemic bioavailability of large and small spacers. J Pharm Pharmacol 2008;60(12):1609–1613.
51. Nahar K, Gupta N, Gauvin R, Absar S, Patel B, Khademhosseini A, Ahsan F. In vitro, in vivo and ex vivo models for studying particle deposition of inhaled pharmaceuticals. Eur J Pharm Sci 2013;49(5):805–815.
52. Newman SP, Chan HK. In vitro/in vivo comparisons in pulmonary drug delivery. J Aerosol Med Pulm Drug Deliv 2008;21(1):77–84.
53. Chrystyn H. Is total particle dose more important than particle distribution? Respir Med 1997;91(1):17–19.
Genetic Determination of Colles' Fracture and Differential Bone Mass in Women With and Without Colles' Fracture

HONG-WEN DENG,1,2 WEI-MIN CHEN,1,2 SUSAN RECKER,1 MARY RUTH STEGMAN,1 JIN-LONG LI,1,2 K. MICHAEL DAVIES,1 YAN ZHOU,1,2 HONGYI DENG,1 ROBERT HEANEY,1 and ROBERT R. RECKER1

ABSTRACT

Osteoporotic fractures (OFs) are a major public health problem. Direct evidence of the importance and, particularly, the magnitude of genetic determination of OF per se is essentially nonexistent. Colles' fractures (CFs) are a common type of OF. In a metropolitan white female population in the midwestern United States, we found significant genetic determination of CF. The prevalence ($K$) of CF is, respectively, 11.8% (SE 0.7%) in 2471 proband women aged 65.55 (SE 0.21) years, 4.4% (0.3%) in the 3803 sisters of the probands, and 14.6% (0.7%) in their mothers. The recurrence risk ($K_D$), the probability that a woman will suffer CF if her mother has suffered CF, is 0.155 (0.017). The recurrence risk ($K_s$), the probability that a sister of a proband woman will suffer CF given that her proband sister has suffered CF, is 0.084 (0.012). The relative risk $\lambda$ (the ratio of the recurrence risk to the corresponding prevalence), which measures the degree of genetic determination of complex diseases such as CF, is 1.312 (0.145; $\lambda_D$) for a woman with an affected mother and 1.885 (0.276; $\lambda_s$) for a woman with an affected sister. A $\lambda$-value significantly greater than 1.0 indicates genetic determination of CF. The terms $\lambda_D$ and $\lambda_s$ are related to the genetic variances of CF. These parameters translate into a significant and moderately high heritability (0.254 [0.118]) for CF. They were estimated by a maximum likelihood method that we developed, which provides a general tool for characterizing the genetic determination of complex diseases. In addition, we found that women without CF had significantly higher bone mass (adjusted for important covariates such as age and weight) than women with CF. (J Bone Miner Res 2000;15:1243–1252)

Key words: Colles' fracture, heritability, osteoporotic fracture, genetic determination, relative risk, recurrence risk

INTRODUCTION

More than 1.3 million osteoporotic fractures (OFs) occur each year in the United States alone, with an estimated direct cost of $13.8 billion.\textsuperscript{(1)} One central objective of bone biology is the investigation of all the important intrinsic and extrinsic factors that underlie OFs, with the ultimate goal of intervening effectively to reduce the risk and incidence of OFs. The majority of studies\textsuperscript{(2–6)} have concentrated on extrinsic and nongenetic environmental factors. Extensive studies\textsuperscript{(7–12)} have been conducted to define the relative importance of genetic factors in determining some of the risk factors underlying OFs. These studies have unambiguously revealed that ~50–80% of the variation in bone mineral density (BMD), a major risk factor for OF,\textsuperscript{(13–15)} is under genetic control. The importance of genetic determination of other identified major risk factors (such as bone loss rates and bone size) also has been suggested.\textsuperscript{(16–20)} However, direct evidence of the genetic determination of OFs is essentially nonexistent. In particular, the magnitude of the genetic determination of OFs per se is unknown.

\textsuperscript{1}Osteoporosis Research Center, Creighton University, Omaha, Nebraska, U.S.A.
\textsuperscript{2}Department of Biomedical Sciences, Creighton University, Omaha, Nebraska, U.S.A.

Extensive molecular genetic studies have been launched to search for the genes underlying BMD variation. The results so far have been inconsistent, and consensus needs to be developed by further studies and by analyses of the extensive previous results. Molecular genetic studies of other major risk factors (such as bone loss and bone size) have been scarce, even though important genetic determination has been revealed for them. Direct molecular genetic studies of OF per se are even rarer. In particular, systematic, whole genome searches for genes important for OF per se essentially do not exist. However, searches for genes underlying the risk of OF per se are essential (see Discussion).

OF occurs at different skeletal sites, for which the pathogenesis and risk factors (including their underlying genetic loci, if any) and/or their relative importance may not all be the same. Almost all fractures of the distal forearm are of the Colles' type. For this first investigation to characterize genetic determination of OF, we chose to study Colles' fracture (CF), for the following reasons:

1. CF is one of the most prevalent OFs. CF generally is symptomatic and nearly always requires medical treatment; therefore, confirmation of CF is relatively easy. CF accounts for a significant proportion of outpatient health resource utilization for OF treatment. However, CF per se normally does not lead directly to markedly increased mortality or permanent morbidity, rendering it relatively easy to recruit study subjects with CF.

2. CF is predictive of underlying osteoporosis and subsequent OF. A CF is indicative of an overall 50% increase in the risk of a subsequent hip fracture. Women with CF have lower BMD at several skeletal sites, including the spine, hip, and radius, and have a higher bone turnover rate.

3. CFs in adults occur at relatively young ages, starting at approximately age 40 years. Many of the study subjects have living parents and siblings available. Information from these relatives is essential for many genetic studies.

To initiate extensive searches for genes underlying OF risk through the study of OF per se, direct evidence for the importance of the genetic determination of OF per se must first be provided. In particular, the genetic parameters that determine the likelihood of success of hunting for OF genes must be estimated. In this study, we provide such evidence and estimate these parameters for CF.

Table 1. CF Status of the Probands and of Their Sisters and Mothers

| No. of probands | Sisters: affected | Sisters: unaffected | Sisters: total | Mothers: affected | Mothers: unaffected | Mothers: total | Total |
|---|---|---|---|---|---|---|---|
| 293 (affected) | 32 | 341 | 373 | 56 | 237 | 293 | 959 |
| 2178 (unaffected) | 137 | 3293 | 3430 | 304 | 1874 | 2178 | 7786 |
| 2471 (total) | 169 | 3634 | 3803 | 360 | 2111 | 2471 | 8745 |

MATERIALS AND METHODS

Subjects and measurement

The data for this study were obtained during the preparatory part of an ongoing research project involving a whole genome scan to detect genomic regions underlying the risk of CF, which was approved by the Creighton University Institutional Review Board. All study subjects signed informed-consent documents before entering. Nuclear families consisting of sisters and their mothers were ascertained. The probands came from a database containing all study subjects who have ever been participants in various bone studies or patients at the Osteoporosis Research Center of Creighton University.
We mailed questionnaires to 3696 women from this database who were at least 40 years of age as of January 10, 1999 and inquired as to their CF status, the number of their living sisters and those sisters' CF status, and the CF status of their mothers. The mean ($\pm$SE) age of the proband women was 65.55 (0.21) years. Throughout, unless otherwise specified, the number within parentheses after an estimate is the associated SE. We had received 2471 eligible responses as of April 1, 1999. The basic data are the CF information on the 2471 probands, their 3803 sisters, and their 2471 mothers; the total sample size is 8745. The CF status of these subjects is summarized in Table 1.

Table 2. Estimates of Prevalence, Recurrence, and Relative Risks

| $K_1$ | $K_2$ | $K_3$ | $K_s$ | $K_D$ | $\lambda_s$ | $\lambda_D$ |
|---|---|---|---|---|---|---|
| 0.118 (0.007) | 0.044 (0.003) | 0.147 (0.007) | 0.084 (0.012) | 0.155 (0.017) | 1.885 (0.276) | 1.312 (0.145) |

$K_1$, $K_2$, and $K_3$ are, respectively, the prevalences of CF in the probands, the sisters of the probands, and the mothers of the probands. $K_s$ and $K_D$ are the recurrence risks of CF for sister-sister and daughter-mother pairs, and $\lambda_s$ and $\lambda_D$ are the corresponding relative risks. The detailed definitions of these parameters can be found in the Definition and statistical analysis subsection in the text.

Definition and statistical analysis

Because the incidence of CF is age dependent,\textsuperscript{(32)} the prevalence ($K$) of CF also should depend strongly on the age groups of the subjects under study, as is supported by our data on the differential $K$'s in the mothers and the daughters. In addition, the probands are from a database created for subjects who have been patients or participants in studies conducted at our center. These probands are more likely to have osteoporosis or osteopenia and are more prone to OF than the general population. To account for these potential differences in risk, we denote the prevalences of CF in the probands, the sisters of the probands, and the mothers, respectively, as $K_1$, $K_2$, and $K_3$. That is,
\[ K_1 = \frac{\text{the number of affected probands}}{\text{the total number of probands}}, \]
\[ K_2 = \frac{\text{the number of affected sisters of the probands}}{\text{the total number of sisters of the probands}}, \]
\[ K_3 = \frac{\text{the number of affected mothers of the probands}}{\text{the total number of mothers of the probands}}. \]
The standard errors of $K_1$, $K_2$, and $K_3$ can be computed by the method of maximum likelihood, the principles of which are outlined for the more complex situation of estimating $\lambda_s$ and $\lambda_D$ (Appendix 1). Although age data for the sisters and mothers of the probands are not available, on average as groups there is no reason for the probands to differ in age from their sisters, and the mothers will have older ages than the daughters. However, the $K$'s are likely to differ between the probands and their sisters, as reasoned earlier and verified later. The distinction of the $K$'s in the probands and their sisters accounts for the differential risks of CF in these two groups and thus accounts for the ascertainment through probands in the estimation developed in Appendix 1.
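Given the Table 1 counts, the plug-in prevalence estimates and their binomial SEs can be reproduced directly; a minimal sketch in Python follows. For a binomial proportion these simple ratios coincide with the maximum likelihood estimates, and the values match Table 2.

```python
# Sketch: plug-in prevalence estimates from the Table 1 counts.
# For a binomial proportion these coincide with the maximum likelihood
# estimates, with SE = sqrt(K(1 - K)/n); the values match Table 2.
import math

counts = {
    "K1 (probands)": (293, 2471),
    "K2 (sisters)":  (169, 3803),
    "K3 (mothers)":  (360, 2471),
}
for name, (affected, total) in counts.items():
    K = affected / total
    se = math.sqrt(K * (1 - K) / total)
    print(f"{name}: {K:.3f} (SE {se:.3f})")
```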
The distinction of the $K$'s in the mothers and their daughters accounts for the differential risks in mothers and daughters arising simply from the age difference and thus coarsely accounts for the age dependence of the risks of CF in this study (also see Discussion). For this study, let us define two recurrence risks as $K_s = \Pr(\text{sister} = 1|\text{proband} = 1)$ and $K_D = \Pr(\text{daughter} = 1|\text{mother} = 1)$, and define the two relative risks as $\lambda_s = K_s/K_2$ and $\lambda_D = K_D/K_1$. In words, $K_s$ is the probability that a sister of a proband will be affected with CF given that the proband is affected. The term $K_D$ is the probability that a daughter will be affected conditional on her mother being affected. The term $\lambda_s$ is the increase in risk of CF for a woman who has an affected sister compared with the prevalence $K_2$ in the sister population. The term $\lambda_D$ is the increase in risk of a daughter who has an affected mother relative to the population prevalence in the daughters. The recurrence risks and the relative risks and their SEs can be estimated by the maximum likelihood estimation developed in Appendix 1, which should be of general use for characterizing the genetic determination of complex diseases.

Although recurrence ($K_R$) and relative ($\lambda_R$) risks are direct measures of the degree of genetic determination of complex diseases,\textsuperscript{(42)} genetic variances and heritability ($h^2$) are more familiar indices of genetic determination for continuous quantitative traits such as BMD. In addition, complex diseases may be modeled as threshold traits underlain by continuously distributed quantitative traits (liabilities).\textsuperscript{(42)} Therefore, to clarify the relationship between the prevalences ($K$'s) in the different groups, $K_R$, $\lambda_R$, and the genetic variances and $h^2$, we derived the relationships among them, developed a maximum likelihood estimation of the additive ($\sigma_A^2$) and dominant ($\sigma_D^2$) genetic variances and $h^2$ (Appendix 2), and estimated $\sigma_A^2$, $\sigma_D^2$, and $h^2$ (Table 3).

To compare bone mass in women with and without CF, we conducted multiple regression with bone mass as the dependent variable and age and weight as independent variables. The results are summarized in Table 4. We then used these multiple regression results to adjust bone mass for age and weight, which ensures that the differences in bone mass between women with and without CF are not confounded by the important covariates age and weight.\textsuperscript{(21)} The variances of the adjusted bone mass data in women with and without CF were compared by F tests for homogeneity. Then the differences of the means of the adjusted bone mass data were tested by appropriate $t$-tests. The results are summarized in Table 5. The differences in the standard Z scores at the spine and femoral neck between women with and without CF also were tested. The Z score denotes BMD in units of SDs above or below the mean of a healthy ethnicity-, age-, and gender-matched referent population.
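For the recurrence and relative risks just defined, naive plug-in estimates follow directly from the Table 1 counts. The sketch below is illustrative only: the paper's reported values come from the full maximum likelihood fit of Appendix 1, which also uses the families of unaffected probands, so the plug-in $\lambda_s$ differs slightly from the reported 1.885.

```python
# Sketch: naive plug-in estimates of the recurrence and relative risks
# from the Table 1 counts. The paper's values come from the maximum
# likelihood fit of Appendix 1 (which also uses the families of
# unaffected probands), so the plug-in lambda_s differs slightly.
K1, K2, K3 = 293 / 2471, 169 / 3803, 360 / 2471

K_s      = 32 / 373    # Pr(sister affected | proband affected) ~ 0.086 (MLE: 0.084)
K_Dprime = 56 / 293    # Pr(mother affected | proband affected)

lam_s = K_s / K2       # ~ 1.93 (MLE: 1.885)
lam_D = K_Dprime / K3  # ~ 1.31 (MLE: 1.312), using lambda_D = K'_D / K_3
K_D   = lam_D * K1     # Pr(daughter affected | mother affected) ~ 0.155

print(f"lambda_s = {lam_s:.3f}, lambda_D = {lam_D:.3f}, K_D = {K_D:.3f}")
```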
**RESULTS**

The prevalences of CF are, respectively, 11.8% (SE 0.7%) in the probands ($K_1$), 4.4% (0.3%) in the sisters of the probands ($K_2$), and 14.6% (0.7%) in the mothers of the probands ($K_3$). The higher prevalence of CF in the mothers reflects the age dependence of the incidence of CF. With increasing age, the incidence of CF increases dramatically until a plateau is reached at ~60 years of age.\textsuperscript{(27)} The higher prevalence of CF in the probands ($K_1$) than in their sisters ($K_2$) probably reflects the fact that the probands have been participants in various bone studies to prevent osteoporosis or have been patients at our center. Therefore, $K_1$ may be elevated relative to that of the same age group in the study population, and $K_2$ may reflect more closely the prevalence ($K$) of the same age group in the population. Therefore, in the absence of data from a random sample of the study population, $K_2$ is employed to approximate $K$ for women of ~65.6 (SE 0.21) years of age. However, it should be noted that $K_2$ is still expected to be higher than $K$ for the same age group in the study population, simply because of the relatedness of the sisters to a selected group (the probands) with higher risks of CF. Thus, $K_2$ can be viewed as an upper boundary of the estimate of $K$. Importantly, it should be pointed out that, analytically, by the definitions of $\lambda_s$ and $\lambda_D$ (in the Definition and statistical analysis subsection of the Materials and Methods section and in Appendix 1), using $K_1$ or $K_2$ to substitute for $K$ in the estimation will bias the estimates of the true $\lambda_s$ and $\lambda_D$ downward. Therefore, the genetic parameters given below should be viewed as conservative, lower-limit estimates of the true values.

The recurrence risk (the probability of having CF) for a woman is 0.084 (0.012) given that she has a sister who has had CF ($K_s$) and 0.155 (0.017) given that her mother has had CF ($K_D$). The relative risk $\lambda_s$ is 1.885 (0.276) and $\lambda_D$ is 1.312 (0.145), both significantly greater than 1.0, indicating significant genetic determination in the occurrence of CF. Roughly speaking, a $\lambda_s$ value of 1.885 indicates that the risk of CF for a woman with an affected sister is nearly twice that of a random woman of similar age in the population. A $\lambda_D$ value of 1.312 indicates that the risk of CF for a woman with an affected mother is about 1.3 times that of a random woman of similar age in the population.

When converted to the familiar indices of genetic determination for continuous quantitative traits, the additive genetic variance ($\sigma_A^2$) of CF is 0.0108 (0.0025) and the dominant genetic variance ($\sigma_D^2$) is $-0.0029$ (0.0058). Thus, $\sigma_A^2$ is significant and $\sigma_D^2$ is not statistically different from zero. Therefore, the genetic variance of CF is largely the heritable component $\sigma_A^2$. The narrow-sense heritability ($h^2$) of CF is 0.254 (0.118), which indicates that ~25% of the variation in the occurrence of CF is determined genetically.

Age and weight had highly significant effects on BMD (Table 4), as is well recognized. Importantly, the BMD of the spine, femoral neck, and wrist and the total body bone mass were all significantly higher in women without CF than in women with CF (Table 5). The same conclusion held for the Z scores at the spine and femoral neck. All tests remained significant even after multiple comparisons were accounted for.

**DISCUSSION**

To our knowledge, this study is the first that provides direct evidence for the magnitude of the genetic determination of CF, a common type of OF.
Our results unambiguously indicate a significant and moderately high degree of genetic determination of CF in white women. In addition, women without CF had significantly higher bone mass than those with CF.

Table 4. Results of Multiple Regression Analyses

| Dependent variable [n] | Intercept | Age | Weight (kg) | Adj. $R^2$ |
|---|---|---|---|---|
| Spine BMD (g/cm$^2$) [2417] | 0.8593 | −0.0038 | 0.0042 | 0.22 |
| Femoral neck BMD (g/cm$^2$) [1302] | 0.7240 | −0.0044 | 0.0032 | 0.34 |
| Distal radius BMD (g/cm$^2$) [552] | 0.7880 | −0.0057 | 0.0026 | 0.42 |
| Total body BMC (g) [392] | 1783.1 | −18.4 | 19.0 | 0.56 |

The numbers reported in this table are the partial regression coefficients, which are all significant at the $\alpha = 0.001$ level. The numbers within brackets are the sample sizes in the multiple regression analyses.

Table 5. Bone Mass in Women With and Without CF

| | Spine BMD (g/cm$^2$) | Total body BMC (g) | Distal radius BMD (g/cm$^2$) | Femoral neck BMD (g/cm$^2$) | $Z_{BMD}$ spine | $Z_{BMD}$ femoral neck |
|---|---|---|---|---|---|---|
| Women with CF | 0.82 (0.14) [233] | 1600.6 (339.7) [50] | 0.52 (0.08) [67] | 0.59 (0.10) [107] | −0.12 (1.26) [233] | −0.84 (0.96) [107] |
| Women without CF | 0.91 (0.17) [1528] | 1938.0 (463.0) [272] | 0.61 (0.12) [379] | 0.67 (0.12) [904] | 0.41 (1.49) [1528] | −0.41 (1.11) [904] |
| $p$ | 1.43E−11 | 0.0008 | 2.47E−05 | 2.26E−05 | 8.45E−09 | 4.28E−05 |

The numbers given are means, associated SDs (within parentheses), and the sample sizes (within brackets). The $p$ is the $p$ value of the respective $t$-test for the difference between women with and without CF.

The approximation of $K$ by $K_2$ (~4.4%) for CF in women ~65 years of age in our study population, although upwardly biased, is within the range of the estimates (~2–18%) obtained in different populations.\textsuperscript{(28–34,44–46)} Population variation in the prevalence of OF has been well recognized before (e.g., see Refs. 28–34). Our direct evidence of genetic determination of OF is consistent with several lines of earlier indirect evidence. First, there are racial differences in the incidence of OF.\textsuperscript{(27,47,48)} This racial difference is shown to be at least partially related to vitamin D receptor genotypes.\textsuperscript{(49)} Second, within populations, COL1A1 gene polymorphisms are shown to be markers of vertebral fracture risk,\textsuperscript{(50)} with the $Ss$ and $ss$ genotypes incurring a relative risk of 2.97. Third, family history is a strong predictor of the risk of OF.\textsuperscript{(51–53)} In particular, the genetic determination of CF is consistent with recent results suggesting several genomic regions underlying forearm BMD variation,\textsuperscript{(54)} an important risk factor for CF.\textsuperscript{(37)}

The estimates of $K_D$, $K_s$, $\lambda_D$, and $\lambda_s$ have direct practical application in genetic counseling on the risks of CF for women who have sisters or mothers with CF. For example, the values of $\lambda_D$ (1.312) and $\lambda_s$ (1.885) clearly indicate that a woman with an affected sister or mother is genetically predisposed to an elevated risk of CF and should take preventive intervention for CF.
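The group comparisons in Table 5 can be illustrated from the reported summary statistics alone. The sketch below runs a two-sample t-test from means, SDs, and sample sizes; Welch's unequal-variance form is shown as the conservative choice, whereas the paper selected the test form after an F test for variance homogeneity, so the p value need not match Table 5 exactly.

```python
# Sketch: two-sample comparison of spine BMD from the Table 5 summary
# statistics (mean, SD, n). Welch's unequal-variance t-test is used as
# the conservative choice; the paper chose the test form after an F test
# for variance homogeneity, so the p value differs from Table 5.
from scipy import stats

t, p = stats.ttest_ind_from_stats(mean1=0.91, std1=0.17, nobs1=1528,  # without CF
                                  mean2=0.82, std2=0.14, nobs2=233,   # with CF
                                  equal_var=False)
print(f"t = {t:.2f}, p = {p:.2e}")  # strongly significant, as in Table 5
```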
Prevention of OF is one central objective of bone studies. Genetic studies of bone largely have been confined to BMD, because BMD is an important risk factor for OF\textsuperscript{(13–15)} and is relatively easy to measure.\textsuperscript{(55)} However, genetic studies of OF per se are essential for the following reasons: (1) BMD is not the only important risk factor for OF. Many other identified and/or unidentified intrinsic factors also are important,\textsuperscript{(51–53,56–58)} and many of these are under strong genetic control.\textsuperscript{(16–20)} Importantly, the genes underlying different risk factors are not all the same, as reflected by the low genetic correlations between the factors.\textsuperscript{(11,58)} In addition, many important risk factors may not yet have been identified, because no combination of the known risk factors can predict lifetime OF risk with high confidence.\textsuperscript{(51–53)} (2) Measurements of BMD by current techniques may not be precise. For example, BMD often is measured by DXA, a projectional technique based on the two-dimensional projection of a three-dimensional structure. The values are expressed as bone content per unit area (g/cm$^2$) of the projected image of the region of interest (ROI), which is only an approximation of the volumetric density. Correction factors for this are subject to error,\textsuperscript{(59–63)} because there is no closed formula that defines the size of the vertebrae or the femur. Importantly, DXA values are influenced by variation in the composition of the soft tissues in the beam path of the ROI: inhomogeneous fat distribution amounting to only 2 cm of variation in the fat layer around the bone can influence DXA measurements by as much as 10%.\textsuperscript{(64)} (3) Because of pleiotropic effects (i.e., the same gene controlling multiple risk factors), which are common for complex traits,\textsuperscript{(42)} alleles conferring high BMD may adversely affect other important aspects of bone and thus confer lower resistance to OF. It has been shown that one genetically homogeneous inbred mouse strain has higher bone mass but smaller bone size and is less able to adapt to mechanical loading by increasing bone stiffness and strength than another inbred strain.\textsuperscript{(65)} Similarly, low BMD combined with more highly organized collagen fibrils actually may enhance bone mechanical strength and thus result in a lower risk of OF.\textsuperscript{(66)}

Therefore, in addition to our effort to search for genes underlying individual risk factors such as bone mass, extensive efforts should be initiated to search for genes underlying OF through studying OF per se and to investigate the relevance and importance to OF of the genes revealed for individual risk factors. Searching for genes underlying OF per se will assure that the genes discovered are important for susceptibility to OF. The moderately high genetic determination of CF indicates that searching for genes underlying CF is likely to be fruitful, and such an effort certainly is warranted. The parameters that determine the likelihood of success of a gene search are the relative risks ($\lambda$) for dichotomous complex diseases and $h^2$ for continuous quantitative traits.\textsuperscript{(42,67–70)} Generally, for a quantitative trait (such as BMD), discovering a genetic locus responsible for more than 15% of the phenotypic variation (i.e., an $h^2$ due to this locus greater than 0.15) is well within our current technical and analytical capabilities.
\textsuperscript{(68–70)} For complex diseases (such as OF), a locus that confers a relative risk of $\lambda_s > 1.6$ also is well within reach.\textsuperscript{(67,71)} To provide an intuitive comparison, we converted the standard measures (relative and recurrence risks) of genetic determination of complex dichotomous diseases to the more familiar index ($h^2$) for continuous quantitative traits. The relative risk is 1.312 (0.145; $\lambda_D$) for a woman with an affected mother and 1.885 (0.276; $\lambda_s$) for a woman with an affected sister, which correspond to an $h^2$ of 0.254 (0.118). Therefore, in light of both types of measures, the prospect of searching for genes underlying the risk of CF is optimistic. This is especially true given that these estimates are the lower limits of the corresponding true values, as indicated in the Results. Of course, the likelihood of success also depends on the genetic determination attributable to individual major genetic loci. However, the genetic determination caused by individual major loci will not be known before extensive and systematic molecular genetic studies are performed.

Except for spine fractures, almost all OFs result from low trauma, that is, a fall. Although we cannot specify exactly how many, it is most likely that the majority of our CF cases were caused by low trauma, as suggested by the significant difference in bone mass found between women with and without CF in our sample. Inclusion of CF cases caused by accidental high trauma generally will reduce the chance of detecting the difference in bone mass between women with and without CF and will decrease the magnitude of the genetic determination estimated, simply because of the randomness of accidents. Therefore, inadvertent inclusion of CF cases caused by accidental high trauma renders our estimates of the genetic determination of CF even more conservative relative to the true values.

CF probably has less of a relationship to BMD than other typical OFs at the spine and hip, and the genes for the various types of OF may not all be the same. However, consistent with the few earlier studies,\textsuperscript{(36,37)} our data clearly show that CF is a strong indicator of underlying low bone mass at all the skeletal sites examined. Thus, systematic molecular genetic studies such as a whole genome scan for genes underlying CF will have a scope broad enough to identify genes for non-BMD as well as BMD factors important in determining OF risk. Searching for genes underlying the risk of CF also should be important for the prevention of osteoporosis and other types of OF, because CF is predictive of subsequent OF of other types and of the underlying osteoporosis.\textsuperscript{(36–38)}

It should be noted that the genetic parameters obtained in this study have not been adjusted for many known nongenetic factors. The influence of nongenetic factors on the incidence of CF can be adjusted for by employing techniques such as multiple logistic regression. Although the dependence of the incidence of CF on age is coarsely accounted for by allowing for different $K$'s in daughters and mothers, more accurate adjustment would be possible by logistic regression if the specific ages of most study subjects were known. Adjusting for significant nongenetic factors can effectively control for the nongenetic causes of the incidence of CF and thus generally increases the apparent importance of major genes and the likelihood of detecting them in genetic studies.\textsuperscript{(21,72)}
Although commonly employed as the parameters to model dichotomous complex traits and to compute statistical power for searches of genes underlying complex diseases, estimates of $K_D$, $K_s$, $\lambda_D$, and $\lambda_s$ have rarely been published for many disease traits. In particular, although the definitions of these parameters are simple, their estimation is not trivial in practice with complex family structures. The maximum likelihood method developed here can estimate not only the means but also the variances of the $K_D$, $K_s$, $\lambda_D$, $\lambda_s$, $\sigma_A^2$, $\sigma_D^2$, and $h^2$ of complex diseases. The method is general and can be applied directly, or extended, to characterize the genetic determination of any complex disease based on nuclear families.

**ACKNOWLEDGMENTS**

This study was partially supported by a grant from the Health Future Foundation to Creighton University and by National Institutes of Health (NIH) grant AR40879.

**REFERENCES**

1. Ray NF, Chan JK, Thamer M, Melton LJ III 1997 Medical expenditures for the treatment of osteoporotic fractures in the United States in 1995: Report from the National Osteoporosis Foundation. J Bone Miner Res 12:24–35.
2. Kiel DP, Zhang Y, Hannan MT, Anderson JJ, Baron JA, Felson DT 1996 The effect of smoking at different life stages on bone mineral density in elderly men and women. Osteoporos Int 6:240–248.
3. Richelson LS, Wahner HW, Melton LJ, Riggs BL 1984 Relative contributions of aging and estrogen deficiency to postmenopausal bone loss. N Engl J Med 311:1273–1276.
4. Heaney RP, Recker RR, Saville PD 1978 Menopausal changes in calcium balance performance. J Lab Clin Med 92:953–963.
5. Huang Z, Himes JH, McGovern PG 1996 Nutrition and subsequent hip fracture risk among a national cohort of white women. Am J Epidemiol 144:124–134.
6. Davee AM, Rosen CJ, Adler RA 1990 Exercise patterns and trabecular bone density in college women. J Bone Miner Res 5:245–250.
7. Krall EA, Dawson-Hughes B 1993 Heritability and life-style determinants of bone mineral density. J Bone Miner Res 8:1–9.
8. Dequeker J, Nijs J, Verstraeten A, Geusens P, Gevers G 1987 Genetic determinants of bone mineral content at the spine and radius: A twin study. Bone 8:207–209.
9. Slemenda CW, Christian JC, Williams CJ, Norton JA, Johnston CC Jr 1991 Genetic determinants of bone mass in adult women: A reevaluation of the twin model and the potential importance of gene interaction on heritability estimates. J Bone Miner Res 6:561–567.
10. Gueguen R, Jouanny P, Guillemin F, Kuntz C, Pourel J, Siest G 1995 Segregation analysis and variance components analysis of bone mineral density in healthy families. J Bone Miner Res 10:2017–2022.
11. Deng HW, Stegman MR, Davies MK, Conway T, Recker RR 1999 Genetic determination of peak bone mass of the hip and spine. J Clin Densitometry (in press).
12. Deng H-W, Chen W-M, Conway T, Zhou Y, Davies KM, Stegman M-R, Deng H, Recker RR. Determination of bone mineral density in human pedigrees by genetic and life-style factors at hip and spine. Genet Epidemiol (submitted).
13. Black DM, Cummings SR, Genant HK, Nevitt MC, Palermo L, Browner W 1992 Axial and appendicular bone density predict fractures in older women. J Bone Miner Res 7:633–638.
14. Cummings SR, Black DM, Nevitt MC, Browner W, Cauley J, Ensrud K, Genant HK, Palermo L, Scott J, Vogt TM 1993 Bone density at various sites for prediction of hip fractures. Lancet 341:72–75.
15. Melton LJ III, Atkinson EJ, O'Fallon WM, Wahner HW, Riggs BL 1993 Long-term fracture prediction by bone mineral assessed at different skeletal sites. J Bone Miner Res 8:1227–1233.
16. Heaney RP, Barger-Lux MJ, Johnson ML, Gong G 1996 Bone dimensional change with age: Interactions of genetic, hormonal, and body size variables. Osteoporos Int 6:163.
17. Zmuda JM, Cauley JA, Danielson ME, Wolf RL, Ferrell RE 1997 Vitamin D receptor gene polymorphisms, bone turnover, and rates of bone loss in older African-American women. J Bone Miner Res 12:1446–1452.
18. Krall EA, Parry P, Lichter JB, Dawson-Hughes B 1995 Vitamin D receptor alleles and rates of bone loss: Influences of years since menopause and calcium intake. J Bone Miner Res 10:978–984.
19. Harris M, Nguyen TV, Howard GM, Kelly PJ, Eisman JA 1998 Genetic and environmental correlation between bone formation and bone mineral density: A twin study. Bone 22:141–145.
20. Kelly PJ, Nguyen TV 1994 Genetic influences on type I collagen synthesis and degradation: Further evidence for genetic regulation of bone turnover. J Clin Endocrinol Metab 78:1461–1466.
21. Deng HW, Li J, Li JL, Johnson M, Recker RR 1999 Association of VDR and ER genotypes with bone mass in postmenopausal women: Different conclusions with different analyses. Osteoporos Int 9:499–507.
22. Johnson ML, Gong GD, Kimberling W, Recker SM, Kimmel DB, Recker RR 1997 Linkage of a gene causing high bone mass to human chromosome 11 (11q12–13). Am J Hum Genet 60:1326–1332.
23. Morrison NA, Qi JC, Tokita A, Kelly PJ, Crofts L, Nguyen TV, Sambrook PN, Eisman JA 1994 Prediction of bone density from vitamin D receptor alleles. Nature 367:284–287.
24. Gong GD, Sterns HS, Cheng SC, Fong N, Mordeson J, Deng HW, Recker RR 1998 On the association of bone mass density and vitamin D receptor genotype polymorphisms. Osteoporos Int 9:55–64.
25. Deng HW, Li J, Li JL, Johnson M, Davies M, Recker RR 1998 Change of bone mass in postmenopausal Caucasian women with and without hormone replacement therapy is associated with vitamin D receptor and estrogen receptor genotypes. Hum Genet 103:576–585.
26. Koller DL, Rodriguez LA, Christian JC, Slemenda CW, Econs MJ, Hui SL, Morin P, Conneally PM, Joslyn G, Curran ME, Peacock M, Johnston CC, Foroud T 1998 Linkage of a QTL contributing to normal variation in bone mineral density to chromosome 11q12–13. J Bone Miner Res 13:1903–1908.
27. Melton LJ III, Thamer M, Ray NF, Chan JK, Chesnut CH III, Einhorn TA, Johnston CC, Raisz LG, Silverman SL, Siris ES 1997 Fractures attributable to osteoporosis: Report from the National Osteoporosis Foundation. J Bone Miner Res 12:16–23.
28. Melton LJ III 1995 Epidemiology of fractures. In: Riggs BL, Melton LJ III (eds.) Osteoporosis: Etiology, Diagnosis, and Management, 2nd ed. Lippincott-Raven Publishers, Philadelphia, PA, U.S.A., pp. 225–247.
29. Solgaard S, Petersen VS 1985 Epidemiology of distal radius fractures. Acta Orthop Scand 56:391–393.
30. Cummings SR, Kelsey JL, Nevitt MC, O'Dowd KJ 1985 Epidemiology of osteoporosis and osteoporotic fractures. Epidemiol Rev 7:178–208.
31. Melton LJ, Chrischilles EA, Cooper C, Lane AW, Riggs BL 1992 How many women have osteoporosis? J Bone Miner Res 7:1005–1010.
32. Wasnich RD 1997 Epidemiology of osteoporosis in the United States of America. Osteoporos Int 7(Suppl 3):68–72.
33. Melton LJ III 1993 Epidemiology of Age-Related Fractures, 3rd ed. Wiley-Liss, New York, NY, U.S.A., pp. 17–38.
34. Cummings SR, Black DM, Rubin SM 1989 Lifetime risks of hip, Colles', or vertebral fracture and coronary heart disease among white postmenopausal women. Arch Intern Med 149:2445–2448.
35. Ray NF, Chan JK, Thamer M, Melton LJ III 1997 Medical expenditures for the treatment of osteoporotic fractures in the United States in 1995: Report from the National Osteoporosis Foundation. J Bone Miner Res 12:24–35.
36. Owen RA, Melton LJ III, Ilstrup DM, Johnson KA, Riggs BL 1982 Colles' fracture and subsequent hip fracture risk. Clin Orthop 171:37–43.
37. Earnshaw SA, Cawte SA, Worley A, Hosking DJ 1998 Colles' fracture of the wrist as an indicator of underlying osteoporosis in postmenopausal women: A prospective study of bone mineral density and bone turnover rate. Osteoporos Int 8:53–60.
38. Cuddihy MT, Gabriel SE, Crowson CS, O'Fallon WM, Melton LJ III 1999 Forearm fractures as predictors of subsequent osteoporotic fractures. Osteoporos Int 9:469–475.
39. Nevitt MC, Cummings SR, Browner WS, Seeley DG, Cauley JA, Vogt TM, Black DM 1992 The accuracy of self-report of fractures in elderly women: Evidence from a prospective study. Am J Epidemiol 135:490–499.
40. Bush TL, Miller SR, Golden AL, Hale WE 1989 Self-report and medical report agreement of selected medical conditions in the elderly. Am J Public Health 79:1554–1556.
41. Paganini-Hill A, Chao A 1993 Accuracy of recall of hip fracture, heart attack, and cancer: A comparison of postal survey data and medical records. Am J Epidemiol 138:101–106.
42. Lynch M, Walsh B 1998 Genetics and Analysis of Quantitative Traits. Sinauer, Sunderland, MA, U.S.A.
43. Khoury MJ, Beaty TH, Cohen BH 1993 Fundamentals of Genetic Epidemiology. Oxford University Press, New York, NY, U.S.A.
44. Zieger K 1998 Fractures following accidental falls among the elderly in the county of Aarhus. Ugeskr Laeger 160:6652–6655.
45. Larsen CF, Lauritsen J 1993 Epidemiology of acute wrist trauma. Int J Epidemiol 22:911–916.
46. Mallmin H, Ljunghall S 1993 Incidence of Colles' fracture in Uppsala: A prospective study of a quarter-million population. Acta Orthop Scand 63:213–215.
47. Silverman SL, Madison RE 1988 Decreased incidence of hip fracture in Hispanics, Asians, and Blacks: California hospital discharge data. Am J Public Health 78:1482–1483.
48. Ross PD, Norimatsu H, Davis JW, Yano K, Wasnich RD, Fujiwara S, Melton LJ III 1991 A comparison of hip fracture incidence among native Japanese, Japanese Americans, and American Caucasians. Am J Epidemiol 133:801–809.
49. Young RP, Lau EMC, Birjandi Z, Critchley JAJ II, Woo J 1996 Interethnic differences in hip fracture rate and the vitamin D receptor polymorphism. Lancet 348:688–689.
50. Grant SF, Reid DM, Blake G, Herd R, Fogelman I, Ralston SH 1996 Reduced bone density and osteoporosis associated with a polymorphic Sp1 binding site in the collagen type I alpha 1 gene. Nat Genet 14:203–205.
51. Cummings SR, Nevitt MC, Browner WS, Stone K, Fox KM, Ensrud KE, Cauley J, Black DM, Vogt TM 1995 Risk factors for hip fracture in white women. N Engl J Med 332:767–773.
52. Seeley DG, Kelsey J, Jergas M, Nevitt MC 1996 Predictors of ankle and foot fractures in older women. J Bone Miner Res 11:1347–1355.
53. Torgerson DJ, Campbell MK, Thomas RE, Reid DM 1996 Prediction of perimenopausal fractures by bone mineral density and other risk factors. J Bone Miner Res 11:293–297.
54. Niu T, Chen C, Cordell H, Yang J, Wang B, Wang Z, Fang Z, Schork NJ, Rosen CJ, Xu X 1999 A genome-wide scan for loci linked to forearm bone mineral density. Hum Genet 104:226–233.
55. Kanis JA 1997 Diagnosis of osteoporosis. Osteoporos Int 7(Suppl 3):108–116.
56. Faulkner KG, Cummings SR, Black D, Palermo L, Gluer C-C, Genant HK 1993 Simple measurement of femoral geometry predicts hip fracture: The study of osteoporotic fractures. J Bone Miner Res 8:1211–1217.
57. Kleerekoper M, Villanueva AR, Stanciu J, Rao DS, Parfitt AM 1985 The role of three-dimensional trabecular microstructure in the pathogenesis of vertebral compression fractures. Calcif Tissue Int 37:594–597.
58. Harris M, Nguyen TV, Howard GM, Kelly PJ, Eisman JA 1998 Genetic and environmental correlation between bone formation and bone mineral density: A twin study. Bone 22:141–145.
59. Kröger H, Kotaniemi A, Kröger L, Alhava E 1993 Development of bone mass and bone density of the spine and femoral neck: A prospective study of 65 children and adolescents. Bone Miner 23:171–182.
60. Kröger H, Kotaniemi A, Vainio P, Alhava E 1992 Bone densitometry of the spine and femur in children by dual-energy x-ray absorptiometry. Bone Miner 17:75–85.
61. Katzman DK, Bachrach LK, Carter DR, Marcus R 1991 Clinical and anthropometric correlates of bone mineral acquisition in healthy adolescent girls. J Clin Endocrinol Metab 73:1332–1339.
62. Plotkin H, Núñez M, ML AF, Zanchetta JR 1996 Lumbar spine bone density in Argentine children. Calcif Tissue Int 58:144–149.
63. Moro M, van der Meulen MCH, Kiratli BJ, Marcus R, Bachrach LK, Carter DR 1996 Body mass is the primary determinant of midfemoral bone acquisition during adolescent growth. Bone Miner 19:519–526.
64. Hangartner T 1990 Influence of fat on bone measurements with dual-energy absorptiometry. Bone Miner 9:71–78.
65. Puustjärvi K, Nieminen J, Räsänen T, Hyttinen M, Helminen HJ, Kröger H, Huuskonen J, Alhava E, Kovanen V 1999 Do more highly organized collagen fibrils increase bone mechanical strength in loss of mineral density after one-year running training? J Bone Miner Res 14:321–329.
66. Risch N, Zhang H 1995 Extreme discordant sib pairs for mapping quantitative trait loci in humans. Science 268:1584–1589.
67. Kruglyak L, Lander E 1995 Complete multipoint sib-pair analysis of qualitative and quantitative traits. Am J Hum Genet 57:439–454.
68. Eaves L, Meyer J 1994 Locating human quantitative trait loci: Guidelines for the selection of sibling pairs for genotyping. Behav Genet 24:443–455.
69. Zhang H, Risch N 1996 Mapping quantitative-trait loci in humans by use of extreme concordant sib pairs: Selected sampling by parental phenotypes. Am J Hum Genet 59:951–957.
70. Risch NJ, Zhang H 1996 Mapping quantitative trait loci with extreme discordant sib pairs: Sampling considerations. Am J Hum Genet 58:836–843.
71. Risch N 1990 Linkage strategies for genetically complex traits. II. The power of affected relative pairs. Am J Hum Genet 46:229–241.
72. Ottman R 1990 An epidemiologic approach to gene-environment interaction. Genet Epidemiol 11:75–86.
73. Lehmann EL 1983 Theory of Point Estimation. John Wiley and Sons, New York, NY, U.S.A.
74. Suarez BK, Rice J, Reich T 1978 The generalized sib pair IBD distribution: Its use in the detection of linkage. Ann Hum Genet 42:87–94.
75. Olson JM 1995 Multipoint linkage analysis using sib pairs: An interval mapping approach for dichotomous outcomes. Am J Hum Genet 56:788–798.
76. Akhter MP, Cullen DM, Pederson EA, Kimmel DB, Recker RR 1998 Bone response to in vivo mechanical loading in two breeds of mice. Calcif Tissue Int 63:442–449.

Address reprint requests to:
Robert R. Recker, M.D.
Osteoporosis Research Center
Creighton University
601 North 30th Street
Omaha, NE 68131, U.S.A.

Received in original form June 8, 1999; in revised form September 15, 1999; accepted October 26, 1999.

**APPENDIX 1. ESTIMATION OF THE RECURRENCE AND RELATIVE RISKS BY MAXIMUM LIKELIHOOD**

Recall from the text that the numerical value 1 indicates that an individual is affected with CF and 0 indicates that she is unaffected. The terms $K_1$, $K_2$, and $K_3$ are, respectively, the prevalence of CF in the probands, the sisters of the probands, and the mothers of the probands. Mathematically,
\[ K_1 = \Pr(\text{proband} = 1), \quad K_2 = \Pr(\text{sister} = 1), \quad K_3 = \Pr(\text{mother} = 1), \]
where $\Pr$ denotes a probability. Also defined in the text are
\[ K_s = \Pr(\text{sister} = 1|\text{proband} = 1), \quad K_D = \Pr(\text{proband} = 1|\text{mother} = 1), \quad \lambda_D = K_D/K_1, \quad \lambda_s = K_s/K_2. \]
For $\lambda_D$, we have
\[ \lambda_D = \frac{K_D}{K_1} = \frac{\Pr(\text{proband} = 1|\text{mother} = 1)}{\Pr(\text{proband} = 1)} = \frac{\Pr(\text{proband} = 1, \text{mother} = 1)}{\Pr(\text{proband} = 1)\Pr(\text{mother} = 1)} = \frac{\Pr(\text{mother} = 1|\text{proband} = 1)}{\Pr(\text{mother} = 1)} = \frac{K'_D}{K_3}, \]
where $K'_D = \Pr(\text{mother} = 1|\text{proband} = 1)$. Therefore, for computational convenience, we compute $\lambda_D$ via $K'_D/K_3$.

Any subject in the sample is either a proband, a sister of a proband, or a mother of a proband; furthermore, she is either affected or unaffected with CF. Therefore, conditional on the CF status of a proband, we can express the CF status of her sister or mother using the parameters defined earlier as
\[ \Pr(\text{sister} = 1|\text{proband} = 1) = K_s = K_2\lambda_s, \]
\[ \Pr(\text{sister} = 0|\text{proband} = 1) = 1 - K_2\lambda_s, \]
\[ \Pr(\text{sister} = 1|\text{proband} = 0) = \frac{\Pr(\text{sister} = 1) - \Pr(\text{proband} = 1)\Pr(\text{sister} = 1|\text{proband} = 1)}{1 - \Pr(\text{proband} = 1)} = \frac{K_2 - K_1 K_2\lambda_s}{1 - K_1}, \]
\[ \Pr(\text{sister} = 0|\text{proband} = 0) = 1 - \frac{K_2 - K_1 K_2\lambda_s}{1 - K_1}. \]
Similarly,
\[ \Pr(\text{mother} = 1|\text{proband} = 1) = K'_D = K_3\lambda_D, \quad \Pr(\text{mother} = 0|\text{proband} = 1) = 1 - K_3\lambda_D, \]
\[ \Pr(\text{mother} = 1|\text{proband} = 0) = \frac{K_3 - K_1 K_3\lambda_D}{1 - K_1}, \quad \Pr(\text{mother} = 0|\text{proband} = 0) = 1 - \frac{K_3 - K_1 K_3\lambda_D}{1 - K_1}. \]
Let $I_i$ be an index variable, so that $I_i = 0$ indicates that the mother in the $i$th family is unaffected and $I_i = 1$ indicates that she is affected with CF. Then the probability that, in the $i$th family, the proband is affected, $n_i$ of her sisters are affected, and $m_i$ sisters are unaffected is
\[ C_{n_i + m_i}^{m_i}(\lambda_s K_2)^{n_i}(1 - \lambda_s K_2)^{m_i}(\lambda_D K_3)^{I_i}(1 - \lambda_D K_3)^{1-I_i}, \]
where $C_{n_i + m_i}^{m_i}$ is the number of combinations of choosing $m_i$ individuals out of the total of $(m_i + n_i)$ individuals.
The probability that in the $i$th family the proband is not affected, her $n_i$ sisters are affected, $m_i$ sisters are unaffected, and the mother's status is $I_i$ is
\[ C_{n_i + m_i}^{m_i}\left(\frac{K_2 - K_1 K_2 \lambda_s}{1 - K_1}\right)^{n_i}\left(1 - \frac{K_2 - K_1 K_2 \lambda_s}{1 - K_1}\right)^{m_i} \times \left(\frac{K_3 - K_1 K_3 \lambda_0}{1 - K_1}\right)^{I_i}\left(1 - \frac{K_3 - K_1 K_3 \lambda_0}{1 - K_1}\right)^{1-I_i}. \]
Therefore, the likelihood ($L$) of the data is the product of these probabilities across the families of the 2396 probands. Given our data as indicated in Table 1, we have
\[ L = C(\lambda_s K_2)^{32}(1 - \lambda_s K_2)^{341}(\lambda_0 K_3)^{56}(1 - \lambda_0 K_3)^{237} \times \left(\frac{K_2 - K_1 K_2 \lambda_s}{1 - K_1}\right)^{137}\left(1 - \frac{K_2 - K_1 K_2 \lambda_s}{1 - K_1}\right)^{3293} \times \left(\frac{K_3 - K_1 K_3 \lambda_0}{1 - K_1}\right)^{304}\left(1 - \frac{K_3 - K_1 K_3 \lambda_0}{1 - K_1}\right)^{1874}, \quad (1) \]
where $C$ is a constant collecting the combinatorial coefficients, whose magnitude is not important in the maximum likelihood estimation. Given that $K_1$, $K_2$, and $K_3$ can easily be estimated from Table 1, the maximum likelihood estimates of $\lambda_s$ and $\lambda_0$ and their variances can be obtained by standard methods via the first and second derivatives of the likelihood function $L$ with respect to $\lambda_s$ and $\lambda_0$. Briefly, the maximum likelihood estimates of $\lambda_s$ and $\lambda_0$ are the values that simultaneously satisfy the equations
$$\frac{\partial L}{\partial \lambda_s} = 0 \quad \text{and} \quad \frac{\partial L}{\partial \lambda_0} = 0,$$
where $\partial L/\partial \lambda_s$ denotes the first partial derivative of $L$ with respect to $\lambda_s$, and $\partial L/\partial \lambda_0$ is similarly defined. The variance of $\lambda_s$ is the value of $-1/(\partial^2 \ln(L)/\partial \lambda_s^2)$ evaluated at the maximum likelihood estimates of $\lambda_s$ and $\lambda_0$. A similar approach can be adopted to write the likelihood of the whole data as a function of $K_s$ and $K_0$ to obtain the maximum likelihood estimates and the associated SDs of $K_s$ and $K_0$.
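For readers who wish to reproduce the estimation, a minimal numerical sketch is given below (ours, not the authors' code): it maximizes the log-likelihood corresponding to Eq. (1) over $\lambda_s$ and $\lambda_0$, treating the prevalences $K_1$, $K_2$, and $K_3$ as known inputs. The prevalence values shown are illustrative placeholders, not estimates from Table 1.

```python
# Minimal sketch: maximum likelihood estimation of lambda_s and lambda_0
# from the counts appearing in Eq. (1). K1, K2, K3 are treated as known;
# the values below are illustrative placeholders, not the paper's estimates.
import numpy as np
from scipy.optimize import minimize

counts = dict(sis_aff=32, sis_unaff=341,      # sisters of affected probands
              mot_aff=56, mot_unaff=237,      # mothers of affected probands
              sis_aff0=137, sis_unaff0=3293,  # sisters of unaffected probands
              mot_aff0=304, mot_unaff0=1874)  # mothers of unaffected probands
K1, K2, K3 = 0.10, 0.09, 0.12                 # placeholder prevalences

def neg_log_lik(params):
    lam_s, lam_0 = params
    ps1 = lam_s * K2                              # P(sister = 1 | proband = 1)
    pm1 = lam_0 * K3                              # P(mother = 1 | proband = 1)
    ps0 = (K2 - K1 * K2 * lam_s) / (1 - K1)       # P(sister = 1 | proband = 0)
    pm0 = (K3 - K1 * K3 * lam_0) / (1 - K1)       # P(mother = 1 | proband = 0)
    probs = np.array([ps1, pm1, ps0, pm0])
    if np.any(probs <= 0) or np.any(probs >= 1):
        return np.inf                             # outside the admissible region
    c = counts
    ll = (c['sis_aff'] * np.log(ps1) + c['sis_unaff'] * np.log(1 - ps1)
          + c['mot_aff'] * np.log(pm1) + c['mot_unaff'] * np.log(1 - pm1)
          + c['sis_aff0'] * np.log(ps0) + c['sis_unaff0'] * np.log(1 - ps0)
          + c['mot_aff0'] * np.log(pm0) + c['mot_unaff0'] * np.log(1 - pm0))
    return -ll

res = minimize(neg_log_lik, x0=[1.0, 1.0], method='Nelder-Mead')
lam_s_hat, lam_0_hat = res.x
print(lam_s_hat, lam_0_hat)
```

The variances would then follow from the curvature of the log-likelihood at the optimum, for example by finite-differencing $\partial^2 \ln L/\partial \lambda_s^2$, exactly as described above.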
**APPENDIX 2. ESTIMATION OF THE ADDITIVE ($\sigma_A^2$) AND DOMINANT ($\sigma_D^2$) GENETIC VARIANCES AND THE NARROW-SENSE HERITABILITY ($h^2$) OF CF**

Genetic variances of CF can be estimated by using the observed recurrence or relative risks and the population prevalence of the disease for different sets of relatives.\(^{42,74,75}\) However, the basic method developed by these authors does not allow for different prevalences of CF in different groups. Therefore, we generalize their method to accommodate different prevalences of CF in different groups, a practical situation in studies of diseases whose occurrence differs across age or sex groups. Thus, our extension here should be of some general utility.

Let $P$ denote the status of CF for the proband daughter and $M$ for the mother of the proband. Again, we use 1 to denote the affected status and 0 for the unaffected status. Because $P$ and $M$ are 0-1 indicator variables, $P M = 0$ unless $P = M = 1$; hence, the covariance of the occurrence of CF in the proband daughter and her mother is
$$\text{Cov}(P, M) = E(PM) - E(P)E(M) = \Pr(P = 1, M = 1) - \Pr(P = 1)\Pr(M = 1) = \Pr(P = 1 | M = 1)\Pr(M = 1) - K_1 K_3 = K_0 K_3 - K_1 K_3 = K_1 \lambda_0 K_3 - K_1 K_3 = K_1 K_3 (\lambda_0 - 1).$$
Similarly, the covariance between the proband and her sister ($S$) is
$$\text{Cov}(P, S) = K_1 K_2 (\lambda_s - 1).$$
From the principles of quantitative genetics,\(^{42}\) we know that
$$\text{Cov}(P, M) = \sigma_A^2 / 2 \quad \text{and} \quad \text{Cov}(P, S) = \sigma_A^2 / 2 + \sigma_D^2 / 4.$$
Therefore, we have
$$\sigma_A^2 = 2 K_1 K_3 (\lambda_0 - 1) \quad (1a)$$
and
$$\sigma_D^2 = 4 K_1 K_2 (\lambda_s - 1) - 2 \sigma_A^2. \quad (1b)$$
The narrow-sense heritability ($h^2$) is defined\(^{42}\) as
$$h^2 = \frac{\sigma_A^2}{\sigma_P^2},$$
where $\sigma_P^2$ is the phenotypic variance of CF, which can be obtained for each group from the prevalence ($K$) of that group:
$$\sigma_P^2 = K(1 - K).$$
Therefore, in the sisters of the probands, the $h^2$ for CF is
$$h^2 = \frac{2 K_1 K_3 (\lambda_0 - 1)}{K_2 (1 - K_2)}. \quad (2)$$
In the likelihood function [Eq. (1) in Appendix 1], if we substitute $\lambda_0$ with $h^2$ using the above relationship, the maximum likelihood estimate and its variance for $h^2$ can be obtained by the standard means outlined in Appendix 1. To obtain the maximum likelihood estimates of $\sigma_A^2$ and $\sigma_D^2$ and their variances, the same procedure can be adopted, substituting $\lambda_0$ and $\lambda_s$ with $\sigma_A^2$ and $\sigma_D^2$ [using the relationships of Eqs. (1a) and (1b)] in the likelihood function [Eq. (1) of Appendix 1].
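Once $\lambda_0$, $\lambda_s$, and the prevalences are estimated, the point estimates of Eqs. (1a), (1b), and (2) are closed-form; a minimal sketch follows (the inputs are hypothetical; substitute the estimates obtained in Appendix 1):

```python
# Sketch of the closed-form point estimates in Eqs. (1a), (1b), and (2).
# Inputs are hypothetical; substitute the Appendix 1 estimates in practice.
def genetic_components(K1, K2, K3, lam_0, lam_s):
    var_A = 2.0 * K1 * K3 * (lam_0 - 1.0)                 # Eq. (1a): additive variance
    var_D = 4.0 * K1 * K2 * (lam_s - 1.0) - 2.0 * var_A   # Eq. (1b): dominance variance
    h2 = var_A / (K2 * (1.0 - K2))                        # Eq. (2): heritability in sisters
    return var_A, var_D, h2
```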
Vitamin D Receptor Polymorphisms Are Associated with Altered Prognosis in Patients with Malignant Melanoma

Peter E. Hutchinson, Joy E. Osborne, John T. Lear, Andrew G. Smith, P. William Bowers, Paul N. Morris, Peter W. Jones, Christopher York, Richard C. Strange, and Anthony A. Fryer

Departments of Dermatology [P. E. H., J. E. O.] and Plastic Surgery [P. N. M.], Leicester Royal Infirmary, Leicester LE1 5WW; Department of Dermatology [J. T. L., A. G. S.], North Staffordshire Hospital, Stoke-on-Trent, Staffordshire ST4 7PA; Department of Dermatology, Royal Cornwall Hospitals, Truro, Cornwall TR1 3LJ [P. W. B.]; and Department of Mathematics [P. W. J.] and Centre for Pathology and Molecular Medicine, School of Postgraduate Medicine [R. C. S., A. A. F.], University of Keele, Staffordshire ST5 5BG, United Kingdom

ABSTRACT

Calcitriol [1,25(OH)₂D₃], the hormonal derivative of vitamin D₃, is an antiproliferative and prodifferentiation factor for several cell types, including cultured melanocytes and malignant melanoma (MM) cells. Several polymorphisms of the vitamin D receptor (VDR) gene have been described, including a FokI RFLP in exon 2, BsmI and ApaI polymorphisms in intron 8, and an adjacent TaqI RFLP in exon 9. Alterations in vitamin D/1,25(OH)₂D₃ levels and polymorphisms of the VDR have been shown to be associated with several systemic malignancies. We hypothesize that polymorphism in this gene may be associated with altered susceptibility and outcome in patients with MM. A hospital-based case-control study, using 316 MM cases and 108 controls, was used to assess associations with MM susceptibility. Breslow thickness, the most important single prognostic factor in MM, was used as the outcome measure. Polymorphisms at the FokI and TaqI restriction sites were determined using PCR-based methods. Polymorphism at the FokI, but not TaqI, RFLP was associated with an altered risk of MM (P = 0.014). More importantly, variant alleles were associated with increased Breslow thickness. Thus, homozygosity for variant alleles at both RFLPs (the ttff genotype combination) was significantly associated with thicker tumors (≥3.5 mm; P < 0.001; odds ratio = 31.5). Thus, polymorphisms of the VDR gene, which would be expected to result in impaired function, are associated with susceptibility and prognosis in MM. These data suggest that 1,25(OH)₂D₃, the ligand of the VDR, may have a protective influence in MM, as has been proposed for other malignancies.

INTRODUCTION

MM⁴ is the most serious cutaneous malignancy, and the prognosis of some tumors is very poor (1, 2). It is predominantly a disease of white-skinned people, and exposure to UV light is thought to be critical, although the relationship between risk and exposure is unclear (2). Other important risk factors for the occurrence of MM include the presence of excessive numbers of banal nevi, multiple atypical nevi, fair skin, red hair, and blue or green eyes. Breslow thickness at presentation remains the most important single prognostic factor for patients with cutaneous MM (3). In general, patients with thin tumors have a much longer survival than those with thick lesions; the 5-year survival rate for lesions <1.5-mm thick is 93%, compared with 67% for 1.5–3.49 mm and 37% for ≥3.5 mm (4). Risk factors for thicker tumors, and hence poorer prognosis, include age at initial presentation and tumor site. Relatively little is known of the genetic factors that mediate susceptibility to, and outcome of, sporadic MM.
Several putatively important genes, including the susceptibility genes melanocyte stimulating hormone receptor (5, 6), glutathione S-transferase GSTM1 (7), and cytochrome P450 CYP2D6 (8, 9), as well as the cancer candidate genes p16INK4a and p15INK4b (10), have been studied, although thus far only the CYP2D6 PM genotype has been associated with increased risk in independent studies. We propose that the VDR gene may influence susceptibility and outcome in MM. This view is supported by data showing that 1,25(OH)₂D₃ (the hormonal derivative of vitamin D₃ and the ligand of the VDR) has antiproliferative and prodifferentiation effects in VDR-expressing cell types (11–14). Furthermore, associations have been identified between 1,25(OH)₂D₃ and susceptibility to, and outcome of, systemic malignancies such as those of the breast, prostate, and colon. These include associations with both serum vitamin D/1,25(OH)₂D₃ levels and polymorphisms in the VDR gene (15–19). Similar supportive data exist for MM. Thus, melanocytes and MM cells express the VDR, and 1,25(OH)₂D₃ has an antiproliferative effect in vitro (20, 21). For example, stimulation of tyrosinase activity, a specific prodifferentiation stimulus, has been reported in melanocytes exposed to 1,25(OH)₂D₃ (21). In vivo, there is currently little evidence of involvement of vitamin D₃, although low serum levels of 1,25(OH)₂D₃ have been reported in patients with MM (22).

The role of sun exposure in MM is unclear. The current literature remains controversial, with most clinicians advocating a causative association between UV exposure and risk, whereas other studies support the view of a possible protective effect of vitamin D (generated at least in part by UV). For example, use of sunscreens is associated with increased MM risk, an all-year tan appears protective, and outdoor occupation appears to demonstrate no association with susceptibility to MM.

Five polymorphic sites have been identified in the VDR. These comprise RFLPs in exon 2 (\textit{FokI} restriction site), the last intron (\textit{BsmI} and \textit{ApaI} restriction sites), and an adjacent area of exon 9 (\textit{TaqI} restriction site), as well as a poly(A) microsatellite length polymorphism in the 3' untranslated region. The \textit{FokI} polymorphism results in an altered translation start site and has been shown to be functionally relevant (23). The other four sites demonstrate linkage disequilibrium, and there is evidence to suggest functional consequences of these polymorphisms (24).

Received 8/4/99; revised 10/20/99; accepted 10/25/99.
¹ Supported by the Cancer Research Campaign (Project Grants SP2207/0201 and SP2402/0101).
² To whom requests for reprints should be addressed, at Department of Dermatology, Leicester Royal Infirmary, Leicester LE1 5WW, United Kingdom. Phone: 44-0116-258-5762; Fax: 44-0116-258-6792.
³ Present address: Department of Dermatology, Bristol Royal Infirmary, Bristol BS2 8HW, UK.
⁴ The abbreviations used are: MM, malignant melanoma; VDR, vitamin D receptor; OR, odds ratio; 95% CI, 95% confidence interval; calcitriol, 1,25(OH)₂D₃; CDK, cyclin-dependent kinase.
Because these data support the view that polymorphism in the VDR gene may be an important determinant of susceptibility and outcome in patients with MM, the aim of the present study was to investigate the relationship between the VDR polymorphisms and susceptibility to and prognosis (as estimated by Breslow thickness) of MM. Because there is no evidence of linkage disequilibrium between the \textit{FokI} RFLP and the cluster of polymorphisms at the 3' end of the gene, and there is evidence to suggest functional consequences of each of these polymorphic regions, we have concentrated on the \textit{FokI} polymorphism and a representative example of the 3' cluster (the \textit{TaqI} RFLP).

**PATIENTS AND METHODS**

**Patients.** All MM cases ($n = 316$) were of Northern European Caucasian extraction, originally presented between January 1994 and December 1997, and attended the Dermatology Departments at the Leicester Royal Infirmary, North Staffordshire Hospital, or Royal Cornwall Hospitals between 1996 and 1997. All tumors were histologically diagnosed as \textit{in situ} or invasive MM. Lentigo maligna and lentigo maligna melanoma were not included, in view of the biological singularity of lentigo maligna. Patients with acral tumors or those with MM and other malignant pathologies (cutaneous or internal) were also excluded. We attempted to recruit all eligible patients, although some were randomly lost in busy clinics. None of the subjects approached refused to participate. This cohort comprises $\sim 80\%$ of all eligible patients and represents a typical sample of MM patients presenting to dermatologists in the participating centers. The controls ($n = 108$) comprised randomly recruited, hospital-based Northern European Caucasians attending these Dermatology Departments with basal cell papillomas and without clinical or histological evidence of malignancy. Subjects with a history of inflammatory pathology were also excluded. The study was performed with local Ethical Committee approval, and informed consent was obtained from all of the individuals recruited. Cases and controls were interviewed by a dermatologist (J. E. O., J. T. L., A. G. S., or P. W. B.). The following demographic and clinical data were recorded: patient age at presentation, gender, skin type in terms of propensity to sun burning and tanning using the Fitzpatrick classification (25), eye and hair color at age 21 years, tumor site, and Breslow thickness. Breslow thickness (defined as the vertical thickness of the tumor from the granular layer of the epidermis to the deepest part of the melanoma) was determined by specialist pathologists. On the basis of Breslow thickness, patients were divided into five categories: \textit{in situ}, $< 0.75$ mm, $0.75$–$1.49$ mm, $1.5$–$3.49$ mm, and $\geq 3.5$ mm. Table 1 shows the distribution of these clinical parameters in the total case group. As also indicated in Table 1, complete clinical data could not be obtained from all patients because of insufficient time in busy clinics (74–95\% for \textit{TaqI} and 72–92\% for \textit{FokI} genotyped cases).

**Determination of VDR Genotype.** All genotyping assays were performed by workers who were unaware of the clinical status of individual subjects.
PCR assays to identify VDR genotypes included one DNA sample (selected at random) of known genotype for each batch of eight samples of unknown genotype, at least one homozygous variant DNA (\textit{tt} or \textit{ff}) as a control for restriction enzyme digestion, one negative control (no DNA), and molecular weight markers. Approximately 15\% of all patient DNA samples were re-assayed on at least one occasion, and the genotype assignment was confirmed. All assignments were validated by an independent, blinded observer examining the agarose gels. DNA was extracted from peripheral blood (5 ml; collected into EDTA) using standard phenol-chloroform methods. PCR-RFLP-based assays were used to identify alleles containing the exon 2 (\textit{FokI}) and exon 9 (\textit{TaqI}) variants. Primers were selected based on the methods of Gross \textit{et al}. (Ref. 26; \textit{FokI}) and Spector \textit{et al}. (Ref. 27; \textit{TaqI}) with modifications. The exon 2 wild-type (\textit{F}) and variant (\textit{f}) alleles were identified using primers 5'-AGCTGGCCCTTGCACTGACTCTGCTCT-3' and 5'-ATGGAAACACCTTGTCTTTCTCCTTC-3' to amplify a 265-bp product. Amplification of template DNA was performed in an incubation mixture (total volume, 50 $\mu$l) comprising 20 pmol of each primer, 200 $\mu$M deoxynucleotide triphosphates, 1.5 mM MgCl$_2$, and 1 unit of \textit{Taq} polymerase in buffer containing 10 mM Tris-HCl (pH 9.0), 50 mM KCl, and 0.1\% (w/v) Triton X-100. The PCR conditions were: initial denaturation (94°C for 3 min), followed by 30 cycles of denaturation (94°C for 30 s), annealing (60°C for 30 s), and extension (72°C for 30 s), followed by a final extension at 72°C for 5 min. PCR products were then digested with \textit{FokI} (37°C for 20 h), and the products were examined after electrophoresis in 2\% agarose gels. The \textit{F} allele was refractory to digestion, whereas \textit{f} was identified by fragments of 196 and 69 bp. The \textit{TaqI} wild-type (\textit{T}) and variant (\textit{t}) alleles were identified using the forward primer from Spector \textit{et al}. (27), 5'-CAGAGCATGGACAGGGAGCAAG-3', and a novel reverse primer, 5'-CGGCAGCGGATGTACGTCTGCAG-3', to amplify a 345-bp PCR product. The PCR conditions were as for the \textit{FokI} RFLP. PCR products were then digested with \textit{TaqI} (65°C for 20 h), and the products were examined after electrophoresis in 2\% agarose gels. The \textit{T} allele was refractory to digestion, whereas \textit{t} was identified by fragments of 260 and 85 bp. We attempted to obtain genotype data from all samples, but in some earlier cases, DNA was exhausted or refractory to amplification.

**Statistical Analysis.** Statistical analysis was undertaken using the Stata software package (version 5.0; Stata Corp., College Station, TX). $\chi^2$ tests were used to test for homogeneity between and within cases and controls (28). Because some frequencies were small, the StatXact-Turbo statistical package (version 3; Cytel Software Corp., Cambridge, MA) was used to obtain exact significance levels (\textit{P}s).
Logistic regression analysis was used to examine differences between cases and controls while simultaneously correcting for imbalances in age and gender.

Table 1 Frequency of TaqI and FokI polymorphisms in controls and MM cases

| | TaqI | | | | FokI | | | |
|----------------|---------------|------------|------------|------------|--------------|------------|------------|------------|
| | n | TT | Tt | tt | n | FF | Ff | ff |
| Controls | 93 | 39 (41.9%) | 41 (44.1%) | 13 (14.0%) | 108 | 52 (48.1%) | 44 (40.7%) | 12 (11.1%) |
| MM cases<sup>a,b</sup> | 261 | 94 (36.0%) | 127 (48.7%)| 40 (15.3%) | 293 | 105 (35.8%)| 142 (48.5%)| 46 (15.7%) |
| Site | | | | | | | | |
| Head/Neck | 21 | 4 (19.1%) | 14 (66.7%) | 3 (14.3%) | 22 | 13 (59.1%) | 9 (40.9%) | 0 (0.0%) |
| Trunk | 85 | 31 (36.5%) | 40 (47.1%) | 14 (16.5%) | 98 | 29 (29.6%) | 50 (51.0%) | 19 (19.4%) |
| U Limbs | 35 | 14 (40.0%) | 14 (40.0%) | 7 (20.0%) | 41 | 18 (43.9%) | 17 (41.5%) | 6 (14.6%) |
| L Limbs | 99 | 33 (33.3%) | 51 (51.5%) | 15 (15.2%) | 108 | 36 (33.3%) | 57 (52.8%) | 15 (13.9%) |
| Total | 240 | | | | 269 | | | |
| Skin type | | | | | | | | |
| 1 | 58 | 18 (31.0%) | 32 (55.2%) | 8 (13.8%) | 64 | 20 (31.3%) | 36 (56.3%) | 8 (12.5%) |
| 2 | 146 | 53 (36.3%) | 66 (45.2%) | 27 (18.5%) | 152 | 56 (36.8%) | 71 (46.7%) | 25 (16.5%) |
| 3 | 35 | 14 (40.0%) | 17 (48.6%) | 4 (11.4%) | 41 | 15 (36.6%) | 20 (48.8%) | 6 (14.6%) |
| 4 | 5 | 2 (40.0%) | 2 (40.0%) | 1 (20.0%) | 9 | 2 (22.2%) | 5 (55.6%) | 2 (22.2%) |
| Total | 244 | | | | 266 | | | |
| Eye color | | | | | | | | |
| Brown | 54 | 19 (35.2%) | 25 (46.3%) | 10 (18.5%) | 58 | 20 (34.5%) | 28 (48.3%) | 10 (17.2%) |
| Blue | 141 | 50 (35.5%) | 70 (49.7%) | 21 (14.9%) | 154 | 58 (37.7%) | 76 (49.4%) | 20 (13.0%) |
| Green | 38 | 19 (39.5%) | 16 (42.1%) | 7 (18.4%) | 43 | 11 (25.6%) | 24 (55.8%) | 8 (18.6%) |
| Hazel | 14 | 3 (21.4%) | 10 (71.4%) | 1 (7.1%) | 15 | 5 (33.3%) | 6 (40.0%) | 4 (26.7%) |
| Total | 247 | | | | 270 | | | |
| Hair color | | | | | | | | |
| Red | 22 | 7 (31.8%) | 15 (68.2%) | 0 (0.0%) | 23 | 9 (39.1%) | 14 (60.9%) | 0 (0.0%) |
| Blonde | 34 | 12 (35.3%) | 16 (47.1%) | 6 (17.7%) | 40 | 14 (35.0%) | 22 (55.0%) | 4 (10.0%) |
| Brown | 136 | 52 (38.2%) | 62 (45.6%) | 22 (16.2%) | 145 | 49 (33.8%) | 64 (44.1%) | 32 (22.1%) |
| Black | 1 | 0 (0.0%) | 1 (100.0%) | 0 (0.0%) | 3 | 1 (33.3%) | 2 (66.7%) | 0 (0.0%) |
| Total | 193 | | | | 211 | | | |
| Breslow<sup>c</sup> | | | | | | | | |
| In situ | 40 | 13 (32.5%) | 22 (55.0%) | 5 (12.5%) | 46 | 14 (30.4%) | 23 (50.0%) | 9 (19.6%) |
| 0.1–0.74 mm | 72 | 28 (38.9%) | 33 (45.8%) | 11 (15.3%) | 75 | 27 (36.0%) | 35 (46.7%) | 13 (17.3%) |
| 0.75–1.4 mm | 47 | 18 (38.3%) | 24 (51.1%) | 5 (10.6%) | 62 | 24 (38.7%) | 29 (46.8%) | 9 (14.5%) |
| 1.5–3.4 mm | 35 | 9 (25.7%) | 18 (51.4%) | 8 (22.9%) | 38 | 13 (34.2%) | 19 (50.0%) | 6 (15.8%) |
| ≥3.5 mm | 12 | 4 (33.3%) | 4 (33.3%) | 4 (33.3%) | 14 | 2 (14.3%) | 8 (57.1%) | 4 (28.6%) |
| Total | 206 | | | | 235 | | | |

<sup>a</sup> Analysis was performed using logistic regression.
<sup>b</sup> Proportion of subjects with the FF genotype in cases versus controls; \( P = 0.026 \); OR, 0.60; 95% CI, 0.38–0.94 (uncorrected); and \( P = 0.029 \); OR, 0.59; 95% CI, 0.37–0.95 (corrected for age and gender).
<sup>c</sup> Proportion of patients with the tt genotype in MM cases with a Breslow thickness of ≥1.5 mm compared with <1.5 mm; \( P = 0.047 \); OR, 2.25; 95% CI, 1.01–5.02 (uncorrected); and \( P = 0.131 \); OR, 1.92; 95% CI, 0.82–4.48 (corrected for age and gender).
Logistic regression was also used to examine differences in genotype frequencies between cases stratified by Breslow thickness, while correcting for age at presentation and gender. Significant associations of combined genotypes (e.g., ttff) were only accepted if they remained significant in the presence of the main effects (i.e., a model including ttff, tt, and ff). If the significance of the combined genotype disappeared, this would suggest that the factors were acting independently and that the significance of the combined effect was driven by the strength of either (or both) of the component factors. Associations with Breslow thickness were confirmed using linear regression after transformation of the thickness values to normality and correction for age at presentation and gender. Because some tumors were in situ (0 mm thick), the transformation was performed using the formula: ln(Breslow thickness + 1).

RESULTS

Three hundred and sixteen patients with MM (mean age ± SD, 53.3 ± 16.7 years; 67% female) and 108 controls (mean age ± SD, 55.7 ± 19.7 years; 50% female) were recruited.

Case-Control Analysis. Table 2A shows the frequencies of the TaqI and FokI alleles in controls and MM cases. All allele frequencies conformed to Hardy-Weinberg equilibrium. The F allele was significantly less common in MM cases than controls (\( P = 0.029 \); OR, 0.69; 95% CI, 0.50–0.96). Table 2B shows the relationship between TaqI and FokI genotypes in cases and controls. No significant correlations between genotypes at the two sites were identified in either controls (\( P = 0.365 \)) or cases (\( P = 0.847 \)), suggesting that the two polymorphisms did not demonstrate linkage disequilibrium. Table 1 shows the frequencies of FokI and TaqI genotypes in controls compared with MM cases. There was a decreased proportion of individuals with the FokI FF genotype in cases versus controls. Thus, for FF versus other FokI genotypes, the uncorrected OR for MM was 0.60 (\( P = 0.026 \)). The findings remained significant after correction for age and gender using multivariate logistic regression (OR, 0.59; \( P = 0.029 \)). The estimated risk reduction attributable to the FF genotype was 23.7% (95% CI, 1.2–51.3%).

Table 2 Frequency of TaqI and FokI alleles and concordance between genotypes in controls and MM cases

A. Allele frequencies

| | TaqI | | | FokI | | |
|----------------|--------------|------------|------------|--------------|------------|------------|
| | n | T | t | n | F | f |
| Controls | 186 | 119 (64.0%)| 67 (36.0%) | 216 | 148 (68.5%)| 68 (31.5%) |
| MM cases<sup>a</sup> | 522 | 315 (60.3%)| 207 (39.7%)| 586 | 352 (60.1%)| 234 (39.9%)|

B. Concordance between FokI and TaqI genotypes

| FokI | Controls: TT | Tt | tt | MM cases: TT | Tt | tt | tt (% of cases) |
|------|----|----|----|----|----|----|----|
| FF | 17 | 12 | 7 | 30 | 49 | 14 | 15.0% |
| Ff | 17 | 17 | 5 | 44 | 57 | 21 | 17.2% |
| ff | 3 | 6 | 0 | 16 | 19 | 5 | 12.5% |

<sup>a</sup> Proportion of subjects with the F allele in cases versus controls; $P = 0.029$; OR, 0.69; 95% CI, 0.50–0.96.

Association of VDR Genotype with Patient Characteristics. Table 1 shows the frequency of VDR genotypes in the MM cases stratified by patient characteristics. There was no significant association between genotype frequencies and tumor site, skin type, or eye color. However, the ff and tt genotypes were significantly less common in MM patients with red hair than in patients with other hair colors ($P = 0.021$, $\chi^2_1 = 5.31$ and $P = 0.040$, $\chi^2_1 = 4.21$, respectively).
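The Hardy-Weinberg check reported in the Case-Control Analysis above is easy to reproduce from the genotype counts in Table 1; the sketch below (our illustration, not the authors' code) applies a 1-df $\chi^2$ goodness-of-fit test to the control TaqI genotypes.

```python
# Sketch: chi-square test of Hardy-Weinberg equilibrium for one biallelic
# marker, applied here to the control TaqI genotype counts from Table 1.
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)                       # frequency of the A allele
    expected = [n * p**2, 2 * n * p * (1 - p), n * (1 - p)**2]
    observed = [n_AA, n_Aa, n_aa]
    stat = sum((o - e)**2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)                      # 3 classes - 1 - 1 estimated freq

print(hwe_chi2(39, 41, 13))   # controls, TaqI: TT/Tt/tt; allele freq matches Table 2A (64.0%)
```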
Association of VDR Genotypes with Breslow Thickness. Patients were categorized by Breslow thickness (Table 1). In tumors $\geq 1.5$-mm thick, for TaqI, there was an increased proportion of the tt genotype ($P = 0.047$), but there was no obvious effect for FokI ($P = 0.701$). Homozygosity for variant alleles at either the FokI or TaqI locus (tt or ff genotypes) was associated with an increased proportion of tumors $\geq 3.5$ mm thick, although this did not achieve statistical significance (tt: $P = 0.105$; OR, 2.84; and ff: $P = 0.266$; OR, 1.99, uncorrected). The effects of combinations of the TaqI and FokI polymorphisms are shown in Table 3. There was an association of the ttff combined genotype with thicker tumors, using either $\geq 1.5$ mm ($P = 0.065$) or $\geq 3.5$ mm ($P < 0.001$) as the cutoff. These results retained similar significance, particularly for tumors $\geq 3.5$-mm thick, after correction for potential confounding factors (age, gender, and tumor site). Thus, the mean Breslow thickness in patients with the ttff genotype combination was 2.9 mm compared with 1.1 mm in patients with other genotype combinations. This association was further confirmed using linear regression analysis, which showed that the ttff genotype was correlated with Breslow thickness ($P = 0.002$, transformed to normality and corrected for age and gender). Significant associations were also identified between Breslow thickness and combinations of genotypes including the genotypes Ttff and ttFf, although these were less effective at predicting Breslow thickness.

DISCUSSION

We have postulated that polymorphism in the VDR gene is important in MM. This hypothesis is supported by data showing that 1,25(OH)$_2$D$_3$ inhibits cell proliferation (12, 13) and stimulates differentiation (11, 14) and apoptosis (29) in several cell types expressing the VDR. There is evidence that 1,25(OH)$_2$D$_3$ has an anticancer effect in several systemic cancers such as breast (30), prostate (31), colon (32), leukemia (33), and kidney (34). Furthermore, *in vitro* studies have demonstrated that 1,25(OH)$_2$D$_3$ inhibits growth of cultured malignant cells (11–14, 34) and inhibits experimental carcinogenesis (35, 36). *In vivo*, decreased mean serum levels of 1,25(OH)$_2$D$_3$ or its precursors have been reported in carcinoma of the breast (15), prostate (17), and colon (19). More recently, polymorphisms of the VDR have been reported to be associated with cancer of the breast [FokI and poly(A) site RFLP; Ref. 16] and prostate [BsmI and poly(A) site RFLP; Refs. 18 and 37]. Data for MM are similar, although more limited. Normal (38) and malignant melanocytes (20) express the VDR, and 1,25(OH)$_2$D$_3$ has been shown to inhibit normal (21) and malignant melanocyte (20) growth *in vitro*. In a study of 1,25(OH)$_2$D$_3$ serum levels in MM patients, lower levels were found compared with controls, although this did not achieve statistical significance (22). In our study, homozygosity for the wild-type (F) allele at the FokI restriction site was associated with a reduced risk of MM, with a risk reduction attributable to the FF genotype estimated at 23.7%. Furthermore, the proportion of F alleles was significantly lower in the case group compared with controls. The number of controls, however, was relatively small, and larger cohorts would be required to reduce the risk of both type I and type II errors. In this initial study, we have used hospital-based controls.
Selection of control subjects is always difficult. Although the use of "normal" volunteers or blood donors would reduce the risk of potential bias due to occult associations with other disease processes, such subjects are generally not examined by a clinician, so the possibility of undetected malignant or inflammatory pathologies cannot be excluded. By using hospital-based controls, it was possible to include only controls who were clinically free of other malignant or inflammatory pathologies. Furthermore, the control genotype frequencies were similar to those described in other studies (18, 26, 27, 39), supporting the view that our control group is representative of the normal population.

The FokI RFLP has been reported previously to be associated with breast cancer (16), where the FF genotype was associated with a decreased risk of ~50% in certain racial groups. The poly(A) polymorphism (classified into long, L, or short, S) has been associated with altered risk of breast (16) and prostate (18) cancer. In breast cancer, the LL and LS genotypes were also associated with a ~50% reduction in risk (16). However, in prostate cancer, the presence of L, whether in the homozygous (LL) or heterozygous (LS) state, was associated with a 4–5-fold increased risk of prostate cancer (18, 37). Because the TaqI restriction site is in strong linkage disequilibrium with the poly(A) polymorphism (T demonstrates linkage disequilibrium with L; Ref. 39), the findings in breast cancer are comparable with our findings in MM, although our data on the TaqI RFLP did not achieve statistical significance.

Table 3 Interactions between VDR genotypes and association with Breslow thickness

Significant associations of combined genotypes (e.g., ttff) were only accepted if they remained significant in the presence of the main effects (i.e., a model including ttff, tt, and ff). The reference category is all other genotype combinations (e.g., all other patients except those with ttff).

| Genotype combination | <1.5 mm | ≥1.5 mm | P | OR | 95% CI |
|----------------------|---------|---------|-----|-------|------------|
| ttff<sup>a</sup> | 2/158 (1.3%) | 3/45 (6.7%) | 0.065 | 5.6 | 0.9–34.4 |
| ttff<sup>b</sup> | | | 0.023 | 9.2 | 1.4–61.8 |
| ttff<sup>c</sup> | | | 0.062 | 7.2 | 0.9–57.2 |
| ttff or ttFf<sup>a</sup> | 10/158 (6.3%) | 9/45 (20.0%) | 0.008 | 3.7 | 1.4–9.8 |
| ttff or ttFf<sup>b</sup> | | | 0.009 | 3.9 | 1.4–11.0 |
| ttff or ttFf<sup>c</sup> | | | 0.007 | 4.3 | 1.5–12.5 |
| ttff or Ttff<sup>a</sup> | 17/158 (10.8%) | 6/45 (13.3%) | 0.631 | 1.3 | 0.5–3.5 |
| ttff or Ttff<sup>b</sup> | | | 0.292 | 1.8 | 0.6–5.1 |
| ttff or Ttff<sup>c</sup> | | | 0.336 | 1.7 | 0.6–5.4 |

| Genotype combination | <3.5 mm | ≥3.5 mm | P | OR | 95% CI |
|----------------------|---------|---------|-----|-------|------------|
| ttff<sup>a</sup> | 2/191 (1.1%) | 3/12 (25.0%) | <0.001 | 31.5 | 4.7–212.7 |
| ttff<sup>b</sup> | | | <0.001 | 93.2 | 9.4–926.6 |
| ttff<sup>c</sup> | | | <0.001 | 108.5 | 8.2–1438.8 |
| ttff or ttFf<sup>a</sup> | 16/191 (8.4%) | 3/12 (25.0%) | 0.071 | 3.6 | 0.9–14.8 |
| ttff or ttFf<sup>b</sup> | | | 0.075 | 3.8 | 0.9–16.4 |
| ttff or ttFf<sup>c</sup> | | | 0.090 | 4.8 | 0.8–29.5 |
| ttff or Ttff<sup>a</sup> | 19/191 (9.9%) | 4/12 (33.3%) | 0.022 | 4.5 | 1.2–16.4 |
| ttff or Ttff<sup>b</sup> | | | 0.005 | 7.8 | 1.8–32.9 |
| ttff or Ttff<sup>c</sup> | | | 0.006 | 12.3 | 2.1–73.1 |

<sup>a</sup> Uncorrected data.
<sup>b</sup> Corrected for age at presentation and gender.
<sup>c</sup> Corrected for age at presentation, gender, and head/neck tumor site.
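The uncorrected odds ratios and confidence intervals in Table 3 can be reproduced directly from the raw counts; below is a minimal sketch (our illustration, not the authors' code) using the ttff counts for the ≥3.5-mm cutoff, which recovers OR ≈ 31.5 and the 4.7–212.7 interval via Woolf's logit method.

```python
# Sketch: odds ratio with a Woolf (logit) 95% CI from a 2x2 table.
# Counts: ttff vs. other genotypes, tumors >=3.5 mm vs. <3.5 mm (Table 3).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: exposed with/without outcome; c/d: unexposed with/without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)         # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# ttff: 3 of 12 thick tumors, 2 of 191 thin tumors
print(odds_ratio_ci(a=3, b=9, c=2, d=189))   # ~ (31.5, 4.7, 212.7)
```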
Our data also identified an association between VDR genotypes and red hair in patients with MM. There was insufficient hair color data on our control subjects to examine whether this was a general phenomenon. Although the mechanism for this association is not known, other studies have identified links between polymorphism at other loci and hair color in MM (6, 8). These data suggest that the molecular route by which patients with red hair develop MM may differ from that of patients with other hair colors, supporting the view that these patients represent a high-risk subgroup. However, these data require confirmation in independent studies, including in control individuals. More significantly, we have identified significant associations between VDR genotypes and outcome in patients with MM. Thus, our data suggest that VDR polymorphism is a better determinant of outcome in MM than of its initiation. Melanoma depth is well recognized as an important prognostic indicator with respect to risk of metastatic disease and survival (40). In general, for both restriction sites, the proportion of thick MMs (with either the ≥1.5 or ≥3.5 mm cutoff) increased with an increasing number of variant alleles. The effect of VDR genotypes on Breslow thickness was markedly increased when the two polymorphic sites were considered together. Thus, the combined ttff genotype was associated with tumors ≥1.5-mm thick but particularly those ≥3.5-mm thick (P < 0.001; Table 3). We also corrected the data for the potential confounding effects of gender, tumor site, and age at presentation because thicker tumors are associated with male gender, head/neck tumor site, and older age. The association of the ttff genotype remained significant, suggesting that the effect on Breslow thickness is independent of these factors. Similar results were obtained with other genotype combinations, although the magnitudes of the effects were smaller, suggesting that the heterozygote genotypes were of intermediate importance in determining Breslow thickness. Similarly, in carcinoma of the prostate, poly(A) microsatellite variants are reported to be associated with more advanced disease (37). In addition, low serum levels of 1,25(OH)<sub>2</sub>D<sub>3</sub> have been implicated in metastatic rather than in situ disease in prostatic cancer, suggesting an impact on tumor progression rather than development (17). The polymorphism at the FokI restriction site (T-C transition) produces an ATG start codon resulting in translation initiation 10 bp upstream and therefore the production of a lengthened protein of 427 amino acids (26, 41). The F allele (restriction site absent, ACG), which results in a shorter protein, has been shown to be more effective at activating the transcription of a VDR reporter construct (23), thereby indicating that the polymorphism is functionally significant. The cluster of polymorphisms at the 3' end of VDR, which includes *TaqI*, is in mutually tight linkage disequilibrium, and a representative, *BsmI*, is known to be in linkage disequilibrium with the poly(A) microsatellite (39). It has been suggested that the length of the poly(A) repeat affects mRNA stability or is tightly linked to a further functionally significant site (24).
The net effect of the *ff* and *tt* polymorphisms can be envisaged as a reduction in the cellular effect of 1,25(OH)$_2$D$_3$ and therefore a growth advantage of the melanocytes. This conclusion is supported by the increased effect of combined homozygosity, which would be expected to have a more profound effect on the VDR protein. These data support the hypothesis that the *VDR* genotype has a significant role in determining tumor occurrence and behavior in MM and indicate a role for vitamin D in melanoma cell cycle control and differentiation *in vivo*. There is evidence of a blocking effect of 1,25(OH)$_2$D$_3$ at the transition from the G$_1$ to the S phase of the cell cycle via several mechanisms, such as stimulation of the CDK inhibitory proteins p21 (42), which contains a VDR response element (43), and p27 (44), and inhibition of cyclin D$_1$ (45). It is of interest that the other reported genetic changes associated with melanoma also have an impact at the G$_1$ to S-phase checkpoint. These include mutations of the CDK inhibitor genes *p16INK4a* (46–48) and *p15INK4b* (10) and mutations in the *CDK4* gene (49, 50). This point in the cell cycle may therefore be of pivotal importance in the development and/or progression of MM. It has been argued above that the effect of the VDR polymorphisms reported here is a functional, cellular deficiency of 1,25(OH)$_2$D$_3$. Decreased serum levels of 1,25(OH)$_2$D$_3$ have been reported in certain cancers (15, 17, 19), including MM (22). Furthermore, some studies suggest that vitamin D deficiency, resulting from decreased cutaneous production from solar irradiation, may be contributory to the development of carcinoma of the breast (51), prostate (52), and colon (53). The role of sun exposure in MM is, however, more complex. On the one hand, it is firmly held by the majority of clinicians that solar radiation is causative in MM; on the other, there is the potential deleterious effect of vitamin D deficiency, a cause of which is lack of sun exposure. Inhibition of MM cell growth *in vitro* by 1,25(OH)$_2$D$_3$ (20), the effect of VDR polymorphisms described in this study, and low serum levels of vitamin D (22) implicate a possible role of vitamin D deficiency in MM pathogenesis. In addition, a protective effect of vitamin D (produced at least in part by sun exposure) might explain previous ambiguous results, such as the increased incidence of melanoma associated with sunscreen use (54), the protective effect of an all-year tan (55), and the lack of an association of MM with outdoor occupation (56). The effect on vitamin D status in these circumstances warrants further investigation, particularly in the climates where these associations have been reported.

**REFERENCES**

1. Boyle, P., Maisonneuve, P., and Dore, J.-F. Epidemiology of malignant melanoma. *Br. Med. Bull.*, 51: 523–547, 1995.
2. Rivers, J. K. Melanoma. *Lancet*, 347: 803–806, 1996.
3. Mackie, R. M., Smyth, J. F., Soutar, D. S., Watson, A. C. H., McLaren, K. M., McPheie, J. L., Hutcheon, A. W., Smyth, J. F., Calman, K. C., Hunter, J. A. A., MacGillivray, J. B., Rankin, R., and Kemp, I. W. Malignant melanoma in Scotland 1979–1983. *Lancet*, 2: 859–862, 1985.
4. MacKie, R., Hunter, J. A., Aitchison, T. C., Hole, D., McLaren, K., Rankin, R., Blessing, K., Evans, A. T., Hutcheon, A. W., and Jones, D. H. Cutaneous malignant melanoma, Scotland, 1979–1989. The Scottish Melanoma Group. *Lancet*, 339: 971–975, 1992.
5. Valverde, P., Healy, E., Sikkink, S., Haldane, F., Thody, A. J., Carothers, A., Jackson, I., and Rees, J. L. The Asp84Glu variant of the melanocortin-1 receptor (MC1R) is associated with melanoma. *Hum. Mol. Genet.*, 5: 1663–1666, 1996.
6. Ichii-Jones, F., Lear, J. T., Heagerty, A. H. M., Smith, A. G., Hutchinson, P. E., Osborne, J., Bowers, B., Jones, P. W., Davies, E., Ollier, W. E. R., Thomson, W., Yengi, L., Bath, J., Fryer, A. A., and Strange, R. C. Susceptibility to melanoma: influence of skin type and polymorphism in the melanocyte stimulating hormone receptor gene. *J. Investig. Dermatol.*, 111: 218–221, 1998.
7. Lafuente, A., Molina, R., Palou, J., Castel, T., Moral, A., and Trias, M. Phenotype of glutathione S-transferase Mu (GSTM1) and susceptibility to malignant melanoma. MMM group. Multidisciplinary Malignant Melanoma Group. *Br. J. Cancer*, 72: 324–326, 1995.
8. Wolf, C. R., Smith, C. A. D., Gough, A. C., Moss, J. E., Vallis, K. A., Howard, G., Carey, F. J., Mills, K., McNee, W., Carmichael, J., and Spurr, N. Relationship between the debrisoquine polymorphism and cancer susceptibility. *Carcinogenesis (Lond.)*, 13: 1035–1038, 1992.
9. Strange, R. C., Ellison, T., Ichii-Jones, F., Bath, J., Hoban, P., Lear, J. T., Smith, A. G., Hutchinson, P. E., Osborne, J., Bowers, B., Jones, P. W., and Fryer, A. A. Cytochrome P450 CYP2D6 genotypes: association with hair colour, Breslow thickness and melanocyte stimulating hormone receptor alleles in patients with malignant melanoma. *Pharmacogenetics*, 9: 269–276, 1999.
10. Wagner, S. N., Wagner, C., Briedigkeit, L., and Goos, M. Homozygous deletion of the *p16INK4a* and the *p15INK4b* tumour suppressor genes in a subset of human sporadic cutaneous malignant melanoma. *Br. J. Dermatol.*, 138: 13–21, 1998.
11. Mangelsdorf, D. J., Koeffler, H. P., Donaldson, C. A., Pike, J. W., and Haussler, M. R. 1,25-Dihydroxyvitamin D3-induced differentiation in a human promyelocytic leukemia cell line (HL-60): receptor-mediated maturation to macrophage-like cells. *J. Cell. Biol.*, 98: 391–398, 1984.
12. Peehl, D. M., Skowronski, R. J., Leung, G. K., Wong, S. T., Stamey, T. A., and Feldman, D. Antiproliferative effects of 1,25-dihydroxyvitamin D3 on primary cultures of human prostatic cells. *Cancer Res.*, 54: 805–810, 1994.
13. Shabahang, M., Buras, R. R., Davoodi, F., Schumaker, L. M., Nauta, R. J., and Evans, S. R. 1,25-Dihydroxyvitamin D3 receptor as a marker of human colon carcinoma cell line differentiation and growth inhibition. *Cancer Res.*, 53: 3712–3718, 1993.
14. Frappart, L., Falette, N., Lefebvre, M. F., Bremond, A., Vauzelle, J. L., and Saez, S. *In vitro* study of effects of 1,25 dihydroxyvitamin D3 on the morphology of human breast cancer cell line BT20. *Differentiation*, 40: 63–69, 1989.
15. Janowsky, E. C., Hulka, B. S., and Lester, G. E. Vitamin D levels as a risk for female breast cancer. *In*: A. W. Norman, R. Bouillon, and M. Thomasset (eds.), Vitamin D: A Pluripotent Steroid Hormone: Structural Studies, Molecular Endocrinology and Clinical Applications, pp. 496–497. Berlin: de Gruyter, 1994.
16. Ingles, S. A., Haile, R., Henderson, B., Kolonel, L., and Gerhard, C. Association of vitamin D receptor genetic polymorphism with breast cancer risk in African-American and Hispanic women. *In*: A. W. Norman, R. Bouillon, and M. Thomasset (eds.), Vitamin D. Chemistry, Biology and Clinical Applications of the Steroid Hormone, pp. 813–814. University of California, Riverside, CA, 1997.
17. Corder, E. H., Guess, H. A., and Hulka, B. S. Vitamin D and prostate cancer: a prediagnostic study with stored sera. *Cancer Epidemiol. Biomark. Prev.*, 2: 467–472, 1993.
18. Taylor, J. A., Hirvonen, A., Watson, M., Pittman, G., Mohler, J. L., and Bell, D. A. Association of prostate cancer with vitamin D receptor gene polymorphism. *Cancer Res.*, 56: 4108–4110, 1996.
19. Glass, A. R., Kikendall, J. W., Sobin, L. H., and Bowen, P. E. Serum 25-hydroxyvitamin D concentrations in colonic neoplasia. *Horm. Metab. Res.*, 25: 397–398, 1993.
20. Colston, K., Colston, M. J., and Feldman, D. 1,25-Dihydroxyvitamin D$_3$ and malignant melanoma: the presence of receptors and inhibition of cell growth in culture. *Endocrinology*, 108: 1083–1086, 1981.
21. Ranson, M., Posen, S., and Mason, R. Human melanocytes as a target tissue for hormones: *in vitro* studies with 1α,25-dihydroxyvitamin D$_3$, α-melanocyte stimulating hormone, and β-estradiol. *J. Investig. Dermatol.*, 91: 593–598, 1988.
22. Cornwell, M. L., Comstock, G. W., Holick, M. F., and Bush, T. L. Prediagnostic serum levels of 1,25-dihydroxyvitamin D and malignant melanoma. *Photodermatol. Photoimmunol. Photomed.*, 9: 109–112, 1992.
23. Haussler, M. R., Jurutka, P. W., Haussler, C. A., Hsieh, J-C., Thompson, P. D., Remus, L. S., Selznick, S. H., Encinas, C., and Whitfield, G. K. VDR-mediated transactivation: interplay between 1,25(OH)$_2$D$_3$, RXR heterodimerization, transcription (co)activators and polymorphic receptor variants. *In*: A. W. Norman, R. Bouillon, and M. Thomasset (eds.), Vitamin D. Chemistry, Biology and Clinical Applications of the Steroid Hormone, pp. 210–217. University of California, Riverside, CA, 1997.
24. Haussler, M. R., Whitfield, G. K., Haussler, C. A., Hsieh, J. C., Thompson, P. D., Selznick, S. H., Dominguez, C. E., and Jurutka, P. W. The nuclear vitamin D receptor: biological and molecular regulatory properties revealed. *J. Bone Miner. Res.*, 13: 325–349, 1998.
25. Fitzpatrick, T. B. The validity and practicality of sun reaction skin types I through VI. *Arch. Dermatol.*, 124: 869–871, 1988.
26. Gross, C., Eccleshall, T. R., Malloy, P. J., Villa, M. L., Marcus, R., and Feldman, D. The presence of a polymorphism at the translation initiation site of the vitamin D receptor gene is associated with low bone mineral density in postmenopausal Mexican-American women. *J. Bone Miner. Res.*, 11: 1850–1855, 1996.
27. Spector, T. D., Keen, R. W., Arden, N. K., Morrison, N. A., Major, P. J., Nguyen, T. V., Kelly, P. J., Baker, J. R., Sambrook, P. N., Lanchbury, J. S., and Eisman, J. A. Influence of vitamin D receptor genotype on bone mineral density in postmenopausal women: a twin study in Britain. *Br. Med. J.*, 310: 1357–1360, 1995.
28. Altman, D. G. Practical Statistics for Medical Research, pp. 241–265. London: Chapman and Hall, 1991.
29. Flanagan, L., Ethier, S., and Welsh, J. Vitamin D-induced apoptosis in estrogen independent breast cancer cells and tumours. *In*: A. W. Norman, R. Bouillon, and M. Thomasset (eds.), Vitamin D. Chemistry, Biology and Clinical Applications of the Steroid Hormone, pp. 459–460. University of California, Riverside, CA, 1997.
30. Frampton, R. J., Omond, S. A., and Eisman, J. A. Inhibition of human cancer cell growth by 1,25-dihydroxyvitamin D$_3$ metabolites. *Cancer Res.*, 43: 4443–4447, 1983.
31. Skowronski, R. J., Peehl, D. M., and Feldman, D. Vitamin D and prostate cancer: 1,25 dihydroxyvitamin D$_3$ receptors and actions in human prostate cancer cell lines. *Endocrinology*, 132: 1952–1960, 1993.
32. Cross, H. M., Pavelka, M., Slavik, J., and Peterlik, M. Growth control of human colon cancer cells by vitamin D and calcium *in vitro*. *J. Natl. Cancer Inst.*, 84: 1355–1357, 1992.
33. Koeffler, H. P., Hirji, K., and Itri, L. 1,25-Dihydroxyvitamin D$_3$: *in vivo* and *in vitro* effects on human preleukaemic and leukaemic cells. *Cancer Treat. Rep.*, 69: 1399–1407, 1985.
34. Nagakura, K., Abe, E., Suda, T., Hayakawa, M., Nakamura, H., and Tazaki, H. Inhibitory effect of 1α,25-dihydroxyvitamin D$_3$ on the growth of the renal carcinoma cell line. *Kidney Int.*, 29: 834–840, 1986.
35. Beatty, M. M., Lee, E. Y., and Glauert, H. P. Influence of dietary calcium and vitamin D on colon epithelial cell proliferation and 1,2-dimethylhydrazine induced colon carcinogenesis in rats fed high fat diets. *J. Nutr.*, 123: 144–152, 1993.
36. Chida, K., Hashiba, H., Fukushima, M., Suda, T., and Kuroki, T. Inhibition of tumour promotion in mouse skin by 1,25-dihydroxyvitamin D$_3$. *Cancer Res.*, 45: 5426–5430, 1985.
37. Ingles, S. A., Ross, R. K., Yu, M. C., Irvine, R. A., La Pera, G., Haile, R. W., and Coetzee, G. A. Association of prostate cancer risk with genetic polymorphisms in vitamin D receptor and androgen receptor. *J. Natl. Cancer Inst.*, 89: 166–170, 1997.
38. Milde, P., Hauser, U., Simon, T., Mall, G., Ernst, V., Haussler, M., Frosch, P., and Rauterberg, W. Expression of 1,25-dihydroxyvitamin D3 receptors in normal and psoriatic skin. *J. Investig. Dermatol.*, 97: 230–239, 1991.
39. Ingles, S. A., Haile, R. W., Henderson, B. E., Kolonel, L. N., Nakaichi, G., Shi, C. Y., Yu, M. C., Ross, R. K., and Coetzee, G. A. Strength of linkage disequilibrium between two vitamin D receptor markers in five ethnic groups: implications for association studies. *Cancer Epidemiol. Biomark. Prev.*, 6: 93–98, 1997.
40. Reintgen, D. S., Cox, C., Slingluff, C. L. J., and Seigler, H. F. Recurrent malignant melanoma: the identification of prognostic factors to predict survival. *Ann. Plast. Surg.*, 28: 45–49, 1992.
41. Miyamoto, K., Kesterson, R. A., Yamamoto, H., Taketani, Y., Nishiwaki, E., Tatsumi, S., Inoue, Y., Morita, K., Takeda, E., and Pike, J. Structural organisation of the human vitamin D receptor chromosomal gene and its promoter. *Mol. Endocrinol.*, 11: 1165–1179, 1997.
42. Liu, M., Lee, M. H., Bommakanti, M., and Freedman, L. P. Vitamin D$_3$ transcriptionally activates the p21 gene leading to the induced differentiation of the myelomonocytic cell line U937. *Genes Dev.*, 10: 142–152, 1996.
43. Liu, M., Lee, M. H., Cohen, M., Bommakanti, M., and Freedman, L. P. Transcriptional activation of the Cdk inhibitor p21 by vitamin D3 leads to the induced differentiation of the myelomonocytic cell line U937. *Genes Dev.*, 10: 142–153, 1996.
44. Wang, Q. M., Jones, J. B., and Studzinski, G. P. Cyclin-dependent kinase inhibitor p27 as a mediator of the G1-S phase block induced by 1,25-dihydroxyvitamin D$_3$ in HL60 cells. *Cancer Res.*, 56: 264–267, 1996.
45. Verlinden, L., Verstuyf, A., Convents, R., Marcelis, S., Van Camp, M., and Bouillon, R. Action of 1,25(OH)$_2$D$_3$ on the cell cycle genes, cyclin D1, p21 and p27 in MCF-7 cells. *Mol. Cell. Endocrinol.*, 142: 57–65, 1998.
46. Piccinin, S., Doglioni, C., Maestro, R., Vukosavljevic, T., Gasparotto, D., D'Orazi, C., and Boiocchi, M. p16/CDKN2 and CDK4 gene mutations in sporadic melanoma development and progression. *Int. J. Cancer*, 74: 26–30, 1997.
47. Ohta, M., Berd, D., Shimizu, M., Nagai, H., Cotticelli, M. G., Mastrangelo, M., Shields, J. A., Shields, C. L., Croce, C. M., and Huebner, K. Deletion mapping of chromosome region 9p21–p22 surrounding the *CDKN2* locus in melanoma. *Int. J. Cancer*, 65: 762–767, 1996.
48. Platz, A., Hansson, J., Mansson-Brahme, E., Lagerlof, B., Linder, S., Lundqvist, E., Sevigny, P., Inganas, M., and Ringborg, U. Screening of germline mutations in the *CDKN2A* and *CDKN2B* genes in Swedish families with hereditary cutaneous melanoma. *J. Natl. Cancer Inst.*, 89: 697–702, 1997.
49. Soufir, N., Avril, M. F., Chompret, A., Demenais, F., Bombled, J., Spatz, A., Stoppa-Lyonnet, D., Benard, J., and Bressac-de Paillerets, B. Prevalence of p16 and CDK4 germline mutations in 48 melanoma-prone families in France. The French Familial Melanoma Study Group. *Hum. Mol. Genet.*, 7: 209–216, 1998.
50. Tsao, H., Benoit, E., Sober, A. J., Thiele, C., and Haluska, F. G. Novel mutations in the p16/CDKN2A binding region of the *cyclin-dependent kinase-4* gene. *Cancer Res.*, 58: 109–113, 1998.
51. Garland, F. C., Garland, C. F., Gorham, E. D., and Young, J. F. Geographic variation in breast cancer mortality in the United States: a hypothesis involving exposure to solar radiation. *Prev. Med.*, 19: 614–622, 1990.
52. Hanchette, C. L., and Schwartz, G. G. Geographic patterns of prostate cancer mortality. *Cancer (Phila.)*, 70: 2861–2869, 1992.
53. Gorham, E. D., Garland, C. F., and Garland, F. C. Acid haze air pollution and breast and colon cancer mortality in 20 Canadian cities. *Can. J. Public Health*, 80: 96–100, 1989.
54. Autier, P., Dore, J. F., Schifflers, E., Cesari, J. P., Bollaerts, A., Koelmel, K. F., Gefeller, O., Liabeuf, A., Lejeune, F., and Lienard, D. Melanoma and use of sunscreens: an EORTC case-control study in Germany, Belgium, and France. The EORTC Melanoma Cooperative Group. *Int. J. Cancer*, 61: 749–755, 1995.
55. Holly, E. A., Aston, D. A., Cress, R. D., Ahn, D. K., and Kristiansen, J. J. Cutaneous melanoma in women. *Am. J. Epidemiol.*, 141: 923–933, 1995.
56. Pion, I. A., Rigel, D. S., Garfinkel, L., Silverman, M. K., and Kopf, A. W. Occupation and the risk of malignant melanoma. *Cancer (Phila.)*, 75: 637–644, 1995.
How Genetic Algorithms can Improve a Pacemaker Efficiency

Laurent Dumas, Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie, 75252 Paris Cedex 05, France, email@example.com

Linda El Alaoui, Projet Reo, INRIA Rocquencourt, B.P. 105, 78153 Le Chesnay Cedex, France, firstname.lastname@example.org

ABSTRACT

In this paper, we propose the use of Genetic Algorithms as a tool for improving the efficiency of a pacemaker in a defective heart. Such a device is generally used when the electrical activity of the heart is deficient; it consists of electrodes applied at several points on the surface of the heart. By optimizing the positions of these electrodes with respect to a well-chosen criterion, we show the significant gain that can be achieved with this technique compared to a less systematic positioning.

Categories and Subject Descriptors: J.3 [Computer Applications]: Life and medical sciences

General Terms: Algorithms

Keywords: optimization, heart, electrical activity

1. INTRODUCTION

The heart is located between the lungs and consists of four parts, the right and left atria and ventricles. The function of the heart involves pumping blood from the lungs and the body and ejecting it towards the body, allowing the organs to operate. This function is the result of a contraction-relaxation process induced by an electrical impulse moving across the heart. The electrical signal is first induced in the sinus node, the natural pacemaker, then propagates through the atria and reaches the ventricles through the atrioventricular (A-V) node, see Figure 1\(^1\). In the ventricles, the propagation is led by the bundle of His, causing a wavefront which propagates by cell-to-cell activation. In each cell, a depolarization phase occurs, corresponding to the inflow of sodium ions (causing the electrical activation), followed by a plateau phase and then by a repolarization phase corresponding to the outflow of potassium ions. This phenomenon is illustrated in Figure 1 by the representation of the action potential in different types of cardiac cells. The electrical activity of the cell membranes is generally modelled by the so-called bidomain equations [1], in which the current term due to ionic exchanges can be modelled by the FitzHugh-Nagumo model [2, 6]. The electrical conduction of the heart may be defective, causing the heartbeat to be too fast, too slow, or irregular. Some pathologies, for example sinus node dysfunction or bundle branch block, are treated with an artificial pacemaker, which is used to help the heart to recover a quasi-normal electrical activity. A pacemaker consists of a small battery and electrodes transmitting the electrical impulse. Though today's pacemakers give good results, certain questions still arise. How many electrodes should be set? Where should the electrodes be placed? When should the electrodes act? Many experiments have been conducted to answer these questions, see [7] and references therein. As experimental measurements are difficult to obtain, numerical simulations may contribute to a better understanding. Our aim in this paper is to determine the optimal positioning of the electrodes of a pacemaker on a diseased heart. This can be interpreted as an inverse-type optimization problem, which can be solved with optimization tools such as Genetic Algorithms.
Genetic Algorithms have already been used in many other medical applications (for instance, in the heart domain, the classification of ischemic beats [5]) and are well adapted when the cost function is not smooth or comes from a complex simulation, as is the case here.

\(^1\) Figure from Bembook: http://butler.cc.tut.fi/malmivuo/bem/bembook

The paper is organized as follows. In Section 2 we present the bidomain/FitzHugh-Nagumo model used to perform the numerical simulation of the cardiac electrical activity. Section 3 is devoted to the description of the optimization procedure, and in Sections 4 and 5 we present and discuss some numerical results on a simplified test case representative of a left bundle branch block in a modelled human heart. We end the paper with some conclusions in Section 6.

2. MODELLING OF THE HEART ELECTRICAL ACTIVITY

2.1 The bidomain model

At the microscopic level, the cardiac muscle, denoted by $\Omega_H$, is made of two distinct and intricate media: the intra- and extra-cellular media, respectively called $\Omega_{Hi}$ and $\Omega_{He}$, that are separated by a surface membrane $\Gamma_H$ (see Figure 2).

![Figure 2: Simplified view of the heart at macro/microscopic level.](image)

After a homogenization process, the corresponding electrical potentials $\phi_i$ and $\phi_e$ and the transmembrane potential
$$V_m(t, x) = \phi_i(t, x) - \phi_e(t, x) \quad (1)$$
are defined on the entire domain $x \in \Omega_H$ and satisfy the so-called bidomain model [1] on $[0, T] \times \Omega_H$:
$$A_m \left( C_m \partial_t V_m + I_{\text{ion}} \right) - \text{div}(\sigma_i \nabla V_m) = \text{div}(\sigma_i \nabla \phi_e), \quad (2)$$
$$\text{div}(\sigma_i \nabla \phi_i) = -\text{div}(\sigma_e \nabla \phi_e), \quad (3)$$
with the following boundary condition on the heart boundary $\partial \Omega_H$:
$$\sigma_i \nabla \phi_i \cdot n = \sigma_e \nabla \phi_e \cdot n = 0, \quad (4)$$
where $n$ denotes the outward unit normal at $x \in \partial \Omega_H$. Finally, an initial condition is prescribed:
$$V_m(0, x) = V^0_m(x) \quad \text{in } \Omega_H. \quad (5)$$
The current term due to ionic exchanges, $I_{\text{ion}}$, is evaluated with the help of the simple but non-physiological FitzHugh-Nagumo model [2, 6]:
$$I_{\text{ion}} = -\frac{1}{\epsilon}\left(-(V_m - V_r)(V_m - V_s)(V_m - V_a) - u\right), \quad (6)$$
where the auxiliary variable $u$ satisfies the following ODE:
$$\frac{du}{dt} = k(V_m - V_r) - u, \quad (7)$$
and $V_r < V_s < V_a$ represent, respectively, the potential at rest, the threshold, and the activity potential; $\epsilon$ and $k$ are positive coefficients.

2.2 Pathologic case

The pathology we consider here is called left bundle branch block. In such a situation, the electric signal cannot be propagated by the bundle of His in the left ventricle; consequently, the depolarization process occurs with a delay, causing asynchronous contraction-decontraction. In the previous bidomain model, it is simulated by the absence of an initial natural stimulation in the left ventricle in equation (5). In order to help the heart to recover its normal electrical activity, a well-known surgical device, called a pacemaker, is used. It acts through the application of a certain number of electrodes located at the heart surface that are able to give a local electrical impulse. In the previous bidomain model, the electrodes act as a local (in space and time) volumic current source term on the right-hand side of equation (2).
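To illustrate the behavior of the ionic model before turning to the optimization, the following minimal sketch (ours, not from the paper) integrates the space-clamped FitzHugh-Nagumo dynamics of equations (6)-(7) for a single cell, using the parameter values reported in Section 5 and assuming $A_m = C_m = 1$ with no diffusion or external stimulus.

```python
# Minimal sketch: 0D FitzHugh-Nagumo dynamics, equations (6)-(7), for a
# single space-clamped cell (no diffusion, A_m = C_m = 1).
# Parameter values are those reported in Section 5 of the paper.

V_r, V_s, V_a = 0.0, 0.5, 1.0        # rest, threshold, activity potentials (mV)
eps, k = 3.2e-3, 2.5e-2
dt, T = 0.005, 100.0                 # time step and final time (ms)

V, u = 0.6, 0.0                      # start just above threshold to trigger depolarization
V_hist = []
for step in range(int(T / dt)):
    I_ion = -(1.0 / eps) * (-(V - V_r) * (V - V_s) * (V - V_a) - u)  # Eq. (6)
    dV = -I_ion                       # from Eq. (2) without diffusion or stimulus
    du = k * (V - V_r) - u            # Eq. (7)
    V, u = V + dt * dV, u + dt * du   # explicit Euler (the paper uses cvode instead)
    V_hist.append(V)

print(max(V_hist))                    # peak close to the activity potential V_a
```

Starting below the threshold $V_s$ instead, the trajectory relaxes back to $V_r$, which is the excitable behavior the wavefront propagation relies on.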
3. OPTIMIZATION PRINCIPLES
In order to improve the efficiency of a pacemaker, the idea is to optimize the positioning of its electrodes. An error-type cost function between the reference healthy case and the pathologic case with a given position of electrodes has to be defined. The optimization is then achieved by using Genetic Algorithms.

3.1 Definition of an appropriate cost function
The first cost function that has been tested is the quadratic norm in space and time of the difference between the transmembrane potential $V_m$ of a diseased heart with a given position of electrodes and its target value $V_{m,\text{target}}$ computed for the healthy case:
$$J_1 = \int_0^T \int_{\Omega_H} |V_m - V_{m,\text{target}}|^2 \, dx \, dt. \quad (8)$$
Actually, this first and natural cost function does not give satisfactory results, for two reasons. First, the local electrical activity of the electrodes themselves represents a major obstacle to making $V_m$ converge to $V_{m,\text{target}}$ on the whole domain in space and time. Moreover, the right criterion for recovering a normal electrical activity is rather to reduce the delay of a characteristic depolarization time. A new and better cost function is thus introduced, expressed as
$$J_2 = t_d - t_{d,\text{target}}, \quad (9)$$
where $t_d$ represents the first time for which 95 per cent of the whole heart is depolarized:
$$t_d = \inf \{ t \geq 0, \quad \text{Volume}(\Omega_t) \geq 0.95\, \text{Volume}(\Omega_H) \},$$
with:
$$\Omega_t = \{ x \in \Omega_H, \quad V_m(t, x) > V_s \}.$$
As previously, $t_{d,\text{target}}$ denotes the same value for the corresponding healthy heart.

3.2 Optimization by Genetic Algorithms
The cost functions $J_1$ and $J_2$ described above are computed after solving a complex set of coupled PDEs and ODEs with strong three-dimensional effects. Moreover, due to the complexity of the heart geometry, they display a non-smooth behavior with many local minima. For all these reasons, the minimization of $J_1$ and $J_2$ is achieved by using evolutionary algorithms, and more precisely Genetic Algorithms. Inspired by the Darwinian theory of the evolution of species, Genetic Algorithms [3] have been applied in the last decade in various applicative domains, including the biomedical field, ranging for instance from the aerodynamic optimization of a car shape [4] to the classification of ischemic heart beats [5]. In the present case, a classical real-coded Genetic Algorithm is used to optimize the positioning of one or two electrodes of a pacemaker on the internal boundary surface of the heart, also called the endocardium. A mapping from the endocardium, or a part of it, to a simple plane domain, for instance a rectangular domain of $\mathbb{R}^2$, has first been defined in order to simplify the parametric search space. The selection process used in the Genetic Algorithm is a proportionate roulette wheel whose slots are sized according to the rank of each element in the population. The crossover of two elements is obtained by a barycentric combination with random and independent coefficients in each coordinate, whereas the mutation of one element is of non-uniform type. Finally, a one-elitism principle is added in order to ensure that the best element of the previous generation is kept in the population. A sketch of these operators is given below.
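The following Python sketch illustrates the operators just described (rank-based roulette selection, barycentric crossover, non-uniform mutation and one-elitism) on a generic cost function; the stand-in cost, population size and generation count mimic the values of Section 5, and every implementation detail not stated in the paper is our own assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_roulette(costs):
    """Proportionate roulette wheel whose slots are sized by rank:
    the best element (lowest cost) gets the largest slot."""
    order = np.argsort(costs)                    # best first
    ranks = np.empty(len(costs))
    ranks[order] = np.arange(len(costs), 0, -1)  # best -> n, worst -> 1
    return rng.choice(len(costs), p=ranks / ranks.sum())

def crossover(a, b):
    """Barycentric combination with an independent random
    coefficient in each coordinate."""
    t = rng.random(a.shape)
    return t * a + (1 - t) * b

def mutate(x, gen, n_gen, lo, hi, p_mut=0.6):
    """Non-uniform mutation: the perturbation amplitude
    shrinks as the generation counter grows."""
    if rng.random() < p_mut:
        amp = (hi - lo) * (1 - gen / n_gen)
        x = np.clip(x + rng.uniform(-amp, amp, x.shape), lo, hi)
    return x

def ga(J2, dim=2, n_pop=40, n_gen=10, lo=0.0, hi=1.0, p_cross=0.9):
    """Real-coded GA with one-elitism; J2 maps a point of the
    rectangular parameter domain to the cost of equation (9)."""
    pop = rng.uniform(lo, hi, (n_pop, dim))
    for gen in range(n_gen):
        costs = np.array([J2(x) for x in pop])
        elite = pop[np.argmin(costs)].copy()     # one-elitism
        children = []
        for _ in range(n_pop - 1):
            a, b = pop[rank_roulette(costs)], pop[rank_roulette(costs)]
            c = crossover(a, b) if rng.random() < p_cross else a.copy()
            children.append(mutate(c, gen, n_gen, lo, hi))
        pop = np.vstack([elite] + children)
    costs = np.array([J2(x) for x in pop])
    return pop[np.argmin(costs)], costs.min()

# Toy usage with a smooth stand-in for the expensive bidomain-based cost:
best, val = ga(lambda x: np.sum((x - 0.3) ** 2))
print(best, val)
```

In the paper, the call to `J2` hides one full bidomain simulation per candidate electrode position, which is why so few generations and individuals are used.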
4. DESCRIPTION OF THE TEST CASE
The simulations are performed on a simplified geometry which contains the ventricles only, see Figure 3.

Figure 3: A simplified heart geometry $\Omega_H$.

The domain, close to a human heart, is analytically defined through its boundary, made of the union of four truncated ellipsoids:
$$\left( \frac{x}{a_{i,L}} \right)^2 + \left( \frac{y}{b_{i,L}} \right)^2 + \left( \frac{z}{c_{i,L}} \right)^2 = 1, \quad \left( \frac{x}{a_L} \right)^2 + \left( \frac{y}{b_L} \right)^2 + \left( \frac{z}{c_L} \right)^2 = 1,$$
with $\{a_{i,L}, b_{i,L}, c_{i,L}, a_L, b_L, c_L\} = \{2.72, 2.72, 5.92, 4, 4, 7.2\}$ cm for the left ventricle internal and external boundaries respectively, and
$$\left( \frac{x}{a_{i,R}} \right)^2 + \left( \frac{y}{b_{i,R}} \right)^2 + \left( \frac{z}{c_{i,R}} \right)^2 = 1, \quad \left( \frac{x}{a_R} \right)^2 + \left( \frac{y}{b_R} \right)^2 + \left( \frac{z}{c_R} \right)^2 = 1,$$
with $\{a_{i,R}, b_{i,R}, c_{i,R}, a_R, b_R, c_R\} = \{7.36, 3.36, 6.2, 8, 4, 6.84\}$ cm for the right ventricle. All these ellipsoids are restricted to the half space $z \leq 2.75$. In a real surgical case, the electrodes can be placed in the atria and/or in the ventricles. As we only consider here the heart ventricles, we seek the best positioning of the electrodes on the internal surface of the left ventricle. The chosen cost function to minimize, $J_2$, is defined in (9). Two optimization processes are presented in the next section, depending on the allowable number of electrodes, respectively one or two. Note that the second computation has been carried out for comparison with the first case, regardless of the surgical constraints needed to handle it. In the following section, the numerical results obtained on this test case using the optimization principles presented in Section 3 are described.

5. NUMERICAL RESULTS
We choose the conductivities in (2) and (3) such that the anisotropy of the fibers in the myocardium is taken into account, namely $\sigma_i = \alpha_i^f\, d_f \otimes d_f + \alpha_i^e (I - d_f \otimes d_f)$ and $\sigma_e = \alpha_e^f\, d_f \otimes d_f + \alpha_e^e (I - d_f \otimes d_f)$, where $d_f$ is the direction of the fibers, $I$ the identity matrix in $\mathbb{R}^3$, and $\alpha_i^f = 5 \cdot 10^{-3}$, $\alpha_i^e = 1.5 \cdot 10^{-1}$, $\alpha_e^f = 1 \cdot 10^{-1}$ and $\alpha_e^e = 7.5 \cdot 10^{-3}$. The parameters in (2)–(7) are chosen as follows: $A_m = C_m = 1$, $V_r = 0$ mV, $V_s = 0.5$ mV, $V_a = 1$ mV, $\epsilon = 3.2 \cdot 10^{-3}$ and $k = 2.5 \cdot 10^{-2}$. The intensity of the initial stimulation equals 0.5 mV during 10 ms. The artificial stimulations have the same intensity as the initial stimulation and hold during 40 ms. As we are interested in the depolarization phase only, the final time of the computations is equal to 100 ms, whereas the total duration of the depolarization–repolarization process is 300 ms. The domain $\Omega_H$ is discretized with tetrahedra, for a total number of nodes equal to 12921. The ionic current is solved by the cvode\(^2\) solver, an appropriate solver for stiff nonlinear systems of ODEs. The bidomain problem (2)–(5) is approximated by a piecewise linear finite element scheme in space and by a second-order backward differences scheme in time with a time step equal to 0.5 ms. The simulations are done with the C++ library LifeV\(^3\). The evaluation of the depolarization time $t_d$ on the discrete solution is sketched below.

\(^2\)http://llnl.gov/casc/sundials
\(^3\)http://www.lifev.org/
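As a small illustration of how the criterion (9) can be evaluated on the discrete solution, the following Python sketch computes $t_d$ from nodal snapshots of $V_m$; the lumped nodal volume weights, array names and threshold handling are our own assumptions, not details given in the paper.

```python
import numpy as np

def depolarization_time(V_snapshots, t, node_volumes, V_s=0.5, frac=0.95):
    """First time at which the depolarized region Omega_t = {V_m > V_s}
    occupies at least `frac` of the heart volume (definition of t_d).

    V_snapshots : array (n_times, n_nodes) of transmembrane potentials
    t           : array (n_times,) of time instants (ms)
    node_volumes: array (n_nodes,) lumped volume attached to each node
    """
    total = node_volumes.sum()
    for ti, V in zip(t, V_snapshots):
        if node_volumes[V > V_s].sum() >= frac * total:
            return ti
    return np.inf  # criterion never met within the simulated window

# Toy usage: a wavefront sweeping a 1D row of equal-volume nodes.
t = np.arange(0.0, 100.0, 0.5)            # 0.5 ms time step, as in the paper
x = np.linspace(0.0, 1.0, 200)
V = (x[None, :] < t[:, None] / 60.0).astype(float)  # fully swept at t = 60 ms
print(depolarization_time(V, t, np.full(200, 1.0)))  # ~57 ms
```

The cost $J_2$ is then simply this value minus the same quantity computed on the healthy reference simulation.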
We take 40, respectively 80, individuals in the GA population for the optimization of the positioning of one, respectively two, electrodes. In both cases, the crossover probability and the mutation probability are chosen equal to 0.9 and 0.6, respectively. A number of 10 generations is then needed to achieve a near-optimal solution. A very good reproducibility of the obtained optimal solution has been observed after performing a large number of GA runs (more than 10). The optimal positions given below correspond to the mean values over all these runs, but can also be obtained after any single run. On the contrary, the convergence history of the GA, not plotted here, can differ from one run to another depending on the quality of the first random generation. In the presence of one electrode, the mean optimal positioning corresponds to the value
\[(x, y, z) = (2.54, -0.024, 2.12),\]
and in the presence of two electrodes, the mean optimal positioning is:
\[(x_1, y_1, z_1) = (1.64, 2.06, -1.47),\]
\[(x_2, y_2, z_2) = (1.60, -2.08, -1.59).\]
Note that in the second case, the two electrodes are positioned symmetrically with respect to $y = 0$, which is not surprising given the analytic description of the heart surface boundary. Another interesting observation for clinical purposes is that all the optimal positions are localized on the side opposite to the bundle of His. In the case of a healthy heart, we obtain $t_{d,\text{target}} = 28.5$ ms, which means that at this time 95% of the cells are depolarized, whereas in the pathologic case with no electrodes, only 52.4% are depolarized at this time and $t_d = 98$ ms. Figures 4 and 5 show the wavefront at $t_{d,\text{target}}$ for the healthy case, the pathologic case, and the pathologic case treated with one and two optimally located electrodes, respectively. In the presence of one electrode, the minimal value of $t_d$ reduces to 45 ms, namely $J_2 = 16.5$ ms. Note in this case that at $t_{d,\text{target}}$, 71.5% of the cells are depolarized. In the presence of two electrodes, the minimal obtained value is $t_d = 36.5$ ms, namely $J_2 = 8$ ms. In this case, 80.6% of the cells are depolarized at $t_{d,\text{target}}$. The corresponding isolines of $V_m$ depicted in Figures 4 and 5 at $t_{d,\text{target}}$ clearly corroborate these observations and show that the presence of one or two well-positioned electrodes reduces the delay in the depolarization of the whole heart. When one electrode acts, the optimal value of $J_2$, namely 16.5 ms, has to be compared with possible values ranging between 60 and 70 ms when the electrode positioning is done randomly. Similarly, when two electrodes act, the best value of the cost function $J_2$ after the optimization is equal to 13 ms, whereas this function can reach values higher than 50 ms for a random positioning. This last observation can be summed up by saying that an optimal positioning of the electrodes (either one or two) can reduce the delay in the characteristic depolarization time by a factor of up to 3 compared to a random positioning.

Figure 6: Comparison of potential profiles at a given point in the healthy, pathologic, and pathologic with one and two electrodes cases.

Figure 6 gives the potential profiles at the particular point $(x, y, z) = (-2.04, -0.16, -4.19)$ in the healthy and pathologic cases and in the presence of one and two optimally located electrodes. We can observe the delay in the activation of the potential at that point and the effect of the two electrodes in bridging this delay.
The effect of the two electrodes is observed during the depolarization phase (between 30 ms and 100 ms, depending on the case), and also during the repolarization phase (between 175 ms and 220 ms). At this particular point, the gain obtained when passing from one to two electrodes is rather significant. Finally, it is interesting to observe that the choice of the current cost function $J_2$ is also efficient for recovering a good electrocardiogram. The complete computation of the ECG comes from a coupling, not detailed here (see [8]), between the previous bidomain model and a torso domain, considered as a passive conductor.

6. CONCLUSIONS
In this paper we have considered the problem of positioning the electrodes of a pacemaker in a diseased heart. To achieve this efficiently, we have proposed a numerical approach based on the use of a cost function linked to the depolarization of the heart cells, which is the major process as it controls the contraction of the heart. The problem can then be treated as an inverse optimization problem and has been solved here by using a Genetic Algorithm. Numerical results clearly show the large influence of the positioning of one or more electrodes on the quality of the recovery of the electrical activity of the heart and, consequently, the crucial need to perform the electrode positioning in a systematic way rather than randomly.

6.1 Acknowledgments
The authors would like to thank J.F. Gerbeau, M.A. Fernández and M. Boulakia from the INRIA REO team for fruitful discussions.

7. REFERENCES
[1] C.S. Henriquez. Simulating the electrical behavior of cardiac tissue using the bidomain model. *Critical Reviews in Biomedical Engineering*, 21(1):1–77, 1993.
[2] R. FitzHugh. Impulses and physiological states in theoretical models of nerve membrane. *Biophys. J.*, 1:445–465, 1961.
[3] D.E. Goldberg. *Genetic Algorithms in Search, Optimization, and Machine Learning*. Addison-Wesley, 1989.
[4] L. Dumas, V. Herbert, and F. Muyl. Comparison of global optimization methods for drag reduction in the automotive industry. *Lecture Notes in Computer Science*, 3483:948–957, 2005.
[5] Y. Goletsis, C. Papaloukas, D.I. Fotiadis, A. Likas, and L.K. Michalis. Automated ischemic beat classification using genetic algorithms and multicriteria decision analysis. *IEEE Transactions on Biomedical Engineering*, 2004.
[6] J.S. Nagumo, S. Arimoto, and S. Yoshizawa. An active pulse transmission line simulating nerve axon. *Proc. IRE*, 50:2061–2071, 1962.
[7] M. Peñicka, J. Bartunek, B. De Bruyne, M. Vanderheyden, M. Goethals, M. De Zutter, P. Brugada, and P. Geelen. Improvement of left ventricular function after cardiac resynchronisation therapy is predicted by tissue Doppler imaging echocardiography. *Journal of the American Heart Association*, 2004.
[8] M. Boulakia, M.A. Fernández, J.-F. Gerbeau, and N. Zemzemi. Towards the numerical simulation of electrocardiograms. *Functional Imaging and Modeling of the Heart*, Lecture Notes in Computer Science 4466, Springer-Verlag, pp. 240–249, 2007.
Exciton Dissociation and Stark Effect in the Carbon Nanotube Photocurrent Spectrum

Aditya D. Mohite, Prasanth Gopinath, Hemant M. Shah, and Bruce W. Alphenaar*

Department of Electrical and Computer Engineering, University of Louisville, Louisville, Kentucky 40292

Received September 4, 2007; Revised Manuscript Received November 2, 2007

ABSTRACT
The field-dependent photocurrent spectrum of individual carbon nanotubes is measured using a displacement photocurrent technique. A series of peaks is observed in the photocurrent corresponding to both excitonic and free carrier transitions. The photocurrent peak corresponding to the ground state exciton increases by a factor of 200 beyond a critical electric field, and shows both red and blue shifts depending on the field regime. This provides evidence for field-induced mixing between excitonic and free carrier states.

Quantum confinement and electron–electron interactions produce a unique set of optical phenomena in semiconducting carbon nanotubes not observed in three-dimensional semiconductors. Carbon nanotubes have a series of van Hove singularities in the electronic density of states that allow for optical excitation across a spectrum of transition energies. The band-to-band transition energies are dependent on the nanotube diameter and chirality, and have been used for nanotube identification. For the most part, however, optical absorption in nanotubes is not expected to occur via band-to-band transitions, but through the creation of bound electron–hole pairs, or excitons. This surprising prediction has been confirmed experimentally through the observation of phonon sidebands, two-photon luminescence spectroscopy, and later through photocurrent spectroscopy. The oscillator strength for exciton formation is typically many times higher than that for the formation of free electron–hole pairs. The excitonic optical transition energy differs from the band-to-band transition energy by the exciton binding energy, which can be as high as 500 meV. Recently, some effort has gone toward understanding the influence of electric field on the nanotube optical response. In three-dimensional semiconductors, an electric field is known to cause a Stark shift in the absorption maximum and a modulation in the absorption coefficient. For nanotubes, Perebeinos et al. predicted a strong modulation of the absorption spectra with increasing electric field. The exciton formation rate due to impact ionization is also exponentially dependent on the electric field, and increases dramatically for potentials above the optical phonon energy. Importantly, the electric field also provides a mechanism by which the excitonic states can be dissociated into free carriers (similar to the field-induced ionization observed in atomic systems). Field-induced exciton dissociation should have a measurable effect on the nanotube photocurrent. At zero electric field, bound charge carriers cannot contribute to the photocurrent unless they dissociate into a free carrier state. Electric field provides a dissociation mechanism that effectively "turns on" the ground-state excitonic transition in the photocurrent spectrum. In this way, free and bound charge transitions in the optical spectrum can be distinguished, and the influence of electric field on either type of transition can be explored. Here, we describe the results of an innovative photocurrent measurement technique that allows measurement of the excitation spectrum of individual nanotubes while applying large electric fields.
Figure 1 shows our measurement setup. Individual single-wall nanotubes (SWNTs) are grown by chemical vapor deposition on an oxidized p+ silicon substrate (oxide thickness is 100 nm). Atomic force microscope imaging (Figure 1b) shows that the nanotube density is 3–6 SWNTs per 25 × 25 μm² area, with an average nanotube diameter of 1.3 nm. A 25 nm thick layer of indium tin oxide (ITO) is deposited by electron beam evaporation, creating a transparent Schottky contact to the nanotubes. As shown in Figure 1a, the final device structure is a capacitor with a heavily doped silicon back electrode, a silicon dioxide dielectric, and an ITO top electrode. Applying a dc bias across the capacitor creates an electric field F perpendicular to the nanotube axis. We emphasize that we do not apply any bias across the length of the nanotube (parallel to the nanotube axis) as is done in a standard photocurrent... built-in Schottky barrier potential. For a p-type nanotube, holes will drift into the ITO, while electrons will drift toward the oxide interface. The charge separation produces an ac displacement current across the ITO/Si capacitor, which can be measured with a lock-in amplifier synchronized to the laser repetition rate. The displacement photocurrent signal requires optical excitation of charge carriers followed by physical separation of the excited charge. The technique is thus sensitive to optical excitations in which freely mobile charge carriers are created. Another advantage is that it is straightforward to characterize individual nanotubes by increasing the spacing between nanotubes on the sample surface so that it is larger than the laser spot size. Application of an electric field increases the band bending across the carbon nanotube and thereby increases the carrier capture efficiency. Because of the capacitor structure, it is possible to apply large electric fields without generating appreciable dark current. We can make a rough estimate of the electric field across the nanotube for an applied bias $V_{dc}$ by considering the nanotube as an insulator with a dielectric constant $\epsilon_{nt} = 3.3$ and thickness $T_{nt} = 1.3$ nm lying on the silicon dioxide insulator with dielectric constant $\epsilon_{ox} = 3.9$ and thickness $T_{ox} = 100$ nm. The electric field across the nanotube is then given by

$$F_{nt} = \frac{V_{dc}\,\epsilon_{ox}}{T_{nt}\,\epsilon_{ox} + T_{ox}\,\epsilon_{nt}} \quad (1)$$

For a maximum applied bias of 32 V, this gives $F_{nt} = 3.7 \times 10^8$ V/m and a voltage drop across the width of the nanotube of $\gamma V_{dc} = 0.48$ V, where $\gamma = 0.015$ is the fraction of the applied bias that drops across the nanotube. While this is only a rough estimate, it shows that band bending large enough to influence charge transport across the nanotube/ITO interface can easily exist. There are a number of uncertainties in this equation, including $\epsilon_{nt}$, which can range from 2.6 to 3.3, and the exact nanotube diameter, which could be off by 0.1 nm or more.
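A short Python sketch of the series-dielectric estimate in eq 1 reproduces the numbers quoted above; the function name and the bias value swept are our own choices.

```python
# Series-dielectric estimate of the field across the nanotube, eq 1.
def nanotube_field(V_dc, eps_nt=3.3, eps_ox=3.9, T_nt=1.3e-9, T_ox=100e-9):
    """Field (V/m) across a nanotube of thickness T_nt lying on an
    oxide of thickness T_ox, for a bias V_dc across the capacitor."""
    F_nt = V_dc * eps_ox / (T_nt * eps_ox + T_ox * eps_nt)
    gamma = F_nt * T_nt / V_dc   # fraction of the bias dropped on the tube
    return F_nt, gamma

F, g = nanotube_field(32.0)
print(f"F_nt = {F:.2e} V/m, gamma = {g:.3f}, drop = {g * 32:.2f} V")
# -> roughly 3.7e8 V/m, gamma ~ 0.015 and 0.48 V, as quoted in the text
```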
Figure 3a shows the photocurrent measured for a typical nanotube at the ITO/oxide interface with an applied bias of $V_{dc} = 20$ V. A number of peaks are observed as a function of laser excitation energy. The diameter of the laser spot is approximately 5 $\mu$m, so that, on average, the laser illuminates less than one nanotube. Consequently, the photocurrent is negligible for the vast majority of the sample area: to obtain the spectrum shown here, the laser spot is first painstakingly scanned along the sample surface until a nanotube is found. Evidence that we are measuring an individual nanotube is provided by the polarization dependence of the photocurrent, shown in Figure 3b. Each of the four main peaks observed in the photocurrent shows a strong polarization dependence, and all are maximized at the same polarization angle. Comparison with absorbance spectroscopy measurements made on similarly prepared nanotube films shows that the two highest-magnitude peaks in Figure 3a correspond to the lowest energy $E_{11}$ and next higher energy $E_{22}$ excitonic transitions in the absorbance spectrum.

Figure 3. (a) Photocurrent versus excitation energy for an individual nanotube measured with 20 V dc bias across the capacitor. (b) Polarization angle dependence for the four largest photocurrent peaks.

Figure 4. Photocurrent spectra versus excitation energy measured near the $E_{22}$ excitonic transition for bias ranging from 0 to 20 V. Curves are offset for clarity.

The photocurrent measured near the $E_{22}$ transition is shown in detail in Figure 4 for a series of dc biases ranging from 0 to 20 V (corresponding to electric fields across the nanotube ranging from 0 to 232 MV/m). A dominant peak is observed at around 1.28 eV, together with a satellite peak 185 meV above the main resonance. As has been described in the literature,\textsuperscript{7} a satellite peak can arise from exciton–phonon coupling of the optically active exciton with the dipole-forbidden dark exciton. The LO phonon mode associated with C–C bond stretching mixes much more strongly with the light and dark excitonic states than with the free carrier state. This provides further evidence that the main peak corresponds to an excitonic state. Both peaks show a red shift with increasing electric field. Figure 5b shows the main $E_{22}$ peak position as a function of electric field, and, as indicated by the solid line, a quadratic field dependence fits the data reasonably well. A quadratic field dependence is also observed in the quantum confined Stark effect for semiconductor quantum wells\textsuperscript{20,21} (field perpendicular to the well) and, for nanotubes, has been predicted for electric field parallel to the nanotube axis.\textsuperscript{13} The red shift is expected as long as mixing with the lowest energy band-to-band transition is not too strong, in which case a blue shift is possible. Figure 5a shows the magnitude of the photocurrent (normalized with respect to the zero-field value) for the main resonance as a function of electric field. The photocurrent increases approximately linearly with electric field, suggesting that there is no appreciable barrier for the transmission of the charge into the ITO contact following photoexcitation of the carriers. It is expected that the $E_{22}$ exciton decays rapidly into the lower energy continuum states, where the charge can move freely through the sample.\textsuperscript{7} Assuming ohmic conduction, the slope of the field dependence in Figure 5a is $\sigma A/I_0$, where $\sigma$ is the transport conductivity, $A$ is the cross-sectional area of the photoexcited region of the nanotube, and $I_0 = 5.7$ pA is the peak height at zero applied field. (The built-in potential at the nanotube/ITO interface $V_0$ allows optically excited carriers to travel into the contact, even at zero bias.)
$A$ is given by the length of the nanotube illuminated by the laser (approximated by the laser spot diameter) multiplied by the nanotube diameter, or $(5 \times 10^{-6}\ \text{m})(1.3 \times 10^{-9}\ \text{m}) = 6.5 \times 10^{-15}\ \text{m}^2$. This gives a conductivity of $\sigma = 8.25 \times 10^{-4}\ \Omega^{-1}\ \text{m}^{-1}$. A very different field dependence is observed in the $E_{11}$ excitonic regime. Figure 6a shows the photocurrent measured in the regime of the $E_{11}$ exciton for a range of applied biases. At low bias, only a single peak is observed near 0.88 eV. At higher bias, a second peak emerges near 0.61 eV.

Figure 6. (a) Photocurrent versus excitation energy measured near the $E_{11}$ exciton transition for bias ranging from 0 to 10 V. The curves are offset for clarity. The apparent splitting in the free carrier peak at 10 V is not reproduced in other devices. (b) Normalized photocurrent versus electric field of the excitonic and free carrier peaks.

The magnitude of the lower energy peak increases with increasing bias and eventually overshadows the higher energy peak. Figure 6b shows the normalized photocurrent measured for both the lower and higher energy peaks. The higher energy peak changes little with applied field, while the lower energy peak shows a large increase. In addition, the lower energy peak is accompanied by a phonon satellite peak approximately 185 meV higher in energy than the main peak position. (This is similar to the exciton–phonon satellite peak observed in the $E_{22}$ spectrum.) This suggests that the lower energy peak is the $E_{11}$ excitonic state, while the higher energy peak is the ground-state free carrier transition. (This assignment also agrees with absorption measurements made on carbon nanotube films.\textsuperscript{10}) A similar peak structure was observed in four different semiconducting nanotubes. We can extract the $E_{11}$ exciton binding energy by taking the energy difference between the excitonic and band-to-band photocurrent peaks. For the spectra in Figure 6, this gives 0.274 eV, while binding energies ranged from 0.270 to 0.300 eV for the four nanotubes measured. These values agree with theoretical predictions, assuming a nanotube diameter of 1.3 nm and a dielectric constant $\epsilon_{\text{nt}} = 3.3$.\textsuperscript{11,13,22} The field dependence of the $E_{11}$ excitonic transition can be understood using a recently described field-enhanced tunneling model,\textsuperscript{10,23} which assumes a constant field across the width of the nanotube. As shown in Figure 2, the applied bias $V_{\text{dc}}$ brings the energies of the free carrier and bound carrier states into alignment. Bound carriers can then dissociate into free carriers by tunneling into the continuum through the barrier created by the exciton binding energy. Increasing the bias acts to reduce the tunnel barrier width and consequently increases the tunneling rate. In this case, the normalized photocurrent is given by

$$\frac{I_F}{I_0} = \exp\!\left[\frac{4}{3}\,\frac{\sqrt{2m^*}}{q\hbar}\,E_b^{3/2}\,T_{\text{nt}}\,\frac{\gamma}{V_0}\cdot\frac{\gamma V_{\text{dc}}}{V_0 + \gamma V_{\text{dc}}}\right] = \exp\!\left[\frac{a}{1 + b/V_{\text{dc}}}\right] \quad (2)$$

where $a = \frac{4}{3}\frac{\sqrt{2m^*}}{q\hbar}\,E_b^{3/2}\,T_{\text{nt}}\,\gamma/V_0$ and $b = V_0/\gamma$.
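As an illustration, the single-parameter form of eq 2 can be fit to the normalized peak heights in a few lines of Python; the synthetic data below merely stand in for the measured points of Figure 6b, and `scipy.optimize.curve_fit` is one conventional choice of fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def tunneling_model(V_dc, b, prefactor=1.46):
    """Normalized photocurrent of eq 2 with a = prefactor * gamma / V_0
    = prefactor / b, leaving the single fitting parameter b = V_0/gamma."""
    a = prefactor / b
    return np.exp(a / (1.0 + b / V_dc))

# Synthetic stand-in for the measured peak heights of Figure 6b,
# generated with b = 0.48 V plus a little noise.
V = np.linspace(1.0, 10.0, 10)
rng = np.random.default_rng(1)
I_norm = tunneling_model(V, 0.48) * (1 + 0.05 * rng.standard_normal(V.size))

(b_fit,), _ = curve_fit(tunneling_model, V, I_norm, p0=[0.5])
gamma = 0.015                        # from eq 1
print(f"b = {b_fit:.2f} V  ->  V_0 = {gamma * b_fit * 1e3:.1f} mV")
```

Since `p0` has length one, only `b` is fitted while the numerical prefactor computed in the next paragraph stays fixed, mirroring the one-parameter fit described in the text.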
Equation 2 can be simplified further by plugging in the exciton binding energy $E_b = 0.3$ eV (given by the energy separation between the excitonic and free carrier peaks in Figure 6a), the nanotube thickness $T_{\text{nt}} = 1.3$ nm, and the effective mass $m^*$ (taken to be the free electron mass $m_0$). This gives $a = 1.46\,\gamma/V_0 = 1.46/b$, leaving eq 2 with only one fitting parameter, $b$. The solid line in Figure 6b is a fit of eq 2 to the experimental data for $b = 0.48$. It is seen that the tunneling model provides a good description of the $E_{11}$ exciton field dependence in the bias range 0–10 V (when plotting the data, the bias has been converted to electric field using eq 1). Using $b = 0.48$ from eq 2 and $\gamma = 0.015$ from eq 1 gives $V_0 = 7.3$ mV, which is not unreasonable for the built-in potential at the ITO/nanotube interface. For higher electric fields, the tunneling model predicts that the photocurrent should saturate; however, this behavior is not observed. Figure 7 shows the photocurrent spectra near the $E_{11}$ excitonic transition for biases ranging from 10 to 25 V. In Figure 8a, the magnitude of the excitonic peak is plotted as a function of electric field. At field strengths of 150 MV/m, the exciton peak increases by almost an order of magnitude. No similar increase is observed in the free carrier peak, which becomes dwarfed by the excitonic peak. The signal reaches a maximum at a field of 200 MV/m, and then decreases somewhat at higher fields. A clue to understanding this behavior is provided by the field dependence of the $E_{11}$ peak position, shown in Figure 8b. For the low field range of 0–120 MV/m, the peak position shifts to lower energy and is well described by a quadratic field dependence. The $E_{11}$ and $E_{22}$ peaks show a similar field dependence in this regime; however, the $E_{11}$ Stark shift is substantially larger than the $E_{22}$ Stark shift (in agreement with predictions for electric field parallel to the nanotube axis\textsuperscript{13}).

Figure 8. (a) Normalized photocurrent and (b) excitonic peak position versus electric field for the $E_{11}$ excitonic peak in the high field regime. Quadratic fits to the peak position are shown.

At 120 MV/m, just when the peak magnitude is observed to increase sharply, the Stark shift changes direction from red to blue. A second semiconducting nanotube measured in the high field regime showed similar behavior. The data suggest that, at some critical electric field, increased mixing occurs between the excitonic and free carrier states. This would lead to a large increase in the photocurrent, as well as a changeover to a blue shift, as the Stark shift of the continuum states becomes increasingly dominant. Perebeinos et al. predicted mixing between the excitonic and free carrier states for electric field parallel to the nanotube; however, the perpendicular field case that we measure here has yet to be analyzed theoretically. We also note that our data look similar to results observed in field-enhanced ionization of atomic systems, in which level crossings lead to a rapid increase and then decrease of the ionization rate. In conclusion, the influence of a perpendicular electric field on the excitonic and free carrier transitions in carbon nanotubes is explored using a photocurrent technique. As observed in nanotube films, electric field causes the dissociation of excitons through field-assisted tunneling into the free carrier states.
An order of magnitude increase in the excitonic peak is observed beyond a critical electric field, at which point the Stark shift also changes sign. This provides evidence for field-induced mixing of excitonic and free carrier states in carbon nanotubes.

Acknowledgment. The authors thank R.W. Cohn and J. Kielkopf for valuable discussions. Funding was provided by ONR (No. N00014-06-1-0228) and NASA (No. NCC 5-571).

References
(1) Dresselhaus, M.; Dresselhaus, G.; Avouris, P., Eds. \textit{Carbon Nanotubes: Synthesis, Structure, Properties and Applications}; Springer: Berlin, 2001.
(2) Kataura, H.; Kumazawa, Y.; Maniwa, Y.; Umezu, I.; Suzuki, S.; Ohtsuka, Y.; Achiba, Y. \textit{Synth. Met.} \textbf{1999}, \textit{103}, 2555.
(3) Ando, T. \textit{J. Phys. Soc. Jpn.} \textbf{1997}, \textit{66}, 1066.
(4) Avouris, Ph. \textit{MRS Bull.} \textbf{2004}, \textit{29}, 403.
(5) Spataru, C. D.; Ismail-Beigi, S.; Benedict, L. X.; Louie, S. G. \textit{Phys. Rev. Lett.} \textbf{2004}, \textit{92}, 077402.
(6) Korovyanko, O. J.; Sheng, C.-X.; Vardeny, Z. V.; Dalton, A. B.; Baughman, R. H. \textit{Phys. Rev. Lett.} \textbf{2004}, \textit{92}, 174303.
(7) Freitag, M.; Martin, Y.; Misewich, J. A.; Martel, R.; Avouris, Ph. \textit{Nano Lett.} \textbf{2003}, \textit{3}, 1067.
(8) Wang, F.; Dukovic, G.; Brus, L. E.; Heinz, T. F. \textit{Science} \textbf{2005}, \textit{308}, 838.
(9) Maultzsch, J.; Pomraenke, R.; Reich, S.; Chang, E.; Prezzi, D.; Ruini, A.; Molinari, E.; Strano, M.; Thomsen, C.; Lienau, C. \textit{Phys. Rev. B} \textbf{2005}, \textit{72}, 241402.
(10) Mohite, A.; Lin, J.-T.; Sumanasekera, G.; Alphenaar, B. W. \textit{Nano Lett.} \textbf{2006}, \textit{6} (7), 1369.
(11) Perebeinos, V.; Tersoff, J.; Avouris, P. \textit{Phys. Rev. Lett.} \textbf{2004}, \textit{92}, 257402.
(12) Keldysh, L. V. \textit{Zh. Eksp. Teor. Fiz.} \textbf{1958}, \textit{34}, 1138; \textit{Sov. Phys. JETP} \textbf{1958}, \textit{7}, 788.
(13) Perebeinos, V.; Avouris, P. \textit{Nano Lett.} \textbf{2007}, \textit{7}, 609.
(14) Perebeinos, V.; Avouris, P. \textit{Phys. Rev. B} \textbf{2006}, \textit{74}, 121410.
(15) Bosnick, K.; Gabor, N.; McEuen, P. \textit{Appl. Phys. Lett.} \textbf{2006}, \textit{89}, 163121.
(16) Littman, M. G.; Kash, M. M.; Kleppner, D. \textit{Phys. Rev. Lett.} \textbf{1978}, \textit{41} (2), 103.
(17) Mohite, A.; Sumanasekera, G. U.; Hirahara, K.; Bandow, S.; Iijima, S.; Alphenaar, B. W. \textit{Chem. Phys. Lett.} \textbf{2005}, \textit{412}, 190.
(18) Mohite, A.; Gopinath, P.; Chakraborty, S.; Alphenaar, B. W. \textit{Appl. Phys. Lett.} \textbf{2005}, \textit{86}, 061114.
(19) Vaddipaju, S.; Mohite, A.; Chin, A.; Meyyappan, M.; Sumanasekera, G. U.; Alphenaar, B. W.; Sunkara, M. K. \textit{Nano Lett.} \textbf{2005}, \textit{5}, 1625.
(20) Miller, D. A. B.; Chemla, D. S.; Damen, T. C.; Gossard, A. C.; Wiegmann, W.; Wood, T. H.; Burrus, C. A. \textit{Phys. Rev. Lett.} \textbf{1984}, \textit{53}, 2173.
(21) Miller, D. A. B.; Chemla, D. S.; Schmitt-Rink, S. \textit{Phys. Rev. B} \textbf{1986}, \textit{33}, 6976.
(22) Dukovic, G.; Wang, F.; Song, D.; Sfeir, M. Y.; Heinz, T. F.; Brus, L. E. \textit{Nano Lett.} \textbf{2005}, \textit{5} (11), 2314.
(23) Moses, D.; Wang, J.; Heeger, A. J.; Kirova, N.; Brazovski, S. \textit{Proc. Natl. Acad. Sci. U.S.A.} \textbf{2001}, \textit{98}, 13496.
Energy and Electron Transfer in Enhanced Two-Photon-Absorbing Systems with Triplet Cores

Olga S. Finikova,† Thomas Troxler,‡ Alessandro Senes,† William F. DeGrado,† Robin M. Hochstrasser,‡ and Sergei A. Vinogradov*,†

Departments of Biochemistry and Biophysics and Chemistry, University of Pennsylvania, Philadelphia, Pennsylvania 19104

*J. Phys. Chem. A* **2007**, *111* (30), 6977–6990. DOI: 10.1021/jp071586f

Received: February 26, 2007; In Final Form: May 14, 2007

Enhanced two-photon-absorbing (2PA) systems with triplet cores are currently under scrutiny for several biomedical applications, including photodynamic therapy (PDT) and two-photon microscopy of oxygen. The performance of the molecules developed so far, however, is substantially below expectations. In this study we take a detailed look at the processes occurring in these systems and propose ways to improve their performance. We focus on interchromophore distance tuning as a means of optimizing two-photon sensors for oxygen. In these constructs, energy transfer from several 2PA chromophores is used to enhance the effective 2PA cross section of phosphorescent metalloporphyrins. Previous studies have indicated that intramolecular electron transfer (ET) can act as an effective quencher of phosphorescence, decreasing the overall sensor efficiency. We studied the interplay between 2PA, energy transfer, electron transfer, and phosphorescence emission using Rhodamine B–Pt tetrabenzoporphyrin (RhB–PtTBP) adducts as model compounds. 2PA cross sections ($\sigma_2$) of tetrabenzoporphyrins (TBPs) are in the range of several tens of GM units (near 800 nm), making TBPs superior 2PA chromophores compared to regular porphyrins ($\sigma_2$ values typically 1–2 GM). The relatively large 2PA cross sections of rhodamines (about 200 GM in the 800–850 nm range) and their high photostabilities make them good candidates as 2PA antennae. The fluorescence of Rhodamine B ($\lambda_{\text{fl}} = 590$ nm, $\phi_{\text{fl}} = 0.5$ in EtOH) overlaps with the Q-band of phosphorescent PtTBP ($\lambda_{\text{abs}} = 615$ nm, $\epsilon = 98\,000$ M$^{-1}$ cm$^{-1}$, $\phi_{\text{p}} \sim 0.1$), suggesting that a significant amplification of the 2PA-induced phosphorescence via fluorescence resonance energy transfer (FRET) might occur. However, most of the excitation energy in RhB–PtTBP assemblies is consumed in several intramolecular ET processes. By installing rigid nonconducting decaproline spacers (Pro$_{10}$) between RhB and PtTBP, the intramolecular ETs were suppressed, while the chromophores were kept within the Förster $r_0$ distance in order to maintain high FRET efficiency. The resulting assemblies exhibit linear amplification of their 2PA-induced phosphorescence upon an increase in the number of 2PA antenna chromophores and show high oxygen sensitivity.
We have also found that PtTBPs possess unexpectedly strong forbidden $S_0 \rightarrow T_1$ bands ($\lambda_{\text{max}} = 762$ nm, $\epsilon = 120$ M$^{-1}$ cm$^{-1}$). The latter may overlap with the laser spectrum and lead to unwanted linear excitation.

Introduction

Two-photon laser scanning microscopy (2P LSM), pioneered by Denk and Webb in 1990, has become one of the most popular tools in modern neuroscience and cellular research. 2P LSM is based on the multiphoton-absorption phenomenon, which presents considerable interest for such applications as high-density data storage, optical limiting, and photodynamic therapy (PDT), attracting more and more chemists to the search for new multiphoton-absorbing materials. Typically, two-photon-absorption (2PA) cross sections ($\sigma_2$) of commonly used organic dyes are small and only in a few cases (e.g., rhodamines) reach moderate values, e.g., hundreds of Göppert-Mayer (GM) units. A number of systems with enhanced 2PA cross sections have been proposed in recent years, and rational ways of designing 2PA molecules are being developed. Although large $\sigma_2$ values are generally desirable for all 2PA optical probes, imaging with fluorescent agents, which typically possess high quantum yields, can be accomplished even when they exhibit low 2PA cross sections. By contrast, for emitters with intrinsically low quantum yields and/or long excited-state lifetimes, such as phosphorescent probes, amplification of 2PA is an absolute requirement. Phosphorescent probes are useful for biological measurements because their long lifetimes make them extremely sensitive to a variety of quenching processes. One such process involves oxygen—a key component of the biological energy metabolism. Oxygen sensing in vivo by phosphorescence is a technology with many potential uses in physiological and medical research, including applications in microscopy and imaging. Combining phosphorescence quenching with 2P LSM would provide a new technique for high-resolution imaging of oxygen with intrinsic three-dimensional capability—a useful tool for studying neuronal activation, evaluating heterogeneity of hypoxia in tumors, and monitoring metabolic processes inside living cells. Phosphorescent probes for biological oxygen sensing are usually based on Pd or Pt porphyrins, whose intersystem crossing rates and phosphorescence quantum yields are high and whose submillisecond triplet lifetimes ensure their excellent oxygen sensitivity.\textsuperscript{20} Unfortunately, the 2PA cross sections of metalloporphyrins and other useful phosphorescent dyes, such as Ru(bpy)$_3^{2+}$ and similar complexes,\textsuperscript{17} are typically very low, no more than several GM units.\textsuperscript{21,22} In centrosymmetrical molecules, the selection rules for one-photon (1P) and 2P transitions are mutually exclusive, and the excited states corresponding to the strongly allowed Soret and Q-band transitions of metalloporphyrins therefore have weak 2PA cross sections.\textsuperscript{23} Recently, much attention has been focused on tetrapyrroles with increased 2PA cross sections. 2PA of porphyrins can be increased by asymmetrical substitution\textsuperscript{24} or through porphyrin conjugation into oligomers and arrays.\textsuperscript{10} Some of these new materials appear to be effective in singlet oxygen sensitization; however, no explicit data have been reported on their triplet quantum yields.
Similarly, no data on the phosphorescence of 2PA porphyrins have been published, and it is not clear how perturbation of the porphyrin electronic system would affect the emissivity of its triplet states. In addition, some data suggest that an increase in the length of porphyrin arrays causes a decrease in their intersystem crossing yields and hampers triplet production.\textsuperscript{25} An alternative approach to the amplification of 2PA signals from porphyrins, without directly altering their electronic properties, has recently been proposed as a means of constructing phosphorescent oxygen sensors\textsuperscript{26} and 2PA PDT agents.\textsuperscript{27} The idea behind this approach is to harvest the excitation energy by an electronically separate 2PA antenna, which then would pass the excitation to the porphyrin via intramolecular Förster-type resonance energy transfer (FRET).\textsuperscript{28} Intersystem crossing (isc) within the porphyrin then generates the triplet state, which decays back to the ground state by either emitting a photon (phosphorescence) or sensitizing oxygen. In phosphorescent sensors, the rate of oxygen diffusion to the core is regulated by dendritic encapsulation, while the dendrimer termini control the probe's biodistribution.\textsuperscript{18c,29} Several model compounds have been constructed in order to evaluate this design, proving that the approach is feasible and promising. However, certain difficulties have been identified as well. First of all, the amplification of the core function—whether it is emission or singlet oxygen sensitization—did not appear to scale linearly with the 2PA cross section of the antenna. Instead, structures with built-in enhancement pathways showed lower performance than theoretically expected from their estimated 2PA action cross sections. Second, and more importantly, electron transfer (ET) between the antenna and the triplet-state core was identified as an unwanted but extremely effective route for triplet quenching. Preventing the ET between the core and the 2PA antenna would require either chromophores incapable of electron exchange or, more realistically, placement of the chromophores at such a distance from one another that the ET would be diminished while the FRET was maintained at its highest possible rate. Such distance tuning should be possible because of the difference between the rate–distance dependences of Förster-type energy transfer\textsuperscript{30,31} and electron transfer.\textsuperscript{32,33} Positioning chromophores at optimal distances, carefully selected during the course of evolution, is the key to the outstanding performance of natural photosynthetic systems, which employ combinations of energy- and electron-transfer reactions for energy transduction and conservation.\textsuperscript{34} Not surprisingly, these processes have been studied by many researchers using a variety of synthetic models, and the literature covering this subject is very extensive.\textsuperscript{32,35,36} In contrast, there are relatively few reports on 2PA-induced FRET\textsuperscript{27,37,38} and none, to our knowledge, on the combination of 2PA, FRET, and ET. In this paper we studied model 2PA antenna–triplet core dyads to identify undesirable photoinduced ET processes and to evaluate their distance dependence. We then synthesized assemblies in which the antenna and the core were separated by nonconducting polyproline linkers at distances where the ETs were prevented but the energy transfer remained highly efficient.
Finally, the enhancement of the core 2PA-induced phosphorescence as a function of the number of antenna chromophores was measured in the femtosecond regime. As a result, guidelines for the design of optimized antenna–triplet core 2PA systems for biological applications were developed.

**Results and Discussion**

**Functional Components of the Device.** The first reported 2PA-amplified phosphorescent sensors consisted of Pt \textit{meso}-tetraarylporphyrins (PtP) as triplet cores and commercial Coumarin-343 (C343) as 2PA antennae.\textsuperscript{26} The FRET from C343 onto PtP was efficient ($\sim 80\%$), and upon femtosecond (fs) excitation in the region of 800 nm an increase in the core phosphorescence was readily observed. However, the amplification of the 2PA-induced phosphorescence for all PtP–C343 systems was modest, in part due to the relatively low 2PA cross section of C343 ($\sigma_2 \sim 20$ GM). In this study we turned our attention to rhodamines, which have 2PA cross sections of about 200 GM in the 840 nm range\textsuperscript{3a,39} and potentially can perform as effective 2PA antennae. Rhodamine B (RhB, Figure 1) fluoresces near $\lambda_{\text{max}} \approx 590$ nm with a quantum yield of $\phi_{\text{fl}} = 0.5$\textsuperscript{40,41} and is known for its high photostability. Functionalized derivatives of RhB are readily accessible via recently reported chemistry.\textsuperscript{42} To match the emission of RhB for the most efficient transfer of the excitation energy, we chose Pt \textit{meso}-tetraaryltetrabenzoporphyrins (PtTBP, Figure 1) as the triplet cores. Pt and Pd tetrabenzoporphyrins exhibit strong phosphorescence at ambient temperatures,\textsuperscript{20d,43} and their NIR absorption bands warrant their use as probes for \textit{in vivo} oxygen imaging.\textsuperscript{19c,d,29c} A versatile method of synthesis of $\pi$-extended porphyrins has recently been developed,\textsuperscript{44} making it possible to place various functional groups on the TBP macrocycle. The absorption Q-band of PtTBP 2 ($\lambda_{\text{max}} = 615$ nm, $\epsilon = 98\,000$ M$^{-1}$ cm$^{-1}$) overlaps significantly with the fluorescence of RhB (Figure 1), suggesting efficient FRET between these two molecules. Assuming random orientation of the transition dipoles ($\kappa^2 = 2/3$) and a refractive index of 1.36 (EtOH), the Förster distance $r_0$ for the RhB–PtTBP pair was estimated to be 53 Å.\textsuperscript{45}
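For orientation, the standard $r^{-6}$ Förster relations behind these estimates can be evaluated in a few lines of Python; $r_0 = 53$ Å and the unquenched RhB singlet lifetime $\tau_D = 1.85$ ns are taken from the text (Table 1, compound 3a), while the function names and sampled distances are our own.

```python
# Foerster-type energy transfer rate and efficiency for a donor-acceptor
# pair, using the standard relations k_FRET = (1/tau_D) * (r0/r)**6 and
# E = r0**6 / (r0**6 + r**6).
def fret_rate(r_angstrom, r0=53.0, tau_D=1.85e-9):
    """Energy-transfer rate (s^-1) at donor-acceptor distance r (Angstrom)."""
    return (r0 / r_angstrom) ** 6 / tau_D

def fret_efficiency(r_angstrom, r0=53.0):
    """Fraction of donor excitations transferred to the acceptor."""
    return 1.0 / (1.0 + (r_angstrom / r0) ** 6)

for r in (13.0, 40.0, 53.0):
    print(f"r = {r:4.0f} A: k = {fret_rate(r):.2e} s^-1, "
          f"E = {fret_efficiency(r):.3f}")
# r = 13 A reproduces the k_FRET ~ 2.5e12 s^-1 quoted later in the text.
```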
An important property of \textit{meso}-tetraaryltetrabenzoporphyrins, especially in view of the present application, is their high nonplanarity. TBPs and their metal derivatives possess severely saddled structures.\textsuperscript{4b,46} As a result, their ground-state wave functions lack centers of symmetry, and that should cause an increase in their 2PA cross sections. Some data in the literature indeed mention relatively large $\sigma_2$ values for TBPs, e.g., about 90 GM ($\lambda_{\text{ex}} = 800$ nm) for Zn \textit{meso}-tetraaryltetrabenzoporphyrin.\textsuperscript{21b} Interestingly, in spite of their strong nonplanarity, TBPs and other $\pi$-extended porphyrins exhibit high emission quantum yields,\textsuperscript{20d,29c,43,44} while other nonplanar porphyrins are practically nonemissive, as a result of enhanced internal conversion.\textsuperscript{47}

**RhB–PtTBP Adducts with Short Spacers.** The simplest bichromophoric assemblies studied in this work were RhB–PtTBP adducts, in which the antenna (RhB) and the core (PtTBP) were present in a 1:1 ratio and connected via short nonconjugated linkers.

Figure 1. (left) Functional components of the studied assemblies: *meso*-tetraaryltetrabenzoporphyrins (TBP, 1–2b); Rhodamine B (RhB) and its functionalized derivatives (3a and 4). (right) Absorption and emission spectra of PtTBP (2) and of 3a in EtOH. For the absorption spectra, the molar ratio of 2 and 3a is 1:1. For emission, the solution of 2 was purged with Ar; ordinate values are in arbitrary units.

TABLE 1: Photophysical Data for Compounds Described in This Paper$^a$

| no. | solvent | $\lambda_{\text{abs}}$, nm (log $\epsilon$) | $\lambda_{\text{em}}$, nm ($\lambda_{\text{ex}}$, nm), em. type | $\phi^b$ | $\phi_{\text{FRET}}$ | $\tau$ (lifetime) | $\sigma_2$,$^c$ GM |
|-----|-------------|-----------------------------------|---------------------------------------------------------------|---------|----------------------|------------------|-----------------|
| 1a | DMF | 614 (5.02) | 772 (615), p | 0.092 | — | 33 $\mu$s | (28)$^d$ |
| | EtOH | 611 (5.02) | 771 (615), p | 0.083 | | | |
| | CH$_2$Cl$_2$| 614 (5.02) | 774 (615), p | 0.10 | | | |
| 2b | DMF | 617 (5.03) | 783 (615), p | 0.074 | — | 31 $\mu$s | (28)$^d$ |
| 3a | EtOH | 563 (4.98) | 587 (520), f | 0.21 | — | 1.85 ns | 200$^f$ |
| | DMF | 565 (4.98) | 588 (520), f | 0.26 | | | |
| | CH$_2$Cl$_2$| 563 (4.98) | 582 (520), f | 0.33 | | | |
| 4 | H$_2$O/EtOH, 1:1 | 566 (—) | 587 (520), f | 0.20 | — | — | — |
| 5 | EtOH | 563 (5.02) | 770 (611), p | <0.001 | 0.21 | <1 $\mu$s | — |
| | | 611 (5.02) | 584 (520), f | 0.004 | | | |
| 6 | EtOH | 563 (5.02) | 770 (611), p | 0.001 | 0.83 | <1 $\mu$s | — |
| | | 611 (5.02) | 584 (520), f | 0.007 | | | |
| 7 | DMF | 565 (4.98) | 587 (520), f | 0.27 | — | — | — |
| 8 | EtOH | 563 (5.02) | 770 (611), p | 0.052 | 0.84 | 29 $\mu$s | 224 |
| | | 611 (5.02) | 584 (520), f | 0.028 | | 0.33 ns | |
| | DMF | 565 (5.02) | 772 (611), p | 0.052 | 0.84 | | |
| | | 614 (5.02) | 588 (520), f | 0.028 | | | |
| | CH$_2$Cl$_2$| 563 (5.02) | 773 (611), p | 0.035 | 0.84 | | |
| | | 613 (5.02) | 580 (520), f | 0.026 | | | |
| 9 | DMF | 565 (5.58) | 776 (615), p | 0.028 | 0.55 | 27 $\mu$s | (824) |
| | | 616 (5.03) | 587 (520), f | 0.043 | | 0.44 ns | |
| 10 | BSA, 1% aq | 646 (4.49) | 668 (611), f | 0.014 | — | 1.98 ns | 28 |

$^a$ All measurements were performed in solvents deoxygenated by Ar bubbling. $^b$ Emission quantum yields were determined relative to the fluorescence of Rhodamine B in MeOH ($\phi_{\text{fl}} = 0.5$). $^c$ Measured against this value, \textit{meso}-tetraphenylporphyrin (H$_2$TPP), a common standard for porphyrin spectroscopy, exhibits an absolute fluorescence quantum yield of $\phi_{\text{fl}} = 0.048$ in deoxygenated benzene, as opposed to the generally accepted value $\phi_{\text{fl}} = 0.11$. To transform the numbers in the table into numbers relative to the quantum yield of H$_2$TPP ($\phi_{\text{fl}} = 0.11$), they should be multiplied by 2.29. $^d$ $\sigma_2$ values in parentheses are estimated (see text for details). $^e$ Reference 8a.

Two such molecules, 5 and 6, are shown in Scheme 1, and a detailed description of their synthesis can be found in the Supporting Information. Isolation and handling of 5 proved difficult. This compound degrades rapidly in concentrated solutions at ambient temperatures and decomposes, although slowly, even when shielded from ambient light. Pure 5 could be preserved by freezing its solutions immediately after chromatography.
Adduct 6 turned out to be more stable than 5, although its partial decomposition was revealed by the decrease in its RhB absorption when the compound was exposed to ambient light or handled at elevated temperatures.

**A. Spectroscopy.** The photophysical data for all the compounds described in this paper are summarized in Table 1. The absorption spectra of adducts 5 and 6 and of an equimolar mixture of reference compounds 1a and 3a are nearly identical, suggesting no interactions between the chromophores in their ground states (Figure 2A).

SCHEME 1: RhB–PtTBP Adducts with Short Spacers

The emission spectra shown in Figure 2B were recorded upon excitation at $\lambda_{\text{ex}} = 520$ nm, where the absorption of PtTBP is low, whereas the absorption of RhB is significant. The intramolecular FRET between RhB and PtTBP in adducts 5 and 6 was expected to quench the fluorescence of RhB ($\lambda_{\text{max}} = 587$ nm), but at the same time amplify the phosphorescence of PtTBP ($\lambda_{\text{max}} = 770$ nm). Instead, both the fluorescence and the phosphorescence of 5 and 6 appeared to be almost entirely quenched (Figure 2B). Direct excitation into the porphyrin Q-band ($\lambda_{\text{ex}} = 611$ nm), bypassing the RhB absorption and the FRET, revealed that the phosphorescence quantum yields of 5 ($\phi_p < 0.001$) and 6 ($\phi_p = 0.001$) were decreased by 98–99% compared to the reference compound 1a ($\phi_p = 0.083$ in EtOH). Quenching due to intermolecular aggregation was ruled out, since the replacement of solvents (CH$_2$Cl$_2$, DMF) and/or dilution of the samples (10–100 times) had almost no effect on the phosphorescence, and no evidence of aggregation was observed in the absorption spectra (Figure 2A). Such strong attenuation of the emission could be explained only by the presence of intramolecular quenching pathways in the RhB–PtTBP adducts. The most plausible mechanisms would involve PtTBP $\rightarrow$ RhB triplet–triplet energy transfer and intramolecular electron transfer(s) (ET), competing with the phosphorescence. Both intramolecular and intermolecular ET between porphyrins and xanthene dyes have been described in the literature, although, to the best of our knowledge, no studies involving phosphorescent porphyrins have been reported. On the other hand, literature data suggest that the T$_1$–S$_0$ gap for RhB is 1.86 eV (667 nm), whereas the phosphorescence of PtTBP occurs at 770 nm (1.61 eV), making PtTBP $\rightarrow$ RhB triplet–triplet exchange energetically unfavorable. Thus, ET between the excited PtTBP and RhB in its ground state is the most probable mechanism of the phosphorescence quenching. In principle, two such ETs are possible for RhB–PtTBP systems: one involving the PtTBP singlet state S$_1$ and competing with the S$_1 \rightarrow$ T$_1$ intersystem crossing (isc), and another involving the PtTBP triplet state T$_1$ and competing with the phosphorescence itself. Below, we refer to these ETs as ET$_{\text{PtTBP(S)}}$ and ET$_{\text{PtTBP(T)}}$, respectively. Assuming the pure Förster model and a distance between the reactants of 13 Å, the rate of the energy transfer in 5 was estimated to be $k_{\text{FRET}} = 2.5 \times 10^{12}$ s$^{-1}$. At such a rate, the residual emission from RhB, whose singlet-state lifetime in the absence of quenching is 1.85 ns (Table 1, 3a), would be truly negligible. For example, in the case of 5, the predicted quantum yield of RhB fluorescence is about $10^{-4}$, whereas in our experiments it was 0.004 (Table 1, 5).
The observed discrepancy suggests that either rigid structural features are present in dyads 5 and 6, which cause a major decrease in the value of the orientation factor $\kappa^2$, or contaminant fluorescence from an impurity, e.g., unbound RhB, is the source of the increased apparent fluorescence quantum yield. The intensity of the contaminant fluorescence, however, is very small and does not affect our calculations (see below). The energy-transfer efficiencies in 5 and 6 were estimated by comparing their excitation ($\lambda_{\text{em}} = 770$ nm) and absorption spectra (Figure 2C), scaled to the same value at the Q-band maximum ($\lambda_{\text{ex}} = 611$ nm). In such measurements, an exact match between the absorption and excitation spectra would signify energy transfer with 100% efficiency. The intensities of the RhB bands ($\lambda_{\text{max}} = 563$ nm) in the excitation spectra of 5 and 6 reveal that only 21% (5) and 83% (6) of the excitation energy absorbed by the RhB fragments is transferred to PtTBP. At the same time, the RhB fluorescence ($\lambda_{\text{max}} = 587$ nm) in 5 and 6 is negligible compared to the fluorescence of reference compound 3a taken at the same molar concentration (Figure 2B). Therefore, approximately 79% (5) and 17% (6) of the absorbed energy is consumed in some other process, which in this case is probably an ET involving the RhB excited singlet state. This ET will be referred to as ET$_{\text{RhB(S)}}$. The proposed energy-/electron-transfer pathways in RhB–PtTBP systems are shown in Scheme 2. The pathway preferred for our application is shown in the box, and the charge-separated state (CS), formed as a result of the electron transfer, is designated as [RhB–PtTBP]$_{\text{CS}}$. The processes ET$_{\text{RhB(S)}}$, ET$_{\text{PtTBP(S)}}$, and ET$_{\text{PtTBP(T)}}$ compete with the preferred pathway by interfering, respectively, with the FRET, with the intersystem crossing, and with the phosphorescence emission, all leading to the same state $[\text{RhB}-\text{PtTBP}]_{\text{CS}}$. The pathways of annihilation of $[\text{RhB}-\text{PtTBP}]_{\text{CS}}$ remained unidentified and are not shown. Unfortunately, our attempts to obtain a spectroscopic signature of the charge-separated state $[\text{RhB}-\text{PtTBP}]_{\text{CS}}$ turned out to be unsuccessful (see Supporting Information for details). The transient spectrum of 5 in the window 400–650 nm (set by the limits of the instrument) was entirely dominated by the broad $T_1 \rightarrow T_2$ band of PtTBP ($\lambda_{\text{max}} = 463$ nm). Similar bands were reported previously for Pd tetraaryltetrabenzoporphyrins.\textsuperscript{43c} It is possible that the absorption of the CS state was too weak to be seen against the background of this strong band. Photoinduced electron transfer between RhB and PtTBP nevertheless appears to be the most likely pathway competing with emission in the studied dyads.
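The spectral comparison just described reduces to a simple band ratio once both spectra are scaled at the Q-band maximum. A minimal Python sketch follows, in which the array names, the Gaussian toy bands and the interpolation step are our own assumptions rather than details from the paper.

```python
import numpy as np

def fret_efficiency_from_spectra(wl, absorption, excitation,
                                 wl_scale=611.0, wl_antenna=563.0):
    """Estimate the FRET efficiency as the ratio of the antenna band
    (RhB, ~563 nm) in the excitation spectrum to the same band in the
    absorption spectrum, after scaling both spectra to coincide at the
    core Q-band maximum (~611 nm)."""
    A = np.interp(wl_scale, wl, absorption)
    E = np.interp(wl_scale, wl, excitation)
    scaled_exc = excitation * (A / E)       # match spectra at the Q-band
    return (np.interp(wl_antenna, wl, scaled_exc) /
            np.interp(wl_antenna, wl, absorption))

# Toy usage: two Gaussian bands; the antenna band appears in the
# excitation spectrum with only 21% of its absorption weight (adduct 5).
wl = np.linspace(500.0, 650.0, 601)
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
absorption = band(563, 12) + band(611, 8)
excitation = 0.21 * band(563, 12) + band(611, 8)
print(fret_efficiency_from_spectra(wl, absorption, excitation))  # ~0.21
```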
**B. Electron Transfer.** The directionality of the photoinduced ET in RhB–PtTBP adducts could be resolved by substituting spectroscopic and electrochemical data into the Rehm–Weller equation:\textsuperscript{54} $$\Delta G_{\text{ET}} = E_{\text{ox}}(D) - E_{\text{red}}(A) - \Delta E_{00} + w \quad (1)$$ where $\Delta G_{\text{ET}}$ is the driving force, $E_{\text{ox}}(D)$ is the oxidation potential of the donor, $E_{\text{red}}(A)$ is the reduction potential of the acceptor, $\Delta E_{00}$ is the excitation energy of the photoexcited component (donor or acceptor), and $w = w_P - w_R$ is the work term, consisting of the Coulombic energies of the reactants ($w_R$) and products ($w_P$). The values for 1a are $E^1_{\text{ox}} = +0.75$ V and $E^1_{\text{red}} = -1.3$ V vs SCE. In the case of 3a, a reversible reduction wave was observed at $E_{\text{red}} = -0.8$ V, while the oxidation was irreversible and occurred at approximately $E_{\text{ox}} = 1.1$ V. These data are consistent with the earlier reported values for Rhodamine B in EtOH solutions.\textsuperscript{55} The excitation energy for RhB was estimated from the intersection of its normalized absorption and fluorescence spectra ($\lambda = 565$ nm, $\Delta E_{00} = 2.2$ eV). The value for the PtTBP $\pi-\pi^*$ triplet was derived from the phosphorescence maximum ($\lambda_{\text{max}} = 772$ nm, $\Delta E_{00}(T) = 1.61$ eV). The energy of the PtTBP singlet state could be derived from the phosphorescence maximum and the magnitude of the singlet–triplet splitting ($2J = 0.38$ eV), determined from the difference between the $S_0 \rightarrow S_1$ ($\lambda_{\text{max}} = 611$ nm) and $S_0 \rightarrow T_1$ ($\lambda_{\text{max}} = 762$ nm; see below) absorption maxima: $\Delta E_{00}(S) = 1.99$ eV. The energy diagram of the ET in RhB–PtTBP systems and the corresponding frontier orbital levels are shown in Figure 3. The ET from RhB onto PtTBP appears to be endergonic ($\Delta G_{\text{ET}} > 0$), in spite of the small favorable Coulombic work term ($w = -0.06$ eV),\textsuperscript{56} which is due to the stabilizing interaction between the PtTBP anion and the RhB dication. (RhB is a cation by itself, and upon the ET onto PtTBP its net charge becomes +2.) The ET in the opposite direction, from PtTBP onto RhB, is exergonic ($\Delta G_{\text{ET}} < 0$) for all the excited states, and the work terms in all three cases are close to zero. As discussed above, ET$_{\text{RhB(S)}}$ (1; $\Delta G_{\text{ET}} = -0.65$ eV) competes effectively with the FRET. Assuming the theoretical rate of the FRET as calculated above ($k_{\text{FRET}} = 2.5 \times 10^{12}$ s$^{-1}$) and considering that the ratios between the efficiencies of ET$_{\text{RhB(S)}}$ and FRET in dyads 5 and 6 are 3.76 and 0.20, respectively ($\phi_{\text{ET}}(5)/\phi_{\text{FRET}}(5) = 79/21 \approx 3.76$ and $\phi_{\text{ET}}(6)/\phi_{\text{FRET}}(6) = 17/83 \approx 0.20$), we arrive at an upper-bound estimate for the rate of ET$_{\text{RhB(S)}}$, i.e., $k_{\text{RhB(S)}} = 10^{12}-10^{13}$ s$^{-1}$. Following the FRET, another ET (2; ET$_{\text{PtTBP(S)}}$), leading to the same charge-separated state, is initiated, with a driving force about 0.2 eV smaller than that of ET$_{\text{RhB(S)}}$. This ET competes directly with the intersystem crossing within the PtTBP macrocycle. The existing photophysical data on Pt porphyrins\textsuperscript{20a,c,57} indicate that intersystem crossing occurs in these molecules on a subpicosecond time scale.
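This bookkeeping is easy to verify numerically. A minimal sketch (assuming, as stated, near-zero work terms for the PtTBP-to-RhB channels) reproduces the quoted driving forces:

```python
# Rehm-Weller driving forces for ET from PtTBP (donor, E_ox = +0.75 V)
# to RhB (acceptor, E_red = -0.80 V); potentials in V map 1:1 to eV.
E_OX_PTTBP = 0.75    # V vs SCE, first oxidation of 1a
E_RED_RHB = -0.80    # V vs SCE, reversible reduction of 3a

def dG_et(dE00, w=0.0):
    """Driving force (eV) for a given excitation energy dE00 (eV)."""
    return E_OX_PTTBP - E_RED_RHB - dE00 + w

print(f"ET_RhB(S):   {dG_et(2.20):+.2f} eV")  # -0.65 eV
print(f"ET_PtTBP(S): {dG_et(1.99):+.2f} eV")  # ~0.2 eV less driving force
print(f"ET_PtTBP(T): {dG_et(1.61):+.2f} eV")  # -0.06 eV; ~ -0.1 eV upper
                                              # bound after the 2J correction
```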
Given this subpicosecond intersystem crossing, RhB–PtTBP(T) is almost certainly generated at a rate comparable to that of ET$_{\text{PtTBP(S)}}$. Once formed, the state RhB–PtTBP(T) in turn undergoes the electron-transfer reaction ET$_{\text{PtTBP(T)}}$ (3). Since electron-transfer processes generally occur with conservation of spin,\textsuperscript{58} the processes ET$_{\text{PtTBP(S)}}$ and ET$_{\text{RhB(S)}}$ lead to the same singlet biradical charge-separated state, designated in Figure 3A as $[\text{RhB}^- - \text{PtTBP}^+]_{\text{CS}}(S)$. In contrast, ET$_{\text{PtTBP(T)}}$, originating in the triplet state, gives the triplet pair of doublets $[\text{RhB}^- - \text{PtTBP}^+]_{\text{CS}}(T)$. The exchange energy ($2J$) for these transient species is not known, but in general $2J$ values are much smaller for CS states than for individual chromophores.\textsuperscript{58} For the purposes of our analysis we assumed that $2J$ was 1/10 of that for PtTBP, i.e., about 0.04 eV. This assumption results in the upper-bound estimate for the driving force for ET$_{\text{PtTBP(T)}}$, i.e., $\Delta G_{\text{ET}} \approx -0.1$ eV. A straightforward way to prevent unwanted quenching in antenna–triplet emitter complexes would be to reduce the driving force of the ET. It follows from Figure 3A and the Rehm–Weller formula that $\Delta G_{\text{ET}}$ can be made less negative either by decreasing the reduction potential of the acceptor (RhB) or by increasing the oxidation potential of the donor (PtTBP). At the same time, the HOMO–LUMO gaps of the donor and the acceptor should be kept close in order to maintain large spectral overlap integrals for maximally efficient FRET. Such redox tuning can in principle be accomplished by changing the dyes' peripheral substitution, i.e., using acceptor groups to increase $E_{\text{ox}}$ and donor groups to decrease $E_{\text{red}}$.\textsuperscript{55d} $\sigma$-Donors and $\sigma$-acceptors would be preferred, as those are less likely to alter the spectroscopic properties. Nevertheless, it follows from the orbital diagrams (Figure 3B–D) that raising the RhB HOMO too high, or lowering the PtTBP HOMO too low, can reverse the direction of the ET. In practice, precise adjustment of the potentials in order to eliminate the ET might become an extremely tedious task, given that the antenna and the emitter dyes already must satisfy a number of criteria, e.g., high 2PA cross section and strong phosphorescence. Therefore, having an additional mechanism for tuning the ET rates and thus maximizing the sensor performance would be highly desirable. **C. Distance Dependences.** Electron-transfer theories predict that rates of intramolecular ET reactions decay exponentially with the distance $r$ between the donor and acceptor sites: $k_{\text{ET}} = \nu \exp(-\beta r)$, where the parameter $\beta$ is related to the magnitude of the electronic interaction between the donor and the acceptor and, therefore, depends on the nature of the linker between them; $\nu$ includes the terms dependent on the driving force $\Delta G_{\text{ET}}$ and the reorganization energy $\lambda$. Considering that ET$_{\text{RhB(S)}}$ and ET$_{\text{PtTBP(S)}}$ in our scheme compete with very fast processes, i.e., FRET and intersystem crossing, reducing the efficiencies of these two ET reactions should be relatively easy.
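The contrast between the exponential decay of ET and the $r^{-6}$ falloff of FRET is what makes distance tuning work. A sketch using the parameter values derived below in the text ($\beta = 5.4 \times 10^7$ cm$^{-1}$, an assumed $\nu = 10^{13}$ s$^{-1}$, and the Förster rate of $2.5 \times 10^{12}$ s$^{-1}$ at 13 Å):

```python
import math

NU = 1e13           # 1/s, assumed pre-exponential factor for ET_PtTBP(T)
BETA = 0.54         # 1/Angstrom (= 5.4e7 1/cm), from the text below
K_PHOS = 3.03e4     # 1/s, PtTBP phosphorescence rate (tau0 = 33 us)
K_FRET_13 = 2.5e12  # 1/s, Forster-model FRET rate at r = 13 A

def k_et(r):        # exponential distance decay of electron transfer
    return NU * math.exp(-BETA * r)

def k_fret(r):      # Forster r^-6 scaling, anchored at 13 A
    return K_FRET_13 * (13.0 / r) ** 6

# separation at which phosphorescence outruns ET_PtTBP(T) 10-fold
r_phos = math.log(NU / (K_PHOS / 10.0)) / BETA
print(f"r_phos ~ {r_phos:.1f} A")                    # ~40 A, as argued below
print(f"k_FRET(r_phos) ~ {k_fret(r_phos):.1e} 1/s")  # ~3e9 1/s, still ~5x
                                                     # faster than RhB
                                                     # fluorescence (5.4e8)
```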
Indeed, increasing the separation between PtTBP and RhB by only three $\sigma$-bonds (6 vs 5) had a pronounced effect on the ratio of the ET$_{\text{RhB(S)}}$ and FRET quantum yields ($\phi_{\text{RhB(S)}}/\phi_{\text{FRET}}$), changing it by as much as 18.4 times, from 3.76 to 0.20. Using the expression for $k_{\text{ET}}$ (above) and assuming that (1) at the separation between the transition dipoles $r_{\text{FRET}} = 13$ Å (as in 5) the FRET has its theoretical rate ($k_{\text{FRET}} = 2.5 \times 10^{12}$ s$^{-1}$); (2) the edge-to-edge distance between RhB and PtTBP, corresponding to $r_{\text{FRET}} = 13$ Å, is $r_{\text{ET}} = 7.5$ Å; and (3) the increase in the separation ($\Delta r$) going from 5 to 6 equals about 3.4 Å, we obtain $\beta = 5.4 \times 10^7$ cm$^{-1}$ and $\nu = 5.7 \times 10^{14}$ s$^{-1}$. This value of $\nu$ appears to be unrealistically high, as typical pre-exponential factors $\nu$ in the distance-dependence equation do not exceed $10^{13}$ s$^{-1}$. The error in our calculation is most likely caused by the overestimation of $k_{\text{FRET}}$ for dyads 5 and 6, in which the distances between the chromophores are quite short (10–15 Å). As pointed out in a recent study, incomplete orientational averaging ($\kappa^2 < 2/3$) often occurs in molecules connected by short saturated linkers, such as in our models 6 and especially 5. In addition, at short distances the Förster point-dipole approximation is inaccurate, and the rate of the Coulombic energy transfer must be evaluated by quantum-mechanical methods, which might result in rates lower than those predicted by the Förster model. Taking this into account, we now assume that the pre-exponential factor in the case of ET$_{\text{PtTBP(T)}}$ is $10^{13}$ s$^{-1}$, and use the parameter $\beta$ as determined for ET$_{\text{RhB(S)}}$. Considering that the rate of the phosphorescence emission of PtTBP 1a in the absence of oxygen is $k_{\text{phos}} = 3.03 \times 10^4$ s$^{-1}$ ($\tau_0 = 33$ $\mu$s, Table 1), we estimate that in order for the phosphorescence to be, e.g., 10 times more effective than quenching by the electron transfer ($k_{\text{phos}}/k_{\text{PtTBP(T)}} = 10$), the chromophores must be placed at a distance $r_{\text{phos}} \approx 40$ Å. At this separation, FRET between RhB and PtTBP should have a rate of $2.9 \times 10^9$ s$^{-1}$ and should be about 5.4 times more efficient than RhB fluorescence ($k_{\text{fl}} = 1/\tau_{\text{fl}} = 5.4 \times 10^8$ s$^{-1}$). As a result, by separating the chromophores using nonconducting linkers, as in 5 or 6, we should be able to gain significantly in the phosphorescence quantum yield while maintaining high efficiency of the FRET. The graphs illustrating these conclusions are shown in Figure 4, and the details of the calculations are given in the Supporting Information. It should also be mentioned that at the separation $r_{\text{phos}} = 40$ Å the rate of the electron transfer ET$_{\text{PtTBP(S)}}$ will constitute only a negligible fraction ($\sim 10^{-6}–10^{-7}$) of the intersystem crossing rate within the PtTBP molecule. We would like to emphasize that the assumptions underlying our analysis are quite crude. First, as we already mentioned, Förster's theory might be inaccurate in predicting the energy-transfer rates for compounds with short spacers, as in 5 and 6.
Second, considering the flexibility of the linkers and the short distances, direct contacts in 5 and 6 might greatly facilitate ET processes and lead to exceedingly high values of the parameter $\beta$, resulting in errors when extrapolating to longer distances. Finally, when calculating the distance at which the phosphorescence can effectively compete with ET$_{\text{PtTBP(T)}}$, we assumed that $\nu = 10^{13}$ s$^{-1}$ but used $\beta = 5.4 \times 10^7$ cm$^{-1}$ obtained for ET$_{\text{RhB(S)}}$. Therefore, the above discussion provides only a rough estimation of rates and distances; however, it demonstrates the principles of optimization of 2PA antenna–core systems via distance tuning.

**RhB–PtTBP Adducts with Polyproline Spacers.** It is well-established that saturated spacers can act as "insulators" between donor and acceptor motifs, providing control over electron-transfer rates. The efficiency of this approach has been proven in a variety of models. Chromophores used in the past to study the distance dependence of electron transfer include heme proteins with porphyrins containing Zn, Mg, Cd, Pt, and Pd, which possess long-lived triplet states, as well as other triplet emitters. To implement distance tuning in RhB–PtTBP dyads, we considered rigid oligoproline spacers. Oligoprolines (Pro$_n$) are known to form rigid helical rods in solution and have been employed previously as "spectroscopic rulers". The peptide bond in oligoprolines can adopt either the cis or the trans conformation, with a helix translation step of 3.12 Å per proline unit in the fully trans conformation vs 1.85 Å in the fully cis conformation. Optical rotatory dispersion (ORD), circular dichroism (CD), and NMR studies of oligoprolines Pro$_n$ suggest that, for $n > 5$, the helix exists exclusively in the trans conformation in most solvents (water, alcohol, acetic acid, DMSO, chloroform). For shorter oligoprolines ($n = 2–4$), the cis conformation is also present, especially in less-polar solvents, although the trans conformation is still predominant. Based on the analysis presented above, we chose to connect RhB and PtTBP by decaproline spacers (Pro$_{10}$). In its fully trans conformation, Pro$_{10}$ is approximately 31 Å long, and according to molecular modeling it should provide a separation between RhB and PtTBP of about 42 Å (Figure 5). This distance should be adequate for suppressing ET$_{\text{PtTBP(T)}}$.

Figure 5. Optimized structure of RhB–Pro$_{10}$–PtTBP (MM+ force field). The decaproline spacer in its trans conformation provides a separation of 42 Å between the RhB and PtTBP chromophores.

RhB–Pro$_{10}$ conjugate 7 and dyads 8 and 9 are shown in Scheme 3, and their synthesis and characterization are described in detail in the Supporting Information.

**SCHEME 3: PtTBP–RhB Adducts with Decaproline (Pro$_{10}$) Spacers**

**Spectroscopy.** The emission, excitation, and absorption spectra of compound 8, as well as of the reference compounds 1a and 3a, are shown in Figure 6. As expected, the absorption spectrum of 8 matches that of 5 and presents nearly a superposition of the spectra of the individual chromophores RhB (3a) and PtTBP (1a) (Figure 2). The efficiency of the FRET ($\phi_{\text{FRET}} = 0.84$) and the phosphorescence quantum yield ($\phi_P = 0.052$) of adduct 8 were greatly improved compared to those of 5. Upon excitation at 520 nm, where the absorbance of PtTBP itself is weak, phosphorescence from adduct 8 was more than 100 times stronger than that of 5 and 10 times stronger than that of the reference porphyrin 1a.
However, the phosphorescence quantum yield of PtTBP in 8 appeared to be only 63% of that for 1a (0.052 vs 0.083 in EtOH, Table 1). According to our calculations, at the distance of 40 Å the rate of quenching by ET$_{\text{PtTBP(T)}}$ should constitute only about 10% of the rate of PtTBP phosphorescence, and the quantum yield of the phosphorescence should be about 0.075. As mentioned earlier, the parameter $\beta$ used in our calculations was most likely too high to be applied to ET$_{\text{PtTBP(T)}}$, and it is possible that the residual electron transfer was responsible for the decrease in the quantum yield. On the other hand, the phosphorescence quantum yield of 8 practically did not change upon changing the solvent (Table 1), whereas ET rates are typically very sensitive to the solvent dielectric constant. It is therefore also possible that the lower phosphorescence quantum yield is an intrinsic property of the PtTBP chromophore in 8.

Figure 6. Emission (A), excitation (B), and absorption (B) spectra of adduct 8 and spectra of reference compounds 1a, 3a, and 5 in EtOH. All measurements were performed in deoxygenated solutions. (A) The emission spectra were normalized by the absorbance at $\lambda_{\text{ex}} = 520$ nm. (B) Excitation spectra were recorded for $\lambda_{\text{em}} = 770$ nm. Absorption and excitation spectra were normalized by the intensity at 611 nm (lowest energy S$_1$ state), which gives rise to the emitting T$_1$ state.

Figure 7. Fluorescence decays of adduct 8 and reference RhB 3a in MeOH/THF = 1:1, $\lambda_{\text{ex}} = 532$ nm (A), and corresponding lifetime distributions (B), obtained by the MEM.

Emission spectra of 8 and 3a (Figure 6A) reveal that 84% of the RhB fluorescence in 8 is quenched. On the other hand, comparison of the excitation and the absorption spectra of 8 (Figure 6B) shows that the same 84% of the excitation energy is being transferred to PtTBP. Thus, ET$_{\text{RhB(S)}}$ and ET$_{\text{PtTBP(S)}}$, involving short-lived singlet states, were entirely suppressed by inserting a decaproline spacer between RhB and PtTBP. At the same time, the FRET efficiency in 8 remained quite high ($\phi_{\text{FRET}} = 0.84$). Remarkably, calculations based on the Förster theory predict that for the RhB–PtTBP pair the FRET efficiency at the distance of 40 Å should be exactly $\phi_{\text{FRET}} = 0.84$. Although such an agreement is probably a coincidence, good correspondence of experimental rates to the Förster model is expected at larger separations between donor and acceptor sites. The steady-state evaluation of the FRET efficiency was confirmed by time-resolved fluorescence measurements. Fluorescence decays of compounds 8 and 3a and the corresponding lifetime distributions\textsuperscript{63} are shown in Figure 7. In the absence of quenching, 3a reveals a practically single-exponential decay, i.e., a narrow uniform distribution of lifetimes (Figure 7B), centered at around $\tau_{\text{av}} = 1.82$ ns.\textsuperscript{64} In the case of 8, the distribution moves to shorter lifetimes ($\tau_{\text{av}} = 0.33$ ns) and broadens asymmetrically, which is expected for the FRET.\textsuperscript{65} The ensemble broadening reflects the distribution of distances between the donor and the acceptor in the dyad. A small second maximum around 1.8 ns is due to the contamination of 8 by unbound RhB. The ratio of the distribution averages for 8 and 3a corresponds to a FRET efficiency of 0.82, which is very close to the value calculated from the steady-state measurements (0.84).
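The lifetime data provide an independent cross-check of the steady-state FRET efficiency via $E = 1 - \tau_{DA}/\tau_D$; a one-line sketch with the distribution averages quoted above:

```python
# FRET efficiency of dyad 8 from the MEM lifetime distribution averages
tau_d = 1.82e-9    # s, RhB donor alone (3a)
tau_da = 0.33e-9   # s, RhB in dyad 8

efficiency = 1.0 - tau_da / tau_d
print(f"FRET efficiency from lifetimes: {efficiency:.2f}")
# -> 0.82, vs 0.84 from the steady-state spectra
```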
Our last model compound, 9, was designed to evaluate how the increase in the number of the antenna chromophores would influence the overall performance of the 2PA phosphorescent sensors. The phosphorescence quantum yield of 9 was 0.028, which again can be explained by residual quenching via ET$_{\text{PtTBP(T)}}$, taking into account that the number of quenching chromophores (RhB) in 9 is 4 times that in 8. RhB fluorescence in 9 was quenched by 84%, and the fluorescence lifetime measurements were in good agreement with the spectroscopic data. However, only 55% of the absorbed energy, according to the absorption/excitation spectra, was transferred to the PtTBP. Because the distances between the RhB and PtTBP moieties in 9 are the same as in 8, and ET$_{\text{RhB(S)}}$ in 8 was shown to be negligible, quenching of RhB fluorescence by intramolecular electron transfer was ruled out. It is possible that a part of the quenching arises from the formation of nonemitting RhB aggregates within the molecule of 9. Although Pro$_{10}$ rods are rigid, their linkages to the PtTBP $meso$-aryl rings are quite flexible, and the RhB termini in 9 can easily experience close contacts with each other in solution. Self-quenching in rhodamine aggregates is a well-documented phenomenon.\textsuperscript{66} **Two-Photon-Excitation Experiments.** Two-photon-excitation experiments were designed to (1) estimate the 2PA cross sections of the core PtTBPs and (2) quantify the enhancement of the 2PA-induced phosphorescence via energy transfer from the antenna rhodamines. 2PA cross sections ($\sigma_2$) were measured by the relative emission method. To quantify the enhancement of the signal in antenna–emitter systems, we used the apparent gain parameter $\gamma$, which relates the emission from the molecular device D ($I^D$) to that of the "naked" core C ($I^C$): $$\gamma = \frac{I^D}{I^C} \quad (2)$$ In addition, the parameter $\gamma_e$ was used to characterize the expected gain:\textsuperscript{67} $$\gamma_e = \frac{(\sigma_2^A \phi_{FRET}^D + \sigma_2^C) \phi_p^D}{\sigma_2^C \phi_p^C} \quad (3)$$ where $\sigma_2^A$ and $\sigma_2^C$ are the 2PA cross sections of the antenna and the core, respectively, the quantum yields of the phosphorescence ($\phi_p$) and the FRET ($\phi_{FRET}$) are determined from independent linear measurements, and the superscripts "A", "C", and "D" refer to the antenna, the core, and the whole device, respectively. Formula 3 is useful because it allows estimation of the 2PA cross section of the device, provided that the FRET and the phosphorescence are independent of the excitation type (1P vs 2P). As mentioned in the Introduction, in all 2PA antenna/core systems described so far,\textsuperscript{26,27} experimental gain coefficients $\gamma$ were significantly lower than the theoretically expected ones ($\gamma_e$), especially in the systems designed for singlet oxygen sensitization.\textsuperscript{27} In these molecules, powerful 2PA antenna dyes (hundreds to thousands of GM units) have been utilized, and the apparent fluorescence quantum yields were high. Nevertheless, the gain factors appeared to be tens-to-hundreds of times lower than expected from the 2PA cross sections of the antenna. **A. Core Porphyrins.** Evaluation of the 2PA cross sections of phosphorescent PtTBPs was necessary for quantification of the enhancement effect. Tetraaryltetrabenzoporphyrins possess highly nonplanar molecular structures. Saddling of the porphyrin macrocycle leads to the loss of the center of symmetry and, as a result, should affect its 2PA cross section.
In addition, $\pi$-extension of the porphyrin macrocycle might have its own influence on the 2PA. To estimate the 2PA cross sections of tetraaryltetrabenzoporphyrins, we used fluorescent nonplanar free-base tetraphenyltetrabenzoporphyrin 10,\textsuperscript{46b} instead of the phosphorescent Pt complexes 1 and 2, and tetracarboxyphenylporphyrin 11\textsuperscript{68} as a reference planar porphyrin (Figure 8). Measurement by the relative emission method is based on the comparison between the emissions from the sample of interest and a standard with a known 2PA cross section. It is necessary that measurements are performed significantly below the saturation limit, where the power dependence of the signal follows the quadratic law. In the case of phosphorescent samples, signal saturation upon excitation by high repetition rate lasers occurs already at quite low powers due to the long triplet-state lifetimes (tens of microseconds), whereas measurements at lower powers are inaccurate because of low signal-to-noise ratios. (An indication of the saturation effect has been observed in our experiments.) In addition, triplet states of porphyrins can also be capable of multiphoton absorption, and transitions like $T_1 \rightarrow T_2$ via 2PA can additionally distort measurements of the ground-state 2PA cross sections. Using low repetition rate (e.g., 1 kHz) regenerative amplifiers would help solve these problems, although high per-pulse powers would be required to obtain adequate signals. On the other hand, standard high repetition rate Ti:sapphire oscillators are entirely suitable for measuring fluorescent chromophores with lifetimes below, e.g., 5 ns. Therefore, using fluorescent porphyrin 10 instead of 2 was a convenient way to evaluate the 2PA cross section of the tetraaryltetrabenzoporphyrin macrocycle. Although porphyrins 10 and 11 are reasonably well soluble in aqueous solutions at basic pH, to avoid aggregation they were bound to bovine serum albumin (BSA, 1% aqueous solution). BSA is known to form complexes with porphyrins and has been used as a porphyrin carrier for the construction of oxygen-sensitive probes.\textsuperscript{16} Our laser system (76 MHz repetition rate) was tuned to 840 nm. The emission spectra of 10 and 11, normalized by molar concentrations and extinctions at $\lambda_{ex}$, and their power dependence plots are shown in Figure 9. The plot for the fluorescence of Rhodamine B, used as a standard, is also shown for comparison. The power dependences for Rhodamine B and 11 exhibit an almost pure second order. In the case of 10, linear absorption was still significant, indicating that for 2P-imaging experiments the wavelength should be shifted further to the red. The values of the 2PA cross sections for 10 and 11 were calculated as averages over all excitation powers, in the case of 10 after subtraction of the linear component.
The calculations were based on the following data: for 10, $\epsilon(646 \text{ nm}) = 31 \times 10^3$ M$^{-1}$ cm$^{-1}$, $\phi_{\text{fl}} = 0.032$;\textsuperscript{69} for 11, $\epsilon(517 \text{ nm}) = 19 \times 10^3$ M$^{-1}$ cm$^{-1}$,\textsuperscript{70} $\phi_{\text{fl}} = 0.15$;\textsuperscript{69} for Rhodamine B, $\epsilon(547 \text{ nm}) = 107 \times 10^3$ M$^{-1}$ cm$^{-1}$,\textsuperscript{71} $\phi_{\text{fl}} = 0.85$,\textsuperscript{40} $\sigma_2(840 \text{ nm}) = 200$ GM.\textsuperscript{8a} The 2PA cross section of the planar porphyrin 11 was found to be low (about 2 GM), consistent with its high symmetry and with earlier reported measurements.\textsuperscript{31,26} The 2PA cross section of tetrabenzoporphyrin 10 appeared to be more than 10 times higher than that of 11, supporting the expectations regarding the effects of nonplanarity and, possibly, of $\pi$-extension. The $\sigma_2$ value of 28 GM for porphyrin 10 is consistent with the earlier reported numbers,\textsuperscript{21b} and gives a rough approximation for the 2PA cross sections of PtTBPs 1 and 2. Attempts to directly measure the 2PA cross sections of PtTBPs led us to an interesting and relevant observation. To monitor the phosphorescence of porphyrins 1a and 2b and dyads 8 and 9, we used a time-resolved phosphorescence measurement system, which was coupled to a regenerative amplifier (30 fs, 1 kHz) operating at $\lambda_{\text{max}} = 820$ nm. Using the low repetition rate laser allowed collection of complete phosphorescence decays, making it possible to avoid the saturation effects and triplet–triplet excitation via 2PA. The power dependence plots for both cores 1a and 2b turned out to be practically linear (Figure 10A), in spite of the fact that the laser excitation was more than 200 nm away from the lowest energy linear absorption band ($\lambda_{\text{max}} = 611–615$ nm).

Figure 9. Fluorescence spectra of **10**, **11**, and reference Rhodamine B (A) upon excitation at 840 nm (110 fs). Spectra are normalized by molar concentrations. **10** and **11** were dissolved in 10 mM phosphate buffer in the presence of 1% BSA, pH ~8.5. Rhodamine B was dissolved in EtOH. To obtain power dependences (B), integral intensities of fluorescence were normalized by molar concentrations and fluorescence quantum yields. For **10** (○), the plot was fit to a second-order polynomial, and the linear component was subtracted to render the pure quadratic dependence (▲).
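For reference, the relative emission evaluation reduces to a simple ratio. A sketch under the usual assumption of identical excitation conditions for sample and standard; the integrated signal ratio `F_ratio` below is an illustrative placeholder (chosen to land near the reported 28 GM), not a value taken from the paper:

```python
def sigma2_relative(F_ratio, phi_s, c_s, phi_r, c_r, sigma2_r):
    """2PA cross section (GM) of a sample from the ratio of its integrated
    two-photon emission to that of a standard with known sigma2_r."""
    return sigma2_r * F_ratio * (phi_r * c_r) / (phi_s * c_s)

# Rhodamine B standard: sigma2(840 nm) = 200 GM, phi_fl = 0.85 (data above)
sigma2_of_10 = sigma2_relative(F_ratio=0.0053,   # placeholder signal ratio
                               phi_s=0.032, c_s=1e-6,
                               phi_r=0.85, c_r=1e-6, sigma2_r=200.0)
print(f"sigma2 of 10: ~{sigma2_of_10:.0f} GM")   # ~28 GM
```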
Detailed examination of the absorption spectrum of **2b**, taken at a very high concentration ($\sim 10^{-3}$ M), revealed the presence of a band ($\lambda_{\text{max}} = 762$ nm, $\epsilon = 120$ M$^{-1}$ cm$^{-1}$), which was attributed to $S_0 \rightarrow T_1$ absorption (Figure 10B). The presence of relatively strong spin-forbidden transitions in the spectra of Pd and Pt porphyrins is known from the literature.\textsuperscript{57b} These are usually attributed to strong spin–orbit couplings induced by the heavy atoms. However, the extinction coefficient of 120 M$^{-1}$ cm$^{-1}$ is the highest, to our knowledge, reported for direct singlet–triplet absorption. Since the spectrum of the femtosecond source is intrinsically broadened due to the high temporal compression, overlap of the laser with the $S_0 \rightarrow T_1$ band results in linear rather than 2P excitation of phosphorescence. From the point of view of 2P imaging this means that using PtTBPs would require laser sources operating above, e.g., 900 nm, where the intrinsic 2PA cross sections of PtTBPs may be lower than in the region near 800 nm.

Figure 10. (A) Power dependence plots of the phosphorescence of **1a** and **2b** in deoxygenated DMF upon excitation at 820 nm (30 fs, 1 kHz). (B) $S_0 \rightarrow T_1$ linear absorption band in the spectrum of **2b**: $\lambda_{\text{max}} = 762$ nm, $\epsilon = 120$ M$^{-1}$ cm$^{-1}$.

**B. RhB–PtTBP Assemblies.** The power dependences of the phosphorescence from **8** and **9** are shown in Figure 11A,B together with reference plots for porphyrins **1a** and **2b**. In spite of the interference by the linear $S_0 \rightarrow T_1$ absorption (Figure 10B), the plots for **8** and **9** reveal a notable second-order contribution, especially at higher powers, which is evidently due to the 2PA by the RhB antennae. The corresponding plots of the reference porphyrins **1a** and **2b** are practically linear.

Figure 11. Phosphorescence power dependence plots of adducts **8** and **9** and of reference porphyrins **1a** and **2b** in deoxygenated DMF upon excitation by 30 fs pulses ($\lambda_{\text{ex}} = 820$ nm, 1 kHz) (A, B). Emission decays were integrated to give the intensity for each excitation power. The plots were normalized by molar concentrations and quantum yields. (C) Quadratic components of the plots for **8** and **9**, obtained by fitting the raw data (A and B) with second-order polynomials and subtracting the obtained linear components. The fits are shown by dashed (**8**) and solid (**9**) lines, yielding the amplification ratio of 2.95.

Fitting the normalized data for 8 and 9 by second-order polynomials and subtracting the linear components rendered pure quadratic plots (Figure 11C), from which the enhancement ratio of 9 vs 8 could be calculated as the ratio of coefficients $b^{(9)}/b^{(8)} = 2.95$. This value is even slightly higher than the theoretically predicted ratio of 2.4, obtained by applying eq 3 to the pair 9 vs 8 and accounting for the difference in the quantum yields of these compounds. To use eq 3, we considered that 9 has three more RhB ($\sigma_2 = 180$ GM) units than 8, and assumed that the 2PA cross section of the core PtTBP in both 8 and 9 is the same as for the model porphyrin 10, i.e., 28 GM. The corresponding coefficients $\gamma$ and $\gamma_e$ (eqs 2 and 3) for 9 vs 8 were in excellent agreement, i.e., 1.25 and 1.28, respectively, indicating a linear increase in the 2PA cross section with an increase in the number of RhB antenna units going from the mono-RhB to the tetra-RhB adduct. Estimation of the 2PA enhancement effect for compounds 9 and 8 relative to their parent core porphyrins 1a and 2b was complicated by the linear $S_0 \rightarrow T_1$ transition in the absorption spectrum of PtTBP. The apparent enhancement ratios for 8 vs 1a and 9 vs 2b, normalized by the phosphorescence quantum yields, were 1.7 and 1.3, respectively, which is substantially lower than the values calculated based on the 2PA cross sections of the components, i.e., 6.4 and 15.1. To move the excitation away from the $S_0 \rightarrow T_1$ band, we performed measurements of compound 8 and of its reference porphyrin 1a using the source operating at 840 nm (110 fs, 76 MHz). The corresponding corrected emission spectra are shown in Figure 12. The long-wavelength edges of the phosphorescence peaks of both 8 and 1a are truncated as a result of the correction for the excitation leak.
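Equation 3 can be checked against the 9 vs 8 comparison directly, using the values quoted above ($\sigma_2 = 180$ GM per RhB unit, a core cross section of 28 GM assumed from 10, $\phi_{\text{FRET}} = 0.84$ and 0.55, and $\phi_p = 0.052$ and 0.028). A minimal sketch:

```python
def expected_signal(n_rhb, phi_fret, phi_p,
                    sigma2_rhb=180.0, sigma2_core=28.0):
    """Un-normalized 2PA phosphorescence signal per eq 3:
    (n * sigma2_A * phi_FRET + sigma2_C) * phi_p."""
    return (n_rhb * sigma2_rhb * phi_fret + sigma2_core) * phi_p

gamma_e = expected_signal(4, 0.55, 0.028) / expected_signal(1, 0.84, 0.052)
print(f"expected gain, 9 vs 8: {gamma_e:.2f}")   # ~1.27, cf. 1.28 in text
```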
The laser radiation intensity above 800 nm was still quite high and diminished the accuracy of the spectral registration. Nevertheless, the apparent gain coefficient $\gamma = 4.2$, determined by this method for 8 vs 1a, was found to be reasonably close to the predicted gain $\gamma_e = 3.7$. On the basis of this value of $\gamma$, the 2PA cross section of PtTBP in 8 was calculated to be 23.8 GM, which is very close to the value of $\sigma_2$ determined for free-base tetrabenzoporphyrin 10. The ratios of PtTBP phosphorescence to RhB fluorescence differed dramatically depending on whether 2P or 1P excitation was used. This difference is likely to be a manifestation of the saturation effect or of 2P triplet–triplet absorption in 8 upon irradiation by high repetition rate lasers; however, further studies will be required to elucidate the origin of this effect.

**Conclusions**

Our foregoing experiments demonstrate that enhancement of 2PA-driven triplet-state generation can be accomplished via energy transfer from an appropriately chosen 2PA antenna. However, the molecular architecture of multiphoton phosphorescent systems must be tailored to avoid interfering processes, such as intramolecular electron transfer. Several key conclusions, relevant to the design of enhanced phosphorescent sensors and multiphoton singlet oxygen sensitizers, are summarized below. First, in these systems quenching of excited states, especially long-lived triplet states, by the competing electron transfer is a persistent problem. While triplet electron transfer can be reliably observed and quantified in the case of phosphorescent sensors, quenching of "dark" triplet states in singlet oxygen sensitizers can be less apparent, yet equally detrimental to the overall performance. Since most 2PA dyes are conjugated, polarizable molecules, electron transfer is intrinsic to their combination with long-lived photoexcited chromophores. Careful redox tuning of 2PA dyes and/or triplet-state cores should be performed in order to minimize the driving force for unwanted electron-transfer processes. Second, by using rigid oligoproline spacers to separate the RhB antennae from the triplet PtTBP cores in RhB–PtTBP assemblies, we have demonstrated that distance tuning can be an effective general method for the optimization of 2PA FRET-based triplet systems. A distance can be selected at which electron-transfer processes are suppressed, while long-range Förster energy transfer still occurs with high efficiency. Third, the presence of "hidden" low-energy transitions in the spectra of triplet-state emitters, e.g., metalloporphyrins with increased $\pi$-conjugation, might present a problem for excitation by a pure multiphoton mechanism. Such transitions are intrinsic to systems with enhanced singlet–triplet conversion pathways, e.g., by the heavy atom effect. For example, in the case of PtTBP, a strong spin-forbidden $S_0 \rightarrow T_1$ transition could be observed in the close vicinity of the Ti:sapphire laser spectrum, preventing efficient excitation by a pure 2PA mechanism. Finally, our measurements demonstrate that the loss of the center of symmetry due to nonplanar distortion of the porphyrin macrocycle and its $\pi$-extension lead to a significant increase in the 2PA cross section. For example, the 2PA cross section of saddled tetraaryltetrabenzoporphyrins in the region of 800 nm was estimated to be about an order of magnitude higher than that of a planar tetraarylporphyrin.
The possibility of using asymmetric phosphorescent porphyrins directly as 2PA oxygen sensors will be explored in the future. It should be mentioned that the phosphorescence of all PtTBP-based molecules studied in this work was found to be extremely oxygen sensitive in organic solutions. However, determination of the actual Stern–Volmer oxygen quenching constants for RhB–PtTBP assemblies will need to be accomplished after modifying these systems to solubilize them in aqueous environments, e.g., by attaching appropriate dendritic arms.\textsuperscript{26} It is not clear at this point whether RhB–PtTBP systems themselves will become the probes of choice for 2P oxygen microscopy, primarily because of the difficulties associated with the hidden $S_0 \rightarrow T_1$ bands of the PtTBP cores. Nevertheless, the analysis of these systems proved informative and useful for the future construction of optimized 2PA-enhanced functional triplet core systems.

J. R. *J. Am. Chem. Soc.* **1989**, *111*, 4353–4356. (f) Therien, M. J.; Selman, M.; Gray, H. B.; Chang, I. J.; Winkler, J. R. *J. Am. Chem. Soc.* **1990**, *112*, 2420–2422. (62) (a) Steinberg, I. Z.; Harrington, W. F.; Berger, A.; Sela, M.; Katchalski, E. *J. Am. Chem. Soc.* **1960**, *82*, 5263–5279. (b) Engel, J. *Biopolymers* **1966**, *4*, 945. (c) Schimmel, P. R.; Flory, P. J. *Proc. Natl. Acad. Sci. U.S.A.* **1967**, *58*, 52. (d) Stryer, L.; Haugland, R. P. *Proc. Natl. Acad. Sci. U.S.A.* **1967**, *58*, 719. (e) Deber, C. M.; Bovey, F. A.; Carver, J. P.; Blout, E. R. *J. Am. Chem. Soc.* **1970**, *92*, 6191. (f) Torchia, D. A.; Bovey, F. A. *Macromolecules* **1971**, *4*, 146. (g) Vassilian, A.; Wishart, J. F.; van Hemelryck, B.; Schwarz, H.; Isied, S. S. *J. Am. Chem. Soc.* **1990**, *112*, 7278–7286. (63) Distributions were recovered by the MEM (Livesey, A. K.; Brochon, J. C. *Biophys. J.* **1987**, *52*, 693–706), implemented as a recursive algorithm (Vinogradov, S. A.; Wilson, D. F. *Appl. Spectrosc.* **2000**, *54*, 849–855). (64) The width of a distribution recovered by a regularized inversion method, e.g., the maximum entropy method (MEM), symbolizes the uncertainty in the parameter evaluation and, therefore, is partially related to the noise in the data.\textsuperscript{63} (65) Wagner, B. D.; Ware, W. R. *J. Phys. Chem.* **1990**, *94*, 3489–3494. (66) For examples, see: (a) Chibisov, A. K.; Slavnova, T. D. *J. Photochem.* **1978**, *8*, 285–297. (b) Arbeloa, I. L.; Ojeda, P. R. *Chem. Phys. Lett.* **1982**, *87*, 556–560. (c) Hoekstra, D.; de Boer, T.; Klappe, K.; Wilschut, J. *Biochemistry* **1984**, *23*, 5675–5681. (d) Arbeloa, F. L.; Ojeda, P. R.; Arbeloa, I. L. *J. Chem. Soc., Faraday Trans. 2* **1988**, *84*, 1903–1912. (67) A less general formula was used in our previous paper (ref 26), in which we assumed that the 2PA of the "naked" core was negligible compared to that of the antenna. When PtTBPs are being used as cores, eq 3 should be used instead. (68) Datta-Gupta, N.; Bardos, T. J. *J. Heterocycl. Chem.* **1966**, *3*, 495–502. (69) Measured in this work. (70) Barnett, G. H.; Hudson, M. F.; Smith, K. M. *J. Chem. Soc., Perkin Trans. 1* **1975**, 1401–1403. (71) Meallier, P.; Mouillet, M.; Guittoneau, S.; Chabaud, F.; Chevrou, P.; Niemann, C. *Dyes Pigm.* **1998**, *36*, 161–167.
To,
All Members of the High Powered Review Board of Brahmaputra Board (as per list enclosed)

Sub: 10th meeting of High Powered Review Board of Brahmaputra Board

Sir,
In continuation of letter No. BB/5334/2019/2864-2882 dated 14.10.2019, I am directed to enclose the Agenda Note for the 10th meeting of the High Powered Review Board of Brahmaputra Board, scheduled to be held under the Chairmanship of Hon'ble Union Minister of Jal Shakti Shri Gajendra Singh Shekhawat at 11:30 hours on 8th November 2019 at Guwahati.

Yours faithfully,
Encl: As above
Sd/-
(Vishnu Dev Rai)
Secretary

Copy along with enclosures for kind information to:
1. Hon'ble Minister of State for Development of North Eastern Region
2. All the Chief Secretaries of the Member States (as per list enclosed)
3. PPS to the Union Minister, Ministry of Jal Shakti, Shram Shakti Bhawan, Rafi Marg, New Delhi - 110 001
4. All PPS/PS to the Members of the High Powered Review Board of Brahmaputra Board
5. P.S. to the Vice-Chairman, Brahmaputra Board, Basistha, Guwahati - 29
6. Hindi version follows

Copy for favour of information to:
1. The Commissioner (B&B), Ministry of Jal Shakti, DoWR, RD&GR, 2nd Floor, Block-3, CGO Complex, Lodhi Road, New Delhi - 110 003

(Hari Prasad Saikia)
Executive Engineer (HQ), Brahmaputra Board
Ministry of Jal Shakti, Department of Water Resources, River Development and Ganga Rejuvenation
Basistha, Guwahati - 781 029
Web site: www.brahmaputraboard.gov.in, e-mail: firstname.lastname@example.org, Fax: 0361-2301099/2307454/2308588, Telephones: 0361-2301099/2308590/2302527/2300128

| No. | Member | Role |
|-----|--------|------|
| 1 | केंद्रीय जल शक्ति मंत्री / Union Minister of Jal Shakti | अध्यक्ष / Chairman |
| 2 | अरुणाचल प्रदेश के मुख्यमंत्री या उनके द्वारा विधिवत प्राधिकृत एक कैबिनेट मंत्री / Chief Minister of Arunachal Pradesh or a Cabinet Minister duly authorized by him | सदस्य / Member |
| 3 | असम के मुख्यमंत्री या उनके द्वारा विधिवत प्राधिकृत एक कैबिनेट मंत्री / Chief Minister of Assam or a Cabinet Minister duly authorized by him | सदस्य / Member |
| 4 | मणिपुर के मुख्यमंत्री या उनके द्वारा विधिवत प्राधिकृत एक कैबिनेट मंत्री / Chief Minister of Manipur or a Cabinet Minister duly authorized by him | सदस्य / Member |
| 5 | मेघालय के मुख्यमंत्री या उनके द्वारा विधिवत प्राधिकृत एक कैबिनेट मंत्री / Chief Minister of Meghalaya or a Cabinet Minister duly authorized by him | सदस्य / Member |
| 6 | मिजोराम के मुख्यमंत्री या उनके द्वारा विधिवत प्राधिकृत एक कैबिनेट मंत्री / Chief Minister of Mizoram or a Cabinet Minister duly authorized by him | सदस्य / Member |
| 7 | नागालैंड के मुख्यमंत्री या उनके द्वारा विधिवत प्राधिकृत एक कैबिनेट मंत्री / Chief Minister of Nagaland or a Cabinet Minister duly authorized by him | सदस्य / Member |
| 8 | त्रिपुरा के मुख्यमंत्री या उनके द्वारा विधिवत प्राधिकृत एक कैबिनेट मंत्री / Chief Minister of Tripura or a Cabinet Minister duly authorized by him | सदस्य / Member |
| 9 | केंद्रीय मंत्री / वित्त राज्य मंत्री / Union Minister / Minister of State for Finance | सदस्य / Member |
| 10 | केंद्रीय मंत्री / बिजली राज्य मंत्री / Union Minister / Minister of State for Power | सदस्य / Member |
| 11 | केंद्रीय मंत्री / सड़क परिवहन और राजमार्ग राज्य मंत्री / Union Minister / Minister of State for Road Transport and Highways | सदस्य / Member |
| 12 | केंद्रीय मंत्री / कृषि राज्य मंत्री / Union Minister / Minister of State for Agriculture | सदस्य / Member |
| 13 | जल शक्ति राज्य मंत्री / Minister of State for Jal Shakti | सदस्य / Member |
| 14 | सचिव, भारत सरकार जल शक्ति मंत्रालय, जल संसाधन, नदी विकास एवं गंगा संरक्षण विभाग / Secretary, Ministry of Jal Shakti, Deptt. of Water Resources, RD&GR, Government of India | सदस्य / Member |
| 15 | अध्यक्ष, केंद्रीय जल आयोग / Chairman, Central Water Commission | सदस्य / Member |
| 16 | अध्यक्ष, ब्रह्मपुत्र बोर्ड / Chairman, Brahmaputra Board | सदस्य-सचिव / Member-Secretary |
| 17 | सदस्य (आरएम), केंद्रीय जल आयोग / Member (RM), Central Water Commission | स्थायी आमंत्रित / Permanent Invitee |

List of Chief Secretaries of Member States of HPRB of Brahmaputra Board
1. The Chief Secretary, Arunachal Pradesh, Arunachal Pradesh Secretariat, Itanagar - 791 111
2. The Chief Secretary, Assam, Assam Sachivalaya, Dispur, Guwahati - 781 006
3. The Chief Secretary, Manipur, Manipur Secretariat, Imphal - 795 001
4. The Chief Secretary, Meghalaya, Meghalaya Secretariat, Shillong - 793 001
5. The Chief Secretary, Nagaland, Nagaland Secretariat, Kohima - 797 001
6. The Chief Secretary, Tripura, Tripura Secretariat, Agartala - 799 010
7. The Chief Secretary, Mizoram, Mizoram Secretariat, Aizawl - 796 005

List of PPS/PS to the Members of HPRB of Brahmaputra Board
1. PPS to the Union Minister for Jal Shakti, Shram Shakti Bhawan, Rafi Marg, New Delhi - 110 001
2. PPS/PS to the Chief Minister of Arunachal Pradesh, Itanagar - 791 111
3. PPS/PS to the Chief Minister of Assam, Dispur, Guwahati - 781 006
4. PPS/PS to the Chief Minister of Manipur, Imphal - 795 001
5. PPS/PS to the Chief Minister of Meghalaya, Shillong - 793 001
6. PPS/PS to the Chief Minister of Mizoram, MacDonald Hill, Zarkawt, Aizawl, Mizoram - 796 001
7. PPS/PS to the Chief Minister of Nagaland, Kohima - 797 004
8. PPS/PS to the Chief Minister of Tripura, Khejurbagan, Agartala - 799 010
9. PPS/PS to the Union Minister / Minister of State for Finance, A-wing, Shastri Bhawan, Rajendra Prasad Road, New Delhi - 110 001
10. PPS/PS to the Union Minister / Minister of State for Power, Shram Shakti Bhawan, Rafi Marg, New Delhi - 110 001
11. PPS/PS to the Union Minister / Minister of State for Road Transport and Highways, Transport Bhawan, Sansad Marg, New Delhi - 110 001
12. PPS/PS to the Union Minister / Minister of State for Agriculture, Krishi Bhavan, Dr. Rajendra Prasad Road, New Delhi - 110 001
10.1 दिनांक 30.12.2017 को काजीरंगा, असम में आयोजित उच्चाधिकार प्राप्त समीक्षा बोर्ड की 9वीं बैठक की चर्चा के रिकॉर्ड नोट की पुष्टि / Confirmation of the record note of discussions of the 9th Meeting of High Powered Review Board held on 30.12.2017 at Kaziranga, Assam

10.2 उच्चाधिकार प्राप्त समीक्षा बोर्ड की 9वीं बैठक में लिए गए निर्णयों पर अनुपालना रिपोर्ट / Follow up Report on decisions taken in the 9th meeting of High Powered Review Board

10.3 माजुली द्वीप के प्रतिरक्षण के लिए उपाय / Measures for protection of Majuli Island

10.4 ब्रह्मपुत्र बोर्ड का पुनर्गठन और एनईडल्यूएमए (NEWMA) की स्थापना का प्रस्ताव / Restructuring of Brahmaputra Board and Proposal for setting up NEWMA

10.5 नई पहल / New Initiatives:
- (ए) नॉर्थ ईस्ट डेटा शेयरिंग सेंटर की स्थापना / Setting up of North East Data Sharing Centre
- (बी) अरुणाचल प्रदेश में आपातानी-आबाद जीरो घाटी, नागालैंड में फेक जिले की चाखेसांग जनजाति और असम में बक्सा जिले की बोडो जनजाति की भूजल प्रबंधन प्रथाओं का वैज्ञानिक प्रसार / Scientific dissemination of Ground Water Management practices of the Apatani-inhabited Ziro Valley in Arunachal Pradesh, the Chakhesang tribe of Phek district in Nagaland, and the Bodo tribe of Baksa district in Assam

10.6 बाढ़ और तटकटाव प्रबंधन के लिए सॉफ्ट मेजर्स / Soft Measures for Flood and Erosion Management

10.7 अध्यक्ष महोदय की अनुमति से कोई अन्य मद / Any other items with permission of Chairman

10.8 अगली बैठक के लिए स्थान और तारीख का निर्धारण / Deciding venue and date for next meeting

10th Meeting of High Powered Review Board of Brahmaputra Board

### Agenda 10.1 Confirmation of the record note of discussions of the 9th Meeting of High Powered Review Board held on 30.12.2017 at Kaziranga, Assam

The 9th meeting of the High Powered Review Board was convened on 30.12.2017 at Kaziranga, Assam. Minutes of the 9th meeting were circulated to the members of the Review Board. No comments have been received from any member. The record note of discussions may be confirmed by the High Powered Review Board.

### 10.2 Follow up report on decisions taken in the 9th meeting of High Powered Review Board

The follow up report on decisions taken in the 9th meeting of the High Powered Review Board is given below:

| Agenda No. | Decision taken during 9th meeting | Action Initiated |
|------------|-----------------------------------|------------------|
| Address by Hon'ble Minister, WR, RD&GR | In his opening address, Hon'ble Minister, WR, RD&GR, RT&H and Shipping advised: (i) integrated development based on the best modern technology in a cost-effective manner | The Board has started to utilise data collected with modern tools of technology like GIS in the preparation of the Master Plan and DPRs. The Board is also introducing geobags in bank revetment work in place of boulders. |
| | (ii) to take initiative through subsidy schemes to promote ferries, Ro-Ro boats and river cruise services and develop river transport, inland waterways and tourism on the Brahmaputra river, which will give employment to the youth of the state in the sectors of water transportation and hospitality services. | Government of Assam has started Ro-Ro services on the Brahmaputra. |
| | (iii) to actively take up activities like afforestation with bamboo and catchment area treatment in the upper riparian states to prevent soil erosion and sedimentation. For this purpose, MoWR, RD&GR will prepare a proposal through Brahmaputra Board and take it up with MoEF&CC for consideration and release funds through CAMPA. | A seminar on 'Soil Erosion in North Eastern Region' was organized on 15th February, 2019 at the Administrative Staff College, Khanapara, Assam. The proceedings drawn would be useful for the preparation of the work plan for soil erosion and integrated Soil Erosion and Flood Management in the North Eastern Region and the work plan of Brahmaputra Board, and were circulated to all Board Members. |
| | (iv) it was directed that the work of a comprehensive integrated master plan be assigned to WAPCOS, a PSU of MoWR, RD&GR, which may partner with leading global organizations working in the field of integrated water management. | The proposal was processed in the then Ministry of WR, RD&GR. As Brahmaputra Board has already prepared the Master Plan of the Brahmaputra, Barak and their tributaries, preparation of a separate Master Plan by WAPCOS was not encouraged. |
| | (v) to create a Centre for Brahmaputra Studies in IIT Guwahati covering multidisciplinary aspects of hydrology, environment, inland waterways, agriculture and sociology. For this purpose, he suggested that IIT Guwahati should identify and earmark a land area of about 4 acres and the Government of India will support the establishment of this Centre. | Setting up of the Centre for Brahmaputra Studies has been taken up by Guwahati University instead of IIT Guwahati. |
| Launch of Mathematical Model Study by IIT Guwahati | Hon'ble Union Minister Shri Nitin Gadkari launched the Mathematical Model prepared by IIT Guwahati, called Brahma-ID. The project was sponsored by Brahmaputra Board by providing funds to the amount of Rs. 3.00 crore. The Mathematical Model captures the experience of the last 35 years and provides core solutions to water resources problems of flooding, erosion and siltation, besides offering economic benefit models. | (i) A 2D model has also been developed. (ii) Validation of the 2D model with a physical model is going on at the North Eastern Hydraulic and Allied Research Institute (NEHARI). |
| 9.2 Measures for Protection of Majuli Island | Apprised of the responsibility entrusted in 2004 for the work of protection of Majuli Island from floods and erosion and of the execution of Immediate Measures, Phase-I, Emergent Works and Phase-II & III. About 22.08 sq. km of land was reclaimed from 2004 to 2016. To address the issues of erosion in vulnerable reaches and to reclaim more land by pro-siltation and other measures in the east-west reach length of about 80 km on the south bank, a new DPR was formulated for "Protection of Majuli Island from flood and erosion of river Brahmaputra" as per the recommendations of the Standing Committee of Experts for Majuli Island and the Technical Advisory Committee of Brahmaputra Board (TAC-BB). The DPR of Rs. 233.57 crore for the above work was approved by the Government of India and the Ministry of DoNER funded Rs. 207 crore under NLCPR mode. The work was under implementation. HPRB ratified the project for protection of Majuli Island from flood and erosion for the above amount. | The major works under execution: bank revetment with geobags filled with earth/sand for a reach length of 27 km in 14 locations; RCC porcupine works in 41 locations. Work amounting to Rs. 160.75 crore (Rs. 189.69 crore including GST) was allotted in November 2017. Actual work started at site in February 2018. Physical progress of the scheme up to September 2019 is 65.13%, with an expenditure of Rs. 112.59 crore. TAC-BB visited the work sites at Majuli Island on 30.04.2019 and furnished their recommendations. As per the recommendations of TAC-BB, a proposal for taking up additional works and extra items of work has been framed and approved. The works are under execution. The TAC-BB also visited Majuli from 09.10.2019 to 11.10.2019 and reviewed the progress of the protection works. |
| 9.3 Ratification of the Decisions of the 63rd Special Meeting of Brahmaputra Board | After detailed deliberations, the HPRB ratified all the decisions of the 63rd Special meeting of Brahmaputra Board held on 11th April, 2017. | As per the decision of the 63rd meeting of Brahmaputra Board, the restructuring proposal was processed and approved by the Govt. of India and conveyed on 10.01.2019. Implementation of the restructuring of Brahmaputra Board is underway. |
| 9.4 Restructuring of Brahmaputra Board | The agenda item was discussed at length and agreed upon during the meeting. The High Powered Review Board (HPRB) approved the restructuring of Brahmaputra Board and advised that funds should be largely spent on works and that funds for establishment costs, including salary and wages, have to be optimised. | All efforts are made to reduce the establishment expenditure. During the year 2018-19, out of a total expenditure of Rs. 159.93 crore, Rs. 61.97 crore was for regular establishment and Rs. 97.96 crore for works. Further, during the year 2019-20, the total expenditure up to September 2019 is Rs. 77.72 crore, out of which Rs. 26.43 crore is for regular establishment and Rs. 51.29 crore for works. |
| 9.5 Dropping the proposal to convert Brahmaputra Board into an Authority or Corporation | HPRB agreed to drop the proposal to convert Brahmaputra Board into an Authority or Corporation keeping in view the approval of the President of India communicated vide order No. A.60015/24/2017-E.III/1085-1094 dated 25.10.2017. | Minutes circulated to Ministry. |
| 9.6 Assigning the work for Development of Infrastructure of Brahmaputra Board at Majuli to NPCC | HPRB approved the establishment of the Brahmaputra Board office complex in Majuli and handing over the works to NPCC, a PSU under the Ministry. | Work allotted to NPCC. |
| 9.7 Assigning some works to WAPCOS for investigation and preparation of DPR of Multipurpose Projects | HPRB approved that the preparation of Master Plans, Feasibility Reports and DPRs for development of the complete Brahmaputra basin be also given to WAPCOS, a PSU under the Ministry, which may partner with leading global organizations working in the field of integrated water management. | It is underway to carry out the remaining works of Survey & Investigation and DPR preparation of the Simsang and Jiadhal Dam Projects through WAPCOS. |

### 10.3 Measures for protection of Majuli Island

LOCATION: Majuli Island is located to the north of Jorhat town at a distance of about 20 km and lies between latitudes 26°45′ N to 27°10′ N and longitudes 93°40′ E to 94°35′ E. The word Majuli is claimed to be derived from the word 'MAJULI', meaning an area surrounded by water.
Majuli Island is the nerve centre of the Vaishnavite culture developed under the unique Vaishnava Satra system founded by the great saint Srimanta Sankardeva in the 15th century. It is the cultural heritage centre of the Vaishnavite culture of Assam. At present there are 22 Satras in Majuli. Hordes of tourists and devotees throng Majuli every year for its uniqueness and Vaishnavite culture, making it an important spot on the tourist circuit of Assam. Majuli Island is also a serious contender for inclusion in the list of World Heritage Sites. In 2016, Majuli was upgraded from a Civil Sub-Division to the status of a District. Protection of Majuli Island from the menace of floods and erosion by the river Brahmaputra is thus one of the prime objectives of the State Government, with support from the Centre. The average elevation of the Island is 87 m (at Bessamara) above mean sea level, as against the High Flood Level of 88.32 m. The present area of the Main Island is about 524 sq. km, with a population of 1.68 lakh as per the 2011 Census. Majuli Island has been under severe threat of bank erosion by the flow of the river Brahmaputra since the formation of the Island, and particularly after the Assam earthquake of 15th August, 1950. Brahmaputra Board was entrusted with the work of protection of Majuli Island from floods and erosion in 2004. Since then, the Board has completed execution of Immediate Measures, Phase-I, Emergent Works and Phase-II & III works. About 22.08 sq. km of land was reclaimed and secured by Brahmaputra Board during the period 2004 to 2016. To address the issues of erosion in vulnerable reaches and reclaim more land by pro-siltation and other measures in the east-west reach length of about 80 km on the south bank, a DPR was formulated for "Protection of Majuli Island from flood and erosion of river Brahmaputra" as per the recommendations of the Standing Committee of Experts for Majuli Island and the Technical Advisory Committee of Brahmaputra Board (TAC-BB). An SFC of Rs. 233.57 crore for the above work has been approved by the Government of India. Out of Rs. 233.57 crore, the Ministry of DoNER funded Rs. 207 crore under NLCPR. Work amounting to Rs. 160.75 crore (Rs. 189.69 crore including GST) was allotted in November 2017. Actual work started at site in February 2018. The major works under execution:
- Bank revetment with geobags filled with earth/sand for a reach length of 27 km in 14 locations.
- RCC porcupine works in 41 locations.

Physical progress of the scheme up to September 2019 is 65.13%, with an expenditure of Rs. 112.59 crore. On the recommendations of the TAC-BB, which visited the work sites at Majuli Island on 30.04.2019, additional works and extra items of work amounting to Rs. 5.09 crore are under execution. The TAC-BB also visited Majuli from 09.10.2019 to 11.10.2019. Repair and maintenance of the spurs constructed under Phase-II & III also continues.

### 10.4 Restructuring of Brahmaputra Board and Proposal for setting up NEWMA

The 63rd Special Meeting of Brahmaputra Board was held at the Brahmaputra Board HQ, Guwahati on 11th April, 2017 to discuss proposals on the restructuring of Brahmaputra Board. The Ministry of Water Resources, RD&GR, Government of India proposed to restructure Brahmaputra Board within the ambit of the Brahmaputra Board Act, 1980 by revamping its technical and non-technical cadres.
The Government of India approved the proposal on 10.01.2019, with a regional office headed by a Deputy Chief Engineer or Superintending Engineer in each State under the jurisdiction of the Board, and a modified cadre structure of the Board. Regional offices at Itanagar, Siliguri, Assam and Nagaland have been opened. The Recruitment Regulations for all Groups of posts under the Board have been modified as per the Restructuring Order and notified in the Gazette of India. Filling up of posts as per the restructuring is underway.

**NEWMA:** As a follow-up of the recommendations of the High Level Committee for "Proper Planning & Management of Water Resources in the North Eastern Region of India" set up by NITI Aayog, the setting up of the North Eastern Water Management Authority (NEWMA) is under active consideration of the Government of India.

### 10.5 New Initiatives

**(A) Setting up of North East Water Resources Data Sharing Centre**

As per the recommendations of the Seminar organized on 15.09.2018 by Brahmaputra Board at the Assam Administrative Staff College, Guwahati (wherein 74 members of 37 organisations working in the North Eastern Region participated), and thereafter as per the decision of the 67th meeting of Brahmaputra Board, a committee headed by the Vice-Chairman, Brahmaputra Board was constituted to give a report on the sharing aspects of data and on filling up the gaps in hydro-meteorological data collection in the jurisdiction of Brahmaputra Board. On the basis of the recommendations of the report of the Committee, a proposal for setting up a North East Water Resources Data Sharing Centre is underway.

**(B) Scientific dissemination of Ground Water Management practices of the Apatani-inhabited Ziro Valley in Arunachal Pradesh, the Chakhesang tribe of Phek district in Nagaland, and the Bodo tribe of Baksa district in Assam**

In the Apatani-inhabited Ziro Valley of Arunachal Pradesh, people practice a traditionally evolved land-based resource management and conservation system that is unique in the Himalayan region. They have considerable expertise in land and water resources management. Integration of pisciculture with wet rice cultivation is a distinct characteristic of the Apatani agro-ecosystem. These eco-friendly water management practices of the Apatani farmers need to be popularized in other parts of the NE region having similar topography, geological characteristics and micro-climate. Brahmaputra Board, being a basin management organization, would like to explore the possibility of popularizing traditional but good water management practices of the NE Region in association with the North Eastern Regional Institute of Water and Land Management (NERIWALM), Tezpur, which is also under the Ministry of Jal Shakti, Govt. of India. In this context, a meeting was held at Brahmaputra Board on 7\textsuperscript{th} August, 2019 for obtaining information on the methodologies and to work out a possible strategy for popularization of the traditional water management practices of the NE Region. A Pilot Project is now being taken up by Brahmaputra Board (BB) in association with NERIWALM as a 1\textsuperscript{st} Phase over three years. In the first phase of the pilot study, the traditional practices of the Chakhesang tribe of Phek district in Nagaland and the Bodo tribe of Baksa district in Assam, in addition to Ziro Valley, and one new area in Arunachal Pradesh other than Ziro Valley having similar geographic conditions, will be prioritized.
The approach may also include documentation of earlier work and present studies, Rapid Participatory Appraisal (format, field survey, data analysis and identification of gaps), organization of basic, design and lessons-learnt workshops, and participatory planning, participatory execution and evaluation in a Participatory Action Research mode. A Core Group was also constituted, with the Director, NERIWALM and the Secretary, BB as Co-chairmen, and with representatives from ICAR-RC, Umiam, Assam Agricultural University (AAU), Jorhat, Engineers & Consultant (F&P) of BB, Krishi Vigyan Kendra (KVK) of Lower Subansiri district, faculty members of NERIWALM, the State Nodal Officer of the Directorate of Agriculture and the Water Resources Department of Arunachal Pradesh, the Directorates of Horticulture and Agriculture of Nagaland, and an official to be nominated by the BTAD Authority, etc., as members, to work out the modalities for implementing this project.

### 10.6 Soft Measures for Flood and Erosion Management

Brahmaputra is a highly braided river, unstable in its entire reach in Assam, and causes large-scale erosion along its banks. Erosion has been the single most important issue in the Basins of the Brahmaputra and Barak, besides floods. The problem of flood and erosion has not been limited to the mighty river Brahmaputra but has also been a regular phenomenon in the case of the major tributaries of the Brahmaputra. To give focus to this aspect, Brahmaputra Board organized a seminar in February, 2019, in which 45 organizations related to the North Eastern Region participated. The seminar was inaugurated by the Principal Secretary to the Prime Minister, Dr. Pramod Kumar Mishra. How to manage soil erosion in the North Eastern Region was the theme of deliberation.

Despite the various studies, actions and deliberations, the phenomenon of flood and erosion has defied quick and effective solutions. The problem of flood and erosion has so far been sought to be contained primarily through structural measures, normally known as “Hard Measures”, which have provided a reasonable degree of protection. However, many a time, the efficacy of such structures has been questioned in different forums. It is now felt necessary to adopt/introduce vegetative or biological measures, which are being talked about as one of the viable solutions to the problem of soil erosion. The biological measures include plantation of different types of trees, shrubs, grasses and other vegetation as a long-term strategy for controlling river bank erosion. Considering their low cost, sustainability and acceptability from the environmental point of view, such biological measures are termed “Soft Measures”. It is felt that soft measures, either in isolation or in combination with structural measures depending upon the prevailing site conditions, may be good for containing the problems of bank erosion and may help in maintaining ecological balance. The option of combining both hard and soft measures, which may be termed the bio-engineering method, has been getting due importance in recent times and also needs to be tested for its effectiveness in controlling river bank erosion. Bio-engineering is defined as the use of structural elements in combination with biological elements or plants to prevent erosion. It is helpful in many ways, such as protecting surface soil from erosion by elements like wind and water, and also from other natural calamities. In the bio-engineering method, the use of locally available materials for structures and vegetation is normally given priority.
Brahmaputra Board has proposed to take up a pilot project in collaboration with IIT, Guwahati to ascertain the effectiveness of the bio-engineering model on the banks of the river Brahmaputra. Two identified sites, viz. (i) the right bank downstream of Kordoiguri of the river Brahmaputra and (ii) the right bank at Dakhinpat on Majuli Island, have been chosen for preparation of the pilot project.

### 10.7 Any other items with permission of Chairman

### 10.8 Deciding venue and date for next meeting

No.: BB/5334/2019/2864-2882

Ministry of Jal Shakti
Department of Water Resources, River Development and Ganga Rejuvenation
Brahmaputra Board
Basistha, Guwahati-29
Dated: October 14, 2019

To,
All Members of the High Powered Review Board of Brahmaputra Board
(As per list enclosed)

Sub: 10th meeting of the High Powered Review Board of Brahmaputra Board

Sir,

I am directed to inform that the 10th meeting of the High Powered Review Board of Brahmaputra Board is scheduled to be held under the Chairmanship of the Hon'ble Union Minister of Jal Shakti, Shri Gajendra Singh Shekhawat, at 11:30 hours on 8th November, 2019 at Guwahati. All members of the High Powered Review Board of Brahmaputra Board are requested to kindly make it convenient to attend the meeting. The Agenda Points for the meeting are enclosed for reference. The Agenda Note for the meeting is being sent separately.

Yours faithfully,
[Signature]
(Vishnu Dev Rai)
Secretary

Encl: As above

Copy for kind information to:
1. Hon'ble Minister of State for Development of North Eastern Region
2. Principal Private Secretary to the Vice-Chairman, Brahmaputra Board, Basistha, Guwahati-29
Selective inhibition of chymotrypsin-like activity of the immunoproteasome and constitutive proteasome in Waldenström macroglobulinemia

Aldo M. Roccaro,1 Antonio Sacco,1 Monette Aujay,2 Hai T. Ngo,1 Abdel Kareem Azab,1 Feda Azab,1 Phong Quang,1 Patricia Maiso,1 Judith Runnels,1 Kenneth C. Anderson,1 Susan Demo,2 and Irene M. Ghobrial1

1Medical Oncology, Dana-Farber Cancer Institute and Harvard Medical School, Boston, MA; and 2Onyx Pharmaceuticals, Emeryville, CA

Proteasome inhibition represents a valid antitumor approach, and its use has been validated in Waldenström macroglobulinemia (WM), where bortezomib has been successfully tested in clinical trials. Nevertheless, a significant fraction of patients relapses, and many present toxicity due to its off-target effects. Selective inhibition of the chymotrypsin-like (CT-L) activity of the constitutive proteasome 20S (c20S) and the immunoproteasome 20S (i20S) represents a sufficient and successful strategy to induce an antineoplastic effect in hematologic tumors. We therefore studied ONX0912, a novel selective, irreversible inhibitor of the CT-L activity of i20S and c20S. Primary WM cells express higher levels of i20S compared with c20S, and ONX0912 inhibited the CT-L activity of both i20S and c20S, leading to induction of toxicity in primary WM cells, as well as of apoptosis through c-Jun N-terminal kinase activation, nuclear factor κB (NF-κB) inhibition, caspase cleavage, and initiation of the unfolded protein response. Importantly, ONX0912 exerted toxicity in WM cells by reducing bone marrow (BM)–derived interleukin-6 (IL-6) and insulin-like growth factor 1 (IGF-1) secretion, thus inhibiting BM-induced p-Akt and phosphorylated extracellular signal-related kinase (p-ERK) activation in WM cells. These findings suggest that targeting i20S and c20S CT-L activity by ONX0912 represents a valid antitumor therapy in WM. (Blood. 2010;115(20):4051-4060)

Introduction

The multicatalytic ubiquitin-proteasome pathway plays an important role in the targeted degradation of a wide spectrum of proteins involved in the regulation of several cellular processes responsible for the maintenance of cellular homeostasis. The 26S constitutive proteasome has been identified in the majority of cell types. It consists of a 20S central core that exerts proteolytic activity, and two 19S particles that represent the regulatory part of the complex. The 19S particles include 6 adenosine triphosphatase subunits that are responsible for the denaturation of target proteins and for the delivery of the substrate into the proteolytic 20S core. The 20S core contains 3 different catalytic activities, known as chymotrypsin-like (CT-L), trypsin-like (T-L), and caspase-like (C-L), which are encoded by the β5, β2, and β1 subunits, respectively. Similarly, cells of hematopoietic origin express the immunoproteasome, which retains the structural subunits of the constitutive proteasome but exerts its enzymatic activities through the catalytic subunits low-molecular-mass polypeptide 7 (LMP7), multicatalytic endopeptidase complex-like 1 (MECL1), and LMP2, which form the 20Si core. Specific targets for proteasome degradation include several proteins involved in cell-cycle regulation, cell proliferation, programmed cell death, and stress response. These findings validate targeting the proteasome for cancer therapy.
Indeed, a wide spectrum of compounds, both natural and synthetic, has been identified as proteasome inhibitors, with bortezomib being the first proteasome inhibitor to enter clinical trials and receive approval for the treatment of patients with multiple myeloma (MM). A recent report has shown that B-cell malignancies are characterized by a preferential expression of the immunoproteasome 20S (i20S), and that a selective inhibition of the CT-L activity of the proteasome, both at the constitutive and immunoproteasome level, is sufficient to exert an antineoplastic effect in hematologic malignancies. In the present study, we show that primary Waldenström macroglobulinemia (WM) cells are characterized by higher expression of the i20S immunoproteasome subunits compared with constitutive proteasome 20S (c20S) subunits and that they contain a higher i20S content compared with normal CD19+ B cells. We therefore investigated for the first time the antitumor activity of the novel orally bioavailable and selective peptide epoxyketone proteasome inhibitor ONX0912 in WM. Our findings demonstrate that ONX0912 inhibits the chymotrypsin-like activity of both the immunoproteasome (LMP7) and the constitutive proteasome (β5) in WM cells, leading to induction of cytotoxicity in primary WM cells, as well as of programmed cell death in a caspase-dependent and -independent manner, as shown by activation of c-Jun N-terminal kinase, inhibition of nuclear factor κB (NF-κB), and initiation of the unfolded protein response. Importantly, ONX0912 exerted cytotoxicity in WM cells, even in the context of the bone marrow milieu. We also showed that the combination of ONX0912 and bortezomib acted synergistically in inhibiting the i20S and c20S CT-L activities, NF-κB activity, and caspase and poly(adenosine diphosphate-ribose) polymerase (PARP) cleavage, thus inducing synergistic cytotoxicity in WM cells. Taken together, these findings provide the preclinical rationale for testing ONX0912 in Waldenström macroglobulinemia.

Methods

Cells

Primary WM cells were obtained from bone marrow (BM) samples from previously treated WM patients using CD19+ microbead selection (Miltenyi Biotec) with more than 90% purity, as confirmed by flow cytometric analysis with monoclonal antibody reactive to human CD20-PE (BD Biosciences). WM and immunoglobulin M (IgM)–secreting low-grade lymphoma cell lines (BCWM.1, MEC.1, RL) were used in this study. Peripheral blood mononuclear cells (PBMCs) were obtained from healthy subjects by Ficoll-Hypaque density sedimentation, and CD19+ selection was performed as described. All cells were cultured at 37°C in RPMI-1640 containing 10% fetal bovine serum (Sigma Chemical), 2mM L-glutamine, 100 U/mL penicillin, and 100 μg/mL streptomycin (GIBCO). Approval for these studies was obtained from the Dana-Farber Cancer Institute Institutional Review Board. Informed consent was obtained from all patients and healthy volunteers in accordance with the Declaration of Helsinki protocol.

Reagents

ONX0912 (formerly PR-047) and proteasome active site binding protein (PABP) (both provided by Onyx Pharmaceuticals) were diluted in dimethyl sulfoxide, stored at 4°C, and diluted in culture medium immediately before use. The maximum final concentration of dimethyl sulfoxide (<0.1%) did not affect cell proliferation and did not induce cytotoxicity on the cell lines and primary cells tested (data not shown). Bortezomib was obtained from the hospital pharmacy. The c-Jun NH2-terminal kinase (JNK) inhibitor SP600125 was purchased from Calbiochem.
Salubrinal was purchased from Axxora. The pan-caspase inhibitor Z-VAD-fmk was purchased from Promega. Recombinant interleukin-6 (IL-6) and insulin-like growth factor 1 (IGF-1) were purchased from R&D Systems.

Growth inhibition assay

The inhibitory effect of ONX0912 on the growth of WM cells, IgM-secreting cell lines, and primary cells was assessed by measuring 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT; Chemicon International) dye absorbance, as previously described.

DNA synthesis

DNA synthesis was measured by [3H]-thymidine (Perkin Elmer) uptake, as previously described.

Flow cytometric analysis

Cell-cycle analysis was profiled by flow cytometry using propidium iodide staining (5 μg/mL; Sigma Chemical) after 24-hour culture with ONX0912, as described.

DNA fragmentation assay

Cell Death Detection enzyme-linked immunosorbent assay (ELISA; Roche Applied Science) was used to quantitate DNA fragmentation per the manufacturer’s instructions.

Immunoblotting

WM and IgM-secreting cell lines were harvested and lysed using lysis buffer (Cell Signaling Technology) reconstituted with 5mM NaF, 2mM Na3VO4, 1mM phenylmethylsulfonyl fluoride, 5 μg/mL leupeptin, and 5 μg/mL aprotinin. Whole-cell lysates (50 μg/lane) were subjected to sodium dodecyl sulfate–polyacrylamide gel electrophoresis and transferred to polyvinylidene fluoride membrane (Bio-Rad Laboratories). The antibodies used for immunoblotting included anti-phospho-Akt (p-Akt; Ser473), -Akt, -phosphorylated extracellular signal-related kinase 1/2 (p-ERK1/2; Thr202/Tyr204), -ERK1/2, -caspase-3, -caspase-8, -caspase-9, -PARP, -β-catenin, -glucose-regulated protein of 94 kDa (-GRP94), -protein kinase-like endoplasmic reticulum kinase (-PERK), -phosphorylated eukaryotic initiation factor 2α (-p-eIF2α), -binding immunoglobulin protein (-BiP), -protein disulfide isomerase (-PDI), -activating transcription factor 6 (-ATF6), -phosphorylated mitogen-activated protein kinase kinase 7 (-p-MKK7), -phosphorylated stress-activated protein kinase (p-SAPK)/JNK, -p27Kip1, -p21Cip1, -cyclin D1, -cyclin D2, -cyclin E, -nucleolin, -p-p65, -p65, -p105, -p50, -p100, -p52, -RelB, and -phosphorylated inhibitor of κB (p-IκB) (Cell Signaling Technology); and -α-tubulin and -β-actin antibodies (Santa Cruz Biotechnology).

IL-6 and IGF-1 detection

IL-6 and IGF-1 concentrations were quantified by ELISA (Quantikine human IL-6 ELISA; Quantikine human IGF-1 ELISA; R&D Systems) according to the manufacturer’s instructions.

Proteasome constitutive immunosubunit ELISA assay

The proteasome constitutive immunosubunit ELISA was performed as previously described. Briefly, human constitutive proteasome (c20S) and immunoproteasome 20S (i20S) subunits (Boston Biochemical); monoclonal antibodies anti-β1, -β2, -LMP7, -LMP2 (BioMol International), -MECL1 (Santa Cruz Biotechnology), and -β5 (Covance custom product); and horseradish peroxidase-conjugated antibodies (Jackson ImmunoResearch Laboratories and Zymed) were used. Baseline expression of each c20S and i20S subunit, and their modulation upon ONX0912 treatment, was tested on cell lysates prepared by incubating cell pellets in TE buffer (20mM tris(hydroxymethyl)aminomethane, pH 8.0, 5mM ethylenediaminetetraacetic acid). Cell lysates were then incubated with PABP (5μM) for 2 hours at 25°C. Samples were denatured with 8M guanidine hydrochloride (Fisher Scientific), and subunits bound to PABP were captured with streptavidin-conjugated sepharose beads (GE Healthcare).
Individual subunits were probed with antibodies specific to each subunit. Each subunit was measured as nanograms per microgram of total protein, using the SuperSignal ELISA Pico kit.

20S proteasome activity

The chymotrypsin-like activity of the 20S proteasome of primary WM tumor cells was determined by measurement of fluorescence generated from the cleavage of the fluorogenic substrate suc-LLVY-amc, as described.

NF-κB activity

NF-κB activity was investigated using the Active Motif TransAM kits, a DNA-binding ELISA-based assay (Active Motif North America). Briefly, BCWM.1 cells were treated with ONX0912 (10nM) or bortezomib (10nM), alone or in combination, for 4 hours, and stimulated with tumor necrosis factor-α (TNF-α, 10 ng/mL) during the last 20 minutes of culture. NF-κBp65 transcription factor binding to its consensus sequence on the plate-bound oligonucleotide was studied from nuclear extracts, following the manufacturer’s procedure, as described.

Effect of ONX0912 on paracrine WM cell growth in the BM

To evaluate growth stimulation and signaling in WM cells adherent to bone marrow stromal cells (BMSCs), $3 \times 10^4$ BCWM.1 cells were cultured in BMSC-coated 96-well plates for 48 hours in the presence or absence of ONX0912. DNA synthesis was measured as described.

Statistical analysis

Statistical significance of differences in drug-treated versus control cultures was determined using the Student t test. The minimal level of significance was a P value less than .05. Drug synergism was analyzed by isobologram analysis using the CalcuSyn software program (Biosoft), as described.

Results

WM primary cells are characterized by higher expression of the immunoproteasome

The 20S proteolytic cores of the constitutive proteasome and immunoproteasome have 3 different enzymatic activities: chymotrypsin-like (CT-L), trypsin-like (T-L), and caspase-like (C-L), which are encoded by the β5, β2, and β1 subunits, respectively, and by the LMP7, MECL1, and LMP2 subunits, respectively. We first examined the expression level of each 20S core subunit of the constitutive proteasome and immunoproteasome in primary bone marrow–derived CD19+ WM cells and in WM and low-grade lymphoma IgM-secreting cell lines. Peripheral blood–derived CD19+ cells were used as normal controls. We found that primary tumor CD19+ bone marrow–derived WM cells have significantly higher levels of the immunoproteasome compared with the constitutive proteasome (Figure 1A). Importantly, WM primary cells were characterized by significantly higher proteasome subunit expression compared with their normal cellular counterpart (Figure 1A). Similar results were confirmed in BCWM.1 cells, as well as in other low-grade lymphoma IgM-secreting cells, such as MEC.1 and RL (Figure 1B). We next evaluated the activity of ONX0912, a selective CT-L inhibitor, in targeting the CT-L activity in WM cells. Cells were treated with increasing concentrations of ONX0912 (2.5-50nM) for 2 hours, and exhibited significant inhibition of the CT-L subunits of both the constitutive proteasome (β5) and the immunoproteasome (LMP7) (Figure 1C) in a dose-dependent manner, with minimal inhibition of the trypsin-like (T-L) and caspase-like (C-L) activities (Figure 1D-E), suggesting that the selectivity of ONX0912 for the CT-L activity of the proteasome, together with its weak activity on other protease classes, may contribute to a better tolerability in vivo. Importantly, ONX0912-induced inhibition of the CT-L proteasome activity was confirmed in primary CD19+ WM cells (Figure 1F).
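Two numerical reductions recur throughout these Results: proteasome activity is expressed as a fraction of the untreated control, and drug-combination effects are summarized by CalcuSyn-style combination indices. The following is a minimal sketch of both calculations; all numeric values, variable names, and function names are hypothetical illustrations, not data or software from this study:

```python
# Sketch: fold-of-control activity and a Chou-Talalay combination index (CI).
# All numbers below are invented placeholders, not measurements from the paper.

def fold_of_control(treated_rfu: float, control_rfu: float, blank_rfu: float = 0.0) -> float:
    """Background-subtracted fluorescence of a treated sample relative to the
    untreated control, as used for fluorogenic substrate (e.g., suc-LLVY-amc) readouts."""
    return (treated_rfu - blank_rfu) / (control_rfu - blank_rfu)

def combination_index(d1: float, d2: float, dx1: float, dx2: float) -> float:
    """Chou-Talalay CI (mutually exclusive model):
    d1, d2   -- doses of drugs 1 and 2 used together to reach a given effect x
    dx1, dx2 -- doses of each drug alone producing the same effect x
    CI < 1 suggests synergy, CI ~ 1 additivity, CI > 1 antagonism."""
    return d1 / dx1 + d2 / dx2

if __name__ == "__main__":
    # Hypothetical relative fluorescence units for a CT-L activity well.
    print(f"CT-L activity: {fold_of_control(3200, 8000, 200):.2f} of control")
    # Hypothetical doses: 10nM of drug A plus 2.5nM of drug B reach the same
    # effect as 25nM of A alone or 8nM of B alone -> CI = 0.71 (synergy).
    print(f"CI: {combination_index(10, 2.5, 25, 8):.2f}")
```

Note that CalcuSyn derives the single-agent equivalent doses from a median-effect fit of each full dose-response curve; the sketch above simply takes those doses as given.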
**ONX0912 exerts antitumor activity in WM cells and other IgM-secreting low-grade lymphoma cells**

The efficacy of ONX0912-dependent proteasome inhibition in targeting clonal IgM-secreting cells was tested in primary WM CD19+ cells, normal PBMC-derived CD19+ cells, and WM and IgM low-grade lymphoma cell lines (BCWM.1, RL, MEC.1). We first evaluated the cytotoxic effect of ONX0912 (5-250nM) on primary WM bone marrow–derived CD19+ cells by MTT assay, and found that ONX0912 induced cytotoxicity in a dose-dependent manner (median inhibitory concentration [IC50]: 50-100nM; Figure 2A). Similar results were confirmed on WM and IgM-secreting low-grade lymphoma cell lines, where ONX0912 induced a dose-dependent cytotoxicity (IC50: 50nM at 48 hours; Figure 2B-C). In contrast, ONX0912 did not exert cytotoxicity on normal PBMC-derived CD19+ cells isolated from 4 healthy volunteers (Figure 2D). We have previously shown that ONX0912-dependent inhibition of CT-L activity occurs after 2 hours, whereas induction of cytotoxicity has been observed at 24 hours, with an increasing effect at 48 and 72 hours. Given the irreversible nature of the proteasome inhibition exerted by ONX0912 in WM and IgM-secreting low-grade lymphoma cell lines, this could potentially indicate that cell death is either a delayed effect of inhibited CT-L activity, or that it depends on other deferred mechanisms. We therefore performed a washout experiment, in which cells were treated with ONX0912 for 2 hours and subsequently washed and given fresh medium in the absence of ONX0912. We found that cytotoxicity was still induced, indicating that ONX0912-dependent cytotoxicity may result from both CT-L proteasome inhibition and delayed effects due to other ONX0912-induced mechanisms (Figure 2E). We next demonstrated that ONX0912 induced apoptosis in a dose-dependent manner, as assessed by DNA fragmentation (Figure 2F). Similar effects were obtained on other IgM-secreting low-grade lymphoma cell lines (RL, MEC.1; Figure 2F). We also examined the molecular mechanisms whereby ONX0912 induces cytotoxicity in WM, and demonstrated that ONX0912 induced caspase-8, -9, and -3 and PARP cleavage in a dose-dependent manner (Figure 2G).

It is known that proteasome inhibition eradicates tumor cells partly by initiating the unfolded protein response (UPR), a signaling cascade activated by the accumulation of misfolded proteins in the endoplasmic reticulum (ER).\textsuperscript{11} Previous reports indicate that induction of ER stress in WM cells may represent a valid therapeutic option in WM.\textsuperscript{12} We therefore sought to investigate the effect of ONX0912 in modulating the expression of UPR components in WM cells as one of the mechanisms of cytotoxicity. We found that ONX0912 induced accumulation of β-catenin, consistent with a previous report.\textsuperscript{13} In addition, ONX0912 induced up-regulation of UPR components such as GRP94 and PERK, followed by PERK-dependent phosphorylation of eIF2α. Consistent with terminal UPR induction by ONX0912, the ATF4 protein level was increased in ONX0912-treated WM cells. We observed an ONX0912-dependent down-modulation of PDI and BiP at 12 hours (Figure 2G), and hypothesized that early exposure (12 hours) induces down-modulation of BiP and PDI, leading to reduced cell survival and induction of apoptosis in the treated cells, whereas longer exposure (48 hours) could result in up-regulation of BiP and PDI due to induction of the UPR.
We therefore treated cells with ONX0912 (10-50nM) for 12 and 48 hours, and cell lysates were subjected to Western blot. We found down-modulation and up-regulation of PDI and BiP at 12 and 48 hours, respectively. In parallel, p-eIF2α protein expression increased upon ONX0912 treatment at both 12 and 48 hours (Figure 2H). These findings suggest that ONX0912 first induces down-modulation of BiP/PDI, resulting in reduced cell survival and induction of apoptosis, which may be independent of p-eIF2α modulation, whereas at longer treatment exposure, ONX0912 induces the UPR, as demonstrated by increased p-eIF2α together with up-regulation of BiP and PDI.

To better delineate the role played by caspases and ER stress in ONX0912-induced cytotoxicity, WM cells were exposed to the pan-caspase inhibitor Z-VAD-fmk (25-50μM), or the ER stress–induced apoptosis protector salubrinal (5-10μM), in the presence or absence of ONX0912 (25, 50, 100nM). We found that Z-VAD-fmk did not totally overcome ONX0912-induced cytotoxicity (supplemental Figure 1A, available on the Blood Web site; see the Supplemental Materials link at the top of the online article). Similar results were obtained in the presence of salubrinal (supplemental Figure 1B). We next tested the protective effect of combined Z-VAD-fmk and salubrinal in ONX0912-treated cells and found that this combination does not completely rescue cells from ONX0912-induced toxicity (supplemental Figure 1C), indicating that ONX0912 triggers apoptosis also through mechanisms other than caspase activation or ER stress modulation. We therefore sought to determine other regulators of ONX0912-induced cytotoxicity. We showed that ONX0912 treatment also triggered MKK7-induced c-Jun N-terminal kinase (JNK) activation in WM cells, as shown by up-regulation of phosphorylated MKK7 (p-MKK7) and p-JNK1/2 (Figure 3A). To better define the role of JNK activity in mediating ONX0912-induced WM cytotoxicity, WM cells were treated with ONX0912 in the presence or absence of the JNK inhibitor SP600125. ONX0912 (25-50nM)–induced cytotoxicity was inhibited upon SP600125 treatment (Figure 3B), together with an inhibition of ONX0912-dependent caspase-3, -8, and -9 and PARP cleavage (Figure 3C).

**ONX0912 inhibits proliferation in WM and IgM-secreting low-grade lymphoma cells**

We next examined the effect of ONX0912 on WM cell proliferation and cell-cycle progression. WM and IgM-secreting cell lines were cultured for 24, 48, or 72 hours in the presence of ONX0912 (1-500nM). ONX0912 inhibited BCWM.1 proliferation in a dose-dependent manner, as measured by [\(^{3}\)H]-thymidine uptake assay, with an IC\(_{50}\) of 37.5nM at 48 hours (Figure 4A). ONX0912 demonstrated similar activity on all cell lines tested, with an IC\(_{50}\) of 50nM at 48 hours (Figure 4B). The effect of ONX0912 in modulating cell-cycle progression was evaluated using propidium iodide staining and flow cytometric analysis in WM cells cultured in the absence or presence of ONX0912 (10, 20, 50nM). We found that ONX0912, in a dose-dependent manner, induced G\(_1\) cell-cycle arrest and a concomitant reduction of cells in S phase. Specifically, G\(_1\)-phase cells increased from 51% in the untreated setting to 53%, 60%, and 78% in those cells exposed to ONX0912 10, 20, and 50nM, respectively. Similarly, S-phase cells decreased from 35% in the control to 33%, 28%, and 14% in those cells exposed to ONX0912 10, 20, and 50nM, respectively (Figure 4C).
ONX0912-induced G\(_1\)/S-phase transition arrest was supported by the down-regulation of positive cell-cycle regulators, such as cyclin D1, cyclin D2, and cyclin E, and by the up-regulation of negative cell-cycle regulators, such as p21\textsuperscript{waf1/cip1} and p27\textsuperscript{kip1} (Figure 4D).

---

**Figure 3. ONX0912-induced apoptosis is partially mediated by activation of JNK.** (A) BCWM.1 cells were cultured with ONX0912 (2.5-50nM) for 12 hours. Whole-cell lysates were subjected to Western blotting using anti-p-MKK7, –p-SAPK/JNK, and -actin antibodies. (B) BCWM.1 cells were cultured with ONX0912 (20nM, 50nM) in the presence or absence of the JNK inhibitor SP600125 (10µM), and cytotoxicity was assessed by MTT assay. (C) BCWM.1 cells were cultured with ONX0912 (20nM, 50nM), in the presence or absence of SP600125 (10µM), for 12 hours. Whole-cell lysates were subjected to Western blotting using anti-p-SAPK/JNK, -PARP, –caspase-9, –caspase-3, –caspase-8, and –β-actin antibodies. In all panels, error bars represent SD.

---

**Figure 4. ONX0912 exerts antiproliferative effects on primary WM cells as well as on IgM-secreting low-grade lymphoma cells.** (A-B) DNA synthesis was measured by thymidine uptake assay in BCWM.1 (A) and the IgM-secreting cell lines RL and MEC.1 (B), treated with ONX0912 (1-500nM) for 24, 48, and 72 hours (A) or for 48 hours (B). (C) BCWM.1 cells were treated with ONX0912 (10-50nM) for 24 hours, and cell-cycle profiling was performed by propidium iodide staining and flow cytometric analysis. (D) BCWM.1 cells were cultured with ONX0912 (10, 20, 50nM) for 12 hours. Whole-cell lysates were subjected to Western blotting using anti-cyclin D1, –cyclin D2, –cyclin E, -p21\textsuperscript{waf1/cip1}, -p27\textsuperscript{kip1}, and –β-actin antibodies.

---

**ONX0912 inhibits the canonical and noncanonical NF-κB pathways**

The NF-κB pathway plays a pivotal role in regulating growth and survival of plasma cell malignancies, and inhibition of NF-κB represents one of the mechanisms of action of proteasome inhibitors.\textsuperscript{14} We therefore sought to investigate whether ONX0912 could target this pathway. We first investigated the effect of ONX0912 on p65 NF-κB DNA-binding activity, studying nuclear extracts from treated cells using the Active Motif assay. We showed that TNF-α treatment induced NF-κB recruitment to the nucleus in BCWM.1 cells, which was inhibited by ONX0912 in a dose-dependent manner (Figure 5A). Moreover, immunoblotting of nuclear extracts demonstrated that p65 phosphorylation and p50/p105 NF-κB expression were inhibited by ONX0912 (Figure 5B). We next examined whether ONX0912 could target the noncanonical NF-κB pathway. Immunoblotting of nuclear extracts showed that ONX0912 inhibited the expression of p100/p52 and RelB, which are mostly activated through the noncanonical pathway (Figure 5B).\textsuperscript{15} We next investigated the effect of ONX0912 on the expression of the NF-κB negative regulator IκB in the cytoplasmic compartment, and found that ONX0912 up-regulated its expression (Figure 5C). Taken together, these data demonstrate that ONX0912 regulates both the canonical and noncanonical NF-κB pathways in WM cells.
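The IC50 values quoted in these Results come from dose-response readouts (MTT viability, thymidine uptake). As a rough illustration of how such an IC50 might be extracted from raw dose-response data, the following sketch fits a four-parameter logistic (Hill) curve with SciPy; the dose and viability numbers are invented for illustration and do not reproduce any figure in the paper:

```python
# Sketch: estimating an IC50 from dose-response data with a 4-parameter logistic fit.
# The data points below are invented placeholders for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, bottom, top, ic50, hill):
    """Four-parameter logistic: response falls from `top` to `bottom` around `ic50`."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical doses (nM) and viability as fraction of untreated control.
doses = np.array([5, 10, 25, 50, 100, 250], dtype=float)
viability = np.array([0.97, 0.90, 0.72, 0.48, 0.28, 0.12])

# Initial guesses: full response span, IC50 near the middle dose, Hill slope ~1.
params, _ = curve_fit(four_pl, doses, viability, p0=[0.0, 1.0, 50.0, 1.0])
bottom, top, ic50, hill = params
print(f"Fitted IC50 ~ {ic50:.1f} nM (Hill slope {hill:.2f})")
```

Dedicated dose-response packages perform essentially this fit with added weighting and confidence-interval machinery; the sketch only conveys the arithmetic behind an "IC50 of ~50nM at 48 hours" statement.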
**Effect of ONX0912 and bortezomib in inducing WM cell cytotoxicity**

ONX0912 exerts a different inhibitory profile on proteasome activities compared with bortezomib.\textsuperscript{16} Previous reports indicate that bortezomib targets mainly the CT-L activity and, to a lesser degree, the C-L activity,\textsuperscript{17} whereas ONX0912 affects mainly the CT-L activity;\textsuperscript{16} the effect of ONX0912 and bortezomib in inhibiting the immunoproteasome and constitutive proteasome activities, however, has not been evaluated. Moreover, previous reports demonstrate synergism between different classes of proteasome inhibitors.\textsuperscript{17} We therefore sought to determine the effect of ONX0912 and bortezomib, used either as single agents or in combination, in targeting the CT-L activity of both the constitutive proteasome and the immunoproteasome. WM cells were treated with ONX0912 (10-20nM) and bortezomib (2.5-5nM), alone or in combination, for 2 hours. ONX0912 showed a significant increase in inhibition of the CT-L activity of both the i20S (LMP7; Figure 6A) and the c20S (β5; Figure 6B) when combined with bortezomib in WM cells. Specifically, ONX0912 (10nM) induced inhibition of the LMP7 activity in 40% of the treated cells, which was increased to 62% and 73% in the presence of bortezomib at 2.5nM (combination index [CI], 0.870) and 5nM (CI, 0.812), respectively, indicating an additive effect (Figure 6A). A similar effect was also observed using ONX0912 and bortezomib in targeting the β5 activity (Figure 6B). We next investigated whether the ONX0912/bortezomib-dependent effect on targeting the CT-L activity could lead to either additive or synergistic induction of cytotoxicity in WM cells. BCWM.1 cells were cultured with ONX0912 (20-50nM) for 48 hours, in the presence or absence of bortezomib (5-10nM). ONX0912 showed significant cytotoxic effects when combined with bortezomib, as demonstrated by MTT assays at 48 hours (Figure 6C). ONX0912 (20nM) induced cytotoxicity in 15.4% of BCWM.1 cells, which was increased to 52.1% and 78.8% in the presence of bortezomib at 5nM (CI, 0.75) and 10nM (CI, 0.58), respectively, indicating additive and synergistic effects, respectively. Similar results were observed when ONX0912 50nM was tested in the presence of bortezomib 5nM and 10nM, with CIs of 0.91 and 0.32, respectively. Isobologram analysis, fractions affected, and the combination indices for each of these combinations are summarized in Figure 6D. To better define the mechanisms of combined ONX0912 plus bortezomib-induced WM cytotoxicity, we investigated the effect of ONX0912 (20-50nM), either alone or in combination with bortezomib 10nM, using immunoblotting after 12 hours of treatment. Interestingly, we demonstrated that PARP and caspase-9, -3, and -8 cleavage were significantly higher using the combination compared with each agent alone (Figure 6E). Finally, we sought to investigate whether the combination of the 2 proteasome inhibitors would lead to synergistic modulation of the NF-κB pathway. We first investigated the effect of ONX0912, either alone or in combination with bortezomib, on NF-κBp65 DNA-binding activity, studying nuclear extracts from treated cells using the Active Motif assay. We showed that TNF-α treatment induced NF-κB recruitment to the nucleus in BCWM.1 cells, which was inhibited to a greater extent by ONX0912 than by bortezomib, and more significantly by the combination of the 2 proteasome inhibitors (Figure 6F).
Moreover, immunoblotting of nuclear extracts demonstrated that p65 phosphorylation was inhibited by ONX0912, either alone or in combination with bortezomib, more than by bortezomib as a single agent (Figure 6G). Next, proteins isolated from the cytoplasmic compartment were examined, and we found that the NF-κB inhibitory protein IκB was more significantly increased upon ONX0912 plus bortezomib than upon single-agent exposure (Figure 6G).

Figure 6. Mechanisms whereby ONX0912/bortezomib combination enhances WM cell cytotoxicity. (A-B) BCWM.1 cells were treated with ONX0912 (10nM, 20nM) in the presence or absence of bortezomib (2.5nM, 5nM) for 2 hours, and effects on the chymotrypsin-like (CT-L) activity of the immunoproteasome (LMP7; A) and constitutive proteasome (β5; B) were evaluated by ELISA on protein lysates. Proteasome activity is expressed as fold of control (untreated cells). CalcuSyn software was used to determine the presence or absence of synergism between ONX0912 and bortezomib in targeting the CT-L enzymatic activities. Combination indices (CIs) and fractions affected (FAs) of the combination of ONX0912 and bortezomib, and isobolograms, are shown below each panel. All experiments were repeated in triplicate. (C) BCWM.1 cells were cultured with ONX0912 (20nM, 50nM) for 48 hours, in the presence or absence of bortezomib (5nM, 10nM). Cytotoxicity was assessed by MTT assay. (D) Representative isobologram of ONX0912 and bortezomib, generated with the CalcuSyn software, demonstrating synergy for the combination. Combination indices (CIs) and fractions affected (FAs) of the combinations of ONX0912 and bortezomib are shown. All experiments were repeated in triplicate. (E) BCWM.1 cells were cultured with ONX0912 (20nM, 50nM) in the presence or absence of bortezomib (10nM) for 12 hours. Whole-cell lysates were subjected to Western blotting using anti-PARP, –caspase-9, –caspase-3, –caspase-8, and –β-actin antibodies. (F) BCWM.1 cells were cultured with either ONX0912 (20nM), bortezomib (10nM), or the combination for 4 hours, and then TNF-α (10 ng/mL) was added for the last 20 minutes. NF-κBp65 transcription factor binding to its consensus sequence on the plate-bound oligonucleotide was studied from nuclear extracts. Wild-type and mutant are wild-type and mutated consensus competitor oligonucleotides, respectively. All results represent means ± SD of triplicate experiments. (G) BCWM.1 cells were cultured with either ONX0912 (20nM), bortezomib (10nM), or the combination for 4 hours, and TNF-α (10 ng/mL) was added for the last 20 minutes. Cytoplasmic and nuclear extracts were subjected to Western blotting using anti–p-NF-κBp65, –NF-κBp100, -nucleolin, –p-IκB, and –α-tubulin antibodies.

**ONX0912 targets WM cells in the context of bone marrow milieu**

Because the BM microenvironment confers growth and induces drug resistance in malignant cells, we investigated whether ONX0912 inhibits WM cell growth in the context of the BM milieu. BCWM.1 cells were cultured with ONX0912 (2.5–50nM) in the presence or absence of BMSCs for 48 hours. The viability of BMSCs, assessed by MTT, was not affected by ONX0912 treatment (data not shown). Using the [3H]-thymidine uptake assay, adherence of BCWM.1 cells to BMSCs triggered a 37% increase in proliferation, which was inhibited by ONX0912 in a dose-dependent manner (Figure 7A). The phosphoinositide-3 kinase (PI3K)/Akt pathway is implicated in promoting growth and survival of tumor B cells, including WM. We therefore examined the effect of ONX0912 on Akt activation in WM cells.
Previous reports have demonstrated that bortezomib induces up-regulation of p-Akt in MM and WM cells; similarly, ONX0912 up-regulated p-Akt and p-ERK in treated cells (data not shown). Recent reports have indicated that inhibition of the chymotrypsin-like activity of the immunoproteasome (LMP7) inhibits cytokine release. We therefore tested the activity of ONX0912 in modulating the p-Akt and p-ERK signaling pathways in WM cells in the context of the BM milieu by determining the effect of ONX0912 on IL-6 and IGF-1 secretion from primary BMSCs, and found that ONX0912 reduced secretion of both IL-6 and IGF-1 from the BM milieu in a dose-dependent manner (Figure 7B-C). Because IL-6 and IGF-1 are known to induce Akt and ERK phosphorylation, we next investigated whether ONX0912 could target p-Akt and p-ERK in WM cells in the context of the BM microenvironment, as a result of ONX0912-reduced IL-6 and IGF-1 secretion from the BM milieu. BCWM.1 cells were treated with increasing doses of ONX0912 (2.5–50nM) for 6 hours, in the presence or absence of primary BMSCs. The adherence of BCWM.1 cells to BMSCs induced Akt and ERK phosphorylation in BCWM.1 cells, which was inhibited by ONX0912 in a dose-dependent manner (Figure 7D). These data indicate that ONX0912 may trigger significant antitumor activity against WM cells, even in the presence of the BM milieu. We next measured the efficacy of ONX0912 in inhibiting IL-6 and IGF-1 secretion from primary WM BMSCs, using neutralizing IL-6 and IGF-1 antibodies as controls. We demonstrated that ONX0912-induced inhibition of IL-6 and IGF-1 in BMSC-conditioned medium was comparable with the effect obtained using anti–IL-6– and anti–IGF-1–neutralizing antibodies (Figure 7E-F). We next sought to test the efficacy of ONX0912 in overcoming BMSC-induced growth, compared with the effect obtained with anti–IL-6– and anti–IGF-1–neutralizing antibodies, and found that ONX0912 inhibited BMSC-induced WM cell growth as effectively as the anti–IL-6– and anti–IGF-1–neutralizing antibodies (Figure 7G), suggesting that ONX0912 may trigger significant antitumor activity against WM cells in the presence of the BM milieu due to its inhibitory effect on IL-6 and IGF-1 secretion from the BM microenvironment. Moreover, addition of exogenous IL-6 and IGF-1 partially blocked ONX0912-dependent cell growth inhibition (Figure 7G). No anti–IL-6– or anti–IGF-1–induced cytotoxicity on BMSCs was observed (data not shown).

Discussion

WM is characterized by the presence of lymphoplasmacytic cells in the bone marrow (BM) and the secretion of IgM monoclonal protein in the serum, indicating that WM cells present a high protein turnover.\textsuperscript{24,25} Protein metabolism is a tightly regulated process, and inhibition of protein turnover may lead to apoptosis in malignant cells, as occurs with proteasome inhibitors.\textsuperscript{2,26} One of the most extensively studied proteasome inhibitors is bortezomib (Millennium Inc). Bortezomib reversibly inhibits the ubiquitin–26S proteasome pathway, which regulates the turnover of a vast number of intracellular proteins and has become an exciting target in a variety of malignancies, most notably multiple myeloma.\textsuperscript{27} The proper functioning of this system is crucial for cell-cycle regulation, gene transcription, and signal transduction.
Based on its activity in multiple myeloma, single-agent bortezomib was tested in WM in phase 2 trials and achieved 40% to 80% responses.\textsuperscript{28} Nevertheless, a significant number of patients develop resistance to therapy or have neurologic toxicity due to its inhibition of nonproteasome targets;\textsuperscript{28} therefore, preclinical evaluation of new proteasome inhibitors is needed to improve patient outcome. Subsequently, a new irreversible, parenterally administered, peptide epoxyketone proteasome inhibitor, carfilzomib, was developed. Its antitumor activity has been demonstrated \textit{in vitro} in MM,\textsuperscript{29} and it has shown promising activity in a phase 2 clinical trial in patients with relapsed refractory MM.\textsuperscript{30} Recently, ONX0912, a new orally bioavailable analog of carfilzomib with a selective inhibitory effect on the CT-L activity of both the immunoproteasome and the constitutive proteasome, has been developed to improve dosing flexibility and patient convenience over intravenously administered agents.\textsuperscript{16} Importantly, it has recently been demonstrated that selective and specific dual inhibition of the CT-L activity of the i20S (LMP7) and c20S (β5) represents a sufficient and successful strategy to induce an antineoplastic effect in hematologic tumors, as shown using carfilzomib in T-cell leukemia, Burkitt lymphoma, and multiple myeloma, without causing cytotoxicity in nontransformed cells.\textsuperscript{8} In contrast, inhibition of all the proteasome enzymatic activities induces cytotoxicity in nontransformed cells, which may be responsible for a significant induction of peripheral neuropathy as well as other toxicities. Therefore, new proteasome inhibitors with selective CT-L inhibitory activity, such as ONX0912, have been developed.\textsuperscript{16} Preclinical pharmacology and in vitro characterization of ONX0912 have demonstrated a favorable toxicologic profile and an irreversible dose-dependent inhibition of the CT-L activity of the 20S and 20Si, with more than 80% proteasome inhibition in most tissues at ONX0912 doses 4- to 10-fold below the maximum tolerated dose.\textsuperscript{16}

In the present studies, we first characterized the distribution of the i20S and c20S subunits in WM primary cells and in their normal cellular counterpart; primary cells express a significantly higher level of i20S subunits compared with c20S, and the expression levels of the i20S and c20S components are significantly higher than in normal cells. We next evaluated for the first time the antitumor activity of ONX0912 in WM; it inhibits the CT-L activity of both i20S (LMP7) and c20S (β5), which are both significantly higher in WM cells compared with normal cells, and it resulted in inhibition of proliferation and induction of cytotoxicity in WM cells by inhibiting cell-cycle progression and inducing apoptosis in a caspase-dependent and -independent manner, as evidenced by activation of c-Jun N-terminal kinase, inhibition of NF-κB, and initiation of the unfolded protein response.

The unfolded protein response (UPR) represents an adaptive mechanism that cells adopt under physiologic conditions, whereby accumulation of misfolded proteins in the endoplasmic reticulum (ER) activates a program that promotes cell survival.\textsuperscript{31,32} In contrast, prolonged ER stress, as occurs with proteasome inhibition,\textsuperscript{31,32} may override the prosurvival mechanisms of the initiated UPR, leading to apoptosis.
In the present study, we found that ONX0912 induced down-modulation of BiP and PDI, leading to reduced cell survival and induction of apoptosis in the treated cells; however, longer exposure could exert a protective effect in treated cells. This could potentially represent a mechanism of resistance to ONX0912 treatment; nevertheless, we observed that ONX0912 induced up-regulation of p-eIF2α even at a late time point, and maintenance of eIF2α phosphorylation has been reported to maximize the efficacy of proteasome inhibition.\textsuperscript{33}

It has been previously reported that bortezomib-induced proteasome inhibition results in up-regulation of AKT phosphorylation.\textsuperscript{10,21} Importantly, we found that ONX0912 inhibited bone marrow stromal cell–induced phosphorylation of AKT and ERK in WM cells, indicating that ONX0912 may trigger significant antitumor activity against WM cells, even in the presence of the BM milieu, by reducing the BM paracrine growth of WM cells. Because we observed that ONX0912 does not target p-Akt and p-ERK in WM cells in the absence of bone marrow stromal cells, we hypothesize that the efficacy of ONX0912 in reducing BMSC-induced up-regulation of the Akt and ERK signaling cascades in WM cells may result from the inhibition of BMSC-derived cytokines such as IL-6 and IGF-1, which are known activators of both the PI3K/Akt and MAPK/ERK signaling pathways.\textsuperscript{22}

These preclinical findings demonstrate that ONX0912 targets WM cells through its anti-CT-L activity against both the immunoproteasome and the constitutive proteasome, providing the framework for testing this novel irreversible CT-L inhibitor in this disease.

Acknowledgments

We thank Ms Jennifer Stedman for reviewing the paper. This study was supported in part by R21 IR2ICA126119-01 and the International Waldenström Macroglobulinemia Foundation (IWMF). This work was supported by the Michelle and Steven Kirsch laboratory for Waldenström and the Heje fellowship for Waldenström.

Authorship

Contribution: A.M.R. and I.M.G. designed the research; A.M.R., K.C.A., and I.M.G. wrote the paper; A.M.R., A.S., M.A., H.T.N., F.A., A.K.A., P.Q., P.M., and J.R. performed research; and A.M.R., M.A., S.D., and I.M.G. analyzed the data.

Conflict-of-interest disclosure: M.A. and S.D. are employed by Onyx Pharmaceuticals. K.C.A. is a member of the Speakers Bureau and has received honoraria and research funding from Millennium, Celgene, and Novartis. I.M.G. is a member of the Speakers Bureau and has received honoraria from Millennium, Celgene, and Novartis, is on the Ad Board for Celgene, and has received research funding from Millennium. A.M.R., A.S., H.T.N., A.K.A., F.A., P.Q., P.M., and J.R. declare no competing financial interests.

Correspondence: Irene M. Ghobrial, Medical Oncology, Dana-Farber Cancer Institute, 44 Binney St, Mayer 548A, Boston, MA, 02115; e-mail: firstname.lastname@example.org.

References

1. Ciechanover A. Intracellular protein degradation: from a vague idea thru the lysosome and the ubiquitin-proteasome system and onto human diseases and drug targeting. \textit{Hematol Am Soc Hematol Educ Program}. 2006;1:1-12.
2. Adams J. The proteasome: structure, function, and role in the cell. \textit{Cancer Treat Rev}. 2003;29(suppl 1):3-9.
3. Glynne R, Powis SH, Beck S, et al. A proteasome-related gene between the two ABC transporter loci in the class II region of the human MHC. \textit{Nature}. 1991;353(6342):357-360.
4. Martinez CK, Monaco JJ.
Homology of proteasome subunits to a major histocompatibility complex-linked LMP gene. \textit{Nature}. 1991;353(6345):664-667.
5. Nandi D, Jiang H, Monaco JJ. Identification of MECL-1 (LMP-10) as the third IFN-gamma-inducible proteasome subunit. \textit{J Immunol}. 1996;156(7):2361-2364.
6. Kisselev AF, Goldberg AL. Proteasome inhibitors: from research tools to drug candidates. \textit{Chem Biol}. 2001;8(8):739-758.
7. Richardson PG, Sonneveld P, Schuster MW, et al. Bortezomib or high-dose dexamethasone for relapsed multiple myeloma. \textit{N Engl J Med}. 2005;352(24):2487-2498.
8. Parlati F, Lee SJ, Aujay M, et al. Carfilzomib can induce tumor cell death through selective inhibition of the chymotrypsin-like activity of the proteasome. \textit{Blood}. 2009;114(16):3439-3447.
9. Moreau AS, Jia X, Ngo HT, et al. Protein kinase C inhibitor enzastaurin induces in vitro and in vivo antitumor activity in Waldenstrom macroglobulinemia. \textit{Blood}. 2007;109(11):4964-4972.
10. Roccaro AM, Leleu X, Sacco A, et al. Dual targeting of the proteasome regulates survival and homing in Waldenstrom macroglobulinemia. \textit{Blood}. 2008;111(9):4752-4763.
11. Obeng EA, Carlson LM, Gutman DM, Harrington WJ Jr, Lee KP, Boise LH. Proteasome inhibitors induce a terminal unfolded protein response in multiple myeloma cells. \textit{Blood}. 2006;107(12):4907-4916.
12. Leleu X, Xu L, Jia X, et al. Endoplasmic reticulum stress is a target for therapy in Waldenstrom macroglobulinemia. \textit{Blood}. 2009;113(3):626-634.
13. Aberle H, Bauer A, Stappert J, Kispert A, Kemler R. beta-catenin is a target for the ubiquitin-proteasome pathway. \textit{EMBO J}. 1997;16(13):3797-3804.
14. Hideshima T, Chauhan D, Richardson P, et al. NF-κB as a therapeutic target in multiple myeloma. \textit{J Biol Chem}. 2002;277(19):16639-16647.
15. Monaco C, Andreakos E, Kiriakidis S, et al. Canonical pathway of nuclear factor κB activation selectively regulates proinflammatory and prothrombotic responses in human atherosclerosis. \textit{Proc Natl Acad Sci U S A}. 2004;101(15):5634-5639.
16. Zhou HJ, Aujay MA, Bennett MK, et al. Design and synthesis of an orally bioavailable and selective peptide epoxyketone proteasome inhibitor (PR-047). \textit{J Med Chem}. 2009;52(9):3028-3038.
17. Chauhan D, Singh A, Brahmandam M, et al. Combination of proteasome inhibitors bortezomib and NPI-0052 trigger in vivo synergistic cytotoxicity in multiple myeloma. \textit{Blood}. 2008;111(3):1654-1664.
18. Mitsiades CS, Mitsiades NS, Munshi NC, Richardson PG, Anderson KC. The role of the bone microenvironment in the pathophysiology and therapeutic management of multiple myeloma: interplay of growth factors, their receptors and stromal interactions. \textit{Eur J Cancer}. 2006;42(11):1564-1573.
19. Uddin S, Hussain AR, Siraj AK, et al. Role of phosphatidylinositol 3'-kinase/AKT pathway in diffuse large B-cell lymphoma survival. \textit{Blood}. 2006;108:4178-4186.
20. Leleu X, Jia X, Runnels J, et al. The Akt pathway regulates survival and homing in Waldenstrom macroglobulinemia. \textit{Blood}. 2007;110(13):4417-4426.
21. Hideshima T, Catley L, Yasui H, et al. Perifosine, an oral bioactive novel alkylphospholipid, inhibits Akt and induces in vitro and in vivo cytotoxicity in human multiple myeloma cells. \textit{Blood}. 2006;107(10):4053-4062.
22. Muchamuel T, Basler M, Aujay MA, et al. A selective inhibitor of the immunoproteasome subunit LMP7 blocks cytokine production and attenuates progression of experimental arthritis. \textit{Nat Med}. 2009;15(7):781-787.
23.
Qiang YW, Kopantzev E, Rudikoff S, et al. Insulin-like growth factor-1 signaling in multiple myeloma: downstream elements, functional correlates, and pathway cross-talk. \textit{Blood}. 2002;99(11):4138-4146.
24. Ghobrial IM, Gertz MA, Fonseca R. Waldenstrom macroglobulinaemia. \textit{Lancet Oncol}. 2003;4(11):679-685.
25. Owen RG, Treon SP, Al-Katib A, et al. Clinicopathological definition of Waldenström’s macroglobulinemia: consensus panel recommendations from the Second International Workshop on Waldenström’s Macroglobulinemia. \textit{Semin Oncol}. 2003;30(2):110-115.
26. Adams J. The development of proteasome inhibitors as anticancer drugs. \textit{Cancer Cell}. 2004;5(5):417-421.
27. Hideshima T, Mitsiades C, Akiyama M, et al. Molecular mechanisms mediating antitumour activity of proteasome inhibitor PS-341. \textit{Blood}. 2003;101(4):1530-1534.
28. Treon SP, Hunter ZR, Matous J, et al. Multicenter clinical trial of bortezomib in relapsed/refractory Waldenström’s macroglobulinemia: results of WMCTG trial 03-248. \textit{Clin Cancer Res}. 2007;13(11):3320-3325.
29. Kuhn DJ, Chen Q, Voorhees PM, et al. Potent activity of carfilzomib, a novel, irreversible inhibitor of the ubiquitin-proteasome pathway, against preclinical models of multiple myeloma. \textit{Blood}. 2007;110(9):3281-3290.
30. Jagannath S, Vij R, Stewart K, et al. Final results of PX-171-003-A0, part 1 of an open-label, single-arm, phase II study of carfilzomib (CFZ) in patients (pts) with relapsed and refractory multiple myeloma (MM) [abstract]. \textit{J Clin Oncol}. 2009;27:15s. Abstract 8504.
31. Fribley A, Zeng Q, Wang CY. Proteasome inhibitor PS-341 induces apoptosis through induction of endoplasmic reticulum stress-reactive oxygen species in head and neck squamous cell carcinoma cells. \textit{Mol Cell Biol}. 2004;24(22):9695-9704.
32. Obeng EA, Carlson LM, Gutman DM, Harrington WJ Jr, Lee KP, Boise LH. Proteasome inhibitors induce a terminal unfolded protein response in multiple myeloma cells. \textit{Blood}. 2006;107(12):4907-4916.
33. Schewe DM, Aguirre-Ghiso JA. Inhibition of eIF2alpha dephosphorylation maximizes bortezomib efficiency and eliminates quiescent multiple myeloma cells surviving proteasome inhibitor therapy. \textit{Cancer Res}. 2009;69(4):1545-1552.
The Board of Directors (the "Board") of Montgomery County Municipal Utility District No. 89 (the "District") met in regular session, open to the public, on the 3rd day of July, 2019, at Allen Boone Humphries Robinson LLP, 3200 Southwest Freeway, Houston, Texas 77027, outside the boundaries of the District, and the roll was called of the duly appointed members of the Board, to-wit:

| Name | Position |
|-----------------------|---------------------------------|
| Paul Cote | President |
| Robert Veasey, III | Vice President |
| Bredawn Riley | Secretary |
| Shawn Goodman | Assistant Vice President |
| Vacant | Assistant Secretary |

and all of the above were present, thus constituting a quorum. Also present at the meeting were John Flippo of Cross Development; Shara Cote, resident of the District; Justin Abshire of Jones & Carter, Inc. ("J&C"); Erin Garcia of Myrtle Cruz, Inc. ("Myrtle Cruz"); and Katie Sherborne and Holly Huston of Allen Boone Humphries Robinson LLP ("ABHR").

**APPOINTMENT OF NEW DIRECTOR**

The Board considered appointing a new director. After discussion, Director Goodman moved to appoint Ben Slotnick to fill the vacancy on the Board. Director Riley seconded the motion, which carried by unanimous vote.

**APPROVE SWORN STATEMENT, OFFICIAL BOND, AND OATH OF OFFICE**

The Board next considered approving the Sworn Statement, Oath of Office, and Official Bond for Benjamin Slotnick. After discussion, Director Goodman moved to approve the Sworn Statement, Official Bond, and Oath of Office for Director Slotnick. Director Riley seconded the motion, which was approved by unanimous vote.

**REORGANIZE THE BOARD AND AUTHORIZE EXECUTION OF DISTRICT REGISTRATION FORM**

The Board reviewed the current Board of Directors organization and officers. Following discussion, Director Goodman moved to approve the officers' positions as follows: Director Riley seconded the motion, which carried unanimously.

**AUTHORIZE FILING OF DISTRICT REGISTRATION FORM WITH TEXAS COMMISSION ON ENVIRONMENTAL QUALITY ("TCEQ")**

The Board then considered authorizing execution of a District Registration Form. Ms. Sherborne stated that a revised District Registration Form must be executed and submitted to the TCEQ identifying the new director's term and office. After review and discussion, Director Goodman moved to authorize execution of the District Registration Form and direct that the Registration Form be filed appropriately and retained in the District's official records. Director Riley seconded the motion, which carried unanimously.

**DISCUSS OPEN MEETINGS ACT AND PUBLIC INFORMATION ACT TRAINING REQUIREMENTS**

Ms. Sherborne reviewed a memorandum from ABHR regarding Open Meetings Act training requirements, a copy of which is attached. Ms. Sherborne said that the Texas Legislature requires each elected or appointed public official to complete a course of training regarding the responsibilities of the governmental body and its members under the Texas Open Meetings Act within ninety days of receiving the appointment.

**CONFLICT OF INTEREST DISCLOSURE REQUIRED UNDER CHAPTER 176 OF THE TEXAS LOCAL GOVERNMENT CODE AND LIST OF LOCAL GOVERNMENT OFFICERS**

Ms. Sherborne next reviewed Chapter 176 of the Texas Local Government Code, which requires directors and consultants to disclose certain conflicts of interest.
She reviewed the forms adopted by the Texas Ethics Commission for making disclosures under Chapter 176 and noted that the forms are required to be filed with the records administrator for the District within seven days of a disclosable conflict arising. Ms. Sherborne asked the Board members to contact ABHR if assistance is needed in determining whether a conflict requires disclosure or in making a required disclosure. A copy of the Conflict of Interest Disclosure memorandum is attached. Ms. Sherborne stated that pursuant to Chapter 176 of the Texas Local Government Code, the District maintains a List of Local Government Officers. She next reviewed the List of Local Government Officers and noted that Director Slotnick has been added to the list. After review and discussion, Director Goodman moved to approve and authorize execution of the List of Local Government Officers and direct that the List be filed appropriately and retained in the District's official records. Director Riley seconded the motion, which passed by unanimous vote.

**MINUTES**

The Board reviewed the minutes of the June 6, 2019, regular meeting. Following review and discussion, Director Goodman moved to approve the minutes, as written. The motion was seconded by Director Veasey and passed by unanimous vote.

**DISTRICT'S INSURANCE POLICIES**

This item was deferred to the next meeting.

**ARBITRAGE REBATE REPORT FOR THE SERIES 2009 BONDS**

The Board reviewed a report from OmniCap, LLC, concluding that there were no excess earnings in the District's Series 2009 Bonds and that no rebate for cumulative yield restriction liability is due to the Internal Revenue Service at the computation date for the Series 2009 bond series.

**LEGENDS RANCH PROPERTY OWNERS ASSOCIATION ("POA")**

Mr. Abshire updated the Board regarding the POA matters.

**DISCUSS POSSIBLE AMENDMENT TO AGREEMENT WITH SPRING CREEK UTILITY DISTRICT ("SCUD") FOR DRAINAGE MAINTENANCE**

This item was deferred.

**ENGAGE AUDITOR TO CONDUCT AUDIT FOR FISCAL YEAR END AUGUST 31, 2019**

The Board discussed engaging Breedlove & Co., P.C. ("Breedlove") to conduct the District's audit for the fiscal year ending August 31, 2019. Director Cote requested clarification on which of the District's consultants confirms that the District's operator is properly collecting the residents' water bills. Ms. Garcia stated that Myrtle Cruz reviews every invoice received from the District's operator and confirms the District's accounts are properly accounted for. Director Cote requested J&C to prepare a report regarding what water actually costs in the District so that the Board may determine if the District's rates should be adjusted. After discussion, Director Goodman moved to engage Breedlove to conduct the District's audit for the fiscal year ending August 31, 2019. Director Riley seconded the motion, which carried by unanimous vote.

Ms. Garcia presented the bookkeeper's report, including information on the tax account, and submitted the bills for the Board's review. Copies of the bookkeeper's report and tax account report are attached. Director Goodman requested clarification on the water and sewer revenue received in the District's operating fund. Ms. Garcia gave an overview of the District's operating fund and referred back to the Board's earlier discussion of how Myrtle Cruz reviews the invoices from the District's operator to confirm the amount of revenue received from the commercial and residential connections within the District.
Discussion ensued regarding check no. 6354 payable to Accurate Utility Supply ("Accurate") in the amount of $130,275.00. The Board next discussed the Notice of Amended Rate Order for GRP Participants from the San Jacinto River Authority ("SJRA"), noting the pumpage and import fee will increase from the current rate of $2.64/1,000 gallons to $3.15/1,000 gallons. Director Cote requested Board authorization to coordinate with Directors of surrounding municipal utility districts ("MUDs") to schedule a joint meeting in September or October to discuss the SJRA and the pumpage fee increases. Following review and discussion, Director Goodman moved to (1) approve the bookkeeper's report; (2) authorize Myrtle Cruz to hold check no. 6354 payable to Accurate until completion of the Smart Meter installation is confirmed by the District's operator; and (3) authorize Director Cote to coordinate with Directors of surrounding MUDs to schedule a joint meeting in September or October to discuss the SJRA and the pumpage fee increases. Director Veasey seconded the motion, which carried by unanimous vote.

**BUDGET FOR THE FISCAL YEAR END AUGUST 31, 2020**

Ms. Garcia reviewed a draft budget for the District's August 31, 2020, fiscal year end and a joint facility budget for the fiscal year ending August 31, 2020, copies of which are included in the bookkeeper's report. The Board concurred to defer action on this agenda item.

**DISCUSS ASSOCIATION OF WATER BOARD DIRECTORS SUMMER CONFERENCE, APPROVE REIMBURSEMENT OF ELIGIBLE EXPENSES, AND AUTHORIZE ATTENDANCE AT THE ASSOCIATION OF WATER BOARD DIRECTORS WINTER CONFERENCE**

The Board discussed the Association of Water Board Directors ("AWBD") summer conference. After discussion, Director Goodman moved to approve reimbursement of eligible expenses from the summer conference and authorize attendance at the AWBD winter conference in Dallas, Texas. Director Veasey seconded the motion, which carried by unanimous vote.

**REPORT ON DRAINAGE CHANNEL MAINTENANCE**

There was no discussion on this agenda item.

**ENGINEER'S REPORT**

Mr. Abshire presented and reviewed with the Board the engineer's report, a copy of which is attached. After review and discussion, Director Veasey moved to approve the engineer's report. Director Goodman seconded the motion, which passed unanimously.

**POTENTIAL DEVELOPMENT OF 3.09-ACRE TRACT**

Mr. Abshire presented and reviewed the feasibility study for the Sanitary and Storm Sewer for Public Dedication of the 3.09-acre tract, a copy of which is attached. He reported that the study concluded that if the District chooses to reimburse for the sanitary and storm sewer facilities, the estimated preliminary cost, which includes engineering fees and contingencies, is $190,000.00. Director Cote expressed the District residents' desire for a secondary access point to the property; a retaining wall installed along the part of the property that backs up to residential homes; the development to conform with neighborhood standards; the parcel containing the marquee sign to be conveyed to the POA; and the developer to coordinate a special meeting with the District and the POA to discuss the aforementioned requests. Mr. Flippo reported he will reach out to Lance Malmgren of Wild Rose Farm, LLC, to discuss the Board's requests. The Board concurred to take no action at this time.

**LONG TERM PLAN**

Mr. Abshire presented and reviewed the current five-year Capital Improvements plan, a copy of which was included in his report.

**STORM WATER PERMITTING MATTERS**
Mr. Abshire presented the proposed Notice of Intent ("NOI") and Storm Water Management Program ("SWMP") prepared for the District in accordance with the requirements set forth in the TPDES General Permit No. TXR040000. He discussed the goals and responsibilities identified in the SWMP for future implementation during the five-year permit term. Following review and discussion, Director Veasey moved to approve the NOI and SWMP, authorize submittal to the TCEQ, and direct that the NOI and SWMP be filed appropriately and retained in the District's official records. Director Goodman seconded the motion, which was approved by unanimous vote.

Mr. Abshire stated J&C is researching alternative funding sources for sidewalk extensions along Birnham Woods Drive. He reported there is no update at this time.

**OPERATOR'S REPORT**

The Board reviewed a copy of the monthly operator's report for the month of June 2019, a copy of which is attached. The report stated that the ratio of water billed versus produced for the period from May 10, 2019, to June 10, 2019, was 99%. The Board then reviewed two accounts requested to be written off as uncollectible. Following discussion, Director Goodman moved to approve the operator's report, including the two accounts to be written off as uncollectible. The motion was seconded by Director Riley and carried unanimously.

**CENTRAL DETENTION POND STORM WATER PUMP STATION REHABILITATION**

Mr. Abshire updated the Board regarding the Central Detention Pond storm water pump station rehabilitation. Discussion ensued regarding scheduling a special meeting during the week of August 6th with SCUD and Montgomery County Municipal Utility District No. 88 ("MUD 88") to discuss potential repairs, maintenance, and possible improvements to the pump station. Director Cote requested Board authorization to attend the upcoming SCUD and MUD 88 Board meetings to gather information from them regarding the status of the pump station rehabilitation. Following discussion, the Board concurred to (1) authorize ABHR to coordinate a special meeting between SCUD and MUD 88 during the week of August 6, 2019, at the offices of Municipal Operations & Consulting, Inc., 27316 Spectrum Way, Oak Ridge North, Texas 77385; and (2) authorize Director Cote to attend the upcoming Board meetings for SCUD and MUD 88.

**AUTOMATED SMART WATER METER REPLACEMENT PROGRAM**

There was no discussion on this agenda item.

**TERMINATION OF WATER SERVICE**

Mr. Montgomery presented a list of delinquent customers and reported the residents on the termination list were delinquent in payment of their water and sewer bills and were given written notification, in accordance with the District's Rate Order, prior to the meeting of the opportunity to appear before the Board of Directors to explain, contest, or correct their bills and to show why utility services should not be terminated for reason of non-payment. Following review and discussion, Director Goodman moved to authorize termination of delinquent accounts in accordance with the District's Rate Order and direct that the delinquent customer list be filed appropriately and retained in the District's official records. The motion was seconded by Director Riley and passed by unanimous vote.

**DISCUSSION OF NEXT TOWN HALL MEETING**

The Board discussed the upcoming special meeting within the District on October 10, 2019, at 6:30 p.m., which will be open to the community to address District matters and water smart matters.
**CONVENE IN EXECUTIVE SESSION PURSUANT TO SECTION 551.087, TEXAS GOVERNMENT CODE, TO DISCUSS OR DELIBERATE REGARDING THE OFFER OF A FINANCIAL OR OTHER INCENTIVE TO A BUSINESS PROSPECT**

The Board did not convene in this executive session.

There being no further business to come before the Board, the Board meeting was adjourned.

(SEAL)

Asst. Secretary, Board of Directors

| Attachment | Page |
|---------------------------------------------------------------------------|------|
| Memorandum regarding Open Meetings Act training | 2 |
| Conflict of Interest Disclosure | 2 |
| Bookkeeper's report and tax account report | 4 |
| Engineer's report | 5 |
| Feasibility Study 3.09-Acre Tract | 5 |
| Monthly operator's report | 6 |
Efficient Thermal Error Models of Machine Tools

A thesis submitted to attain the degree of
DOCTOR OF SCIENCES of ETH ZURICH
(Dr. sc. ETH Zurich)

presented by
PABLO HERNÁNDEZ-BECERRO
MSc. ME ETH Zurich
born on 15.08.1991
citizen of Spain

accepted on the recommendation of
Prof. Dr. Konrad Wegener, examiner
Prof. Dr. Luis Norberto López de Lacalle, co-examiner

2020

To all my family.

Abstract

Thermal errors are one of the largest contributors to the geometrical errors of manufactured parts. Thermo-mechanical models predict the thermal behavior of machine tools and the associated mechanical displacements. Physical models can be created at early stages of the design phase, when the physical system is still not available. They provide great flexibility to test the feasibility of design modifications and are a useful tool to optimize the performance of the system. However, one disadvantage is their high computational expense, linked to the evaluation of the discretized partial differential equations. Therefore, this work focuses on developing computationally efficient models that describe the thermo-mechanical behavior of machine tools. The proposed surrogate models reduce the computational effort while maintaining the accuracy of the prediction. These efficient modeling approaches enable applications requiring a large number of model evaluations, as well as the possibility of real-time predictions.

This work uses projection-based model order reduction (MOR) approaches to create efficient, surrogate thermo-mechanical models of machine tools. It develops a reduction method, Krylov Modal Subspace (KMS), which takes advantage of the behavior of thermal models of machine tools for the creation of the reduction basis. The KMS reduction basis exploits the fact that the thermal response of machine tools decays at high excitation frequencies. The reduced system captures the most relevant features of the original, high-fidelity model output. However, there is an error associated with the reduction. Thus, this thesis proposes an a priori error estimator to quantify the magnitude of the reduction error in the frequency range of interest.

The surrogate models created by the KMS method accurately reproduce the temperature response in the frequency range of interest. However, the output of interest is the thermally induced displacements between the tool center point (TCP) and the workpiece. Therefore, this thesis presents an efficient coupling approach between the reduced thermal states and a dedicated reduced mechanical system.

There are multiple physical parameters that describe the thermo-mechanical models of machine tools. Some of these parameters might change over time due to different operation conditions. This thesis concentrates on the two parametric dependencies that are most relevant for thermal error models of machine tools, namely the position dependency and the varying convective boundary conditions. Machine tools fulfill the design requirements through the relative movements of several parts. The reduced models need to provide the thermal and mechanical response of the system at any relative position between the machine tool components. This work introduces a method enabling the modification of the thermal contact area after the reduction. The method approximates the contact area as a sum of a finite number of harmonic functions, a truncated Fourier series.
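In generic form (a sketch only, using the trajectory coordinate $s$, trajectory length $L$, number of harmonics $n_h$, and harmonic weights $\alpha_k$, $b_k$ listed in the nomenclature; the exact basis and normalization used in the thesis may differ), such a truncated Fourier series reads

$$f(s) \approx \frac{\alpha_0}{2} + \sum_{k=1}^{n_h} \left[ \alpha_k \cos\!\left(\frac{2\pi k s}{L}\right) + b_k \sin\!\left(\frac{2\pi k s}{L}\right) \right],$$

so that moving the contact area along the trajectory only changes the weights, not the finite element mesh.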
The main advantage of the trigonometric approximation is that it enables the continuous traceability of the position dependency without considering the nodes of the finite element (FE) discretization.

This work focuses on another parametric dependency of the reduced models, the variation of the heat transfer coefficient (HTC) after reduction. The HTC describes the convective heat exchange between the structure and the fluid media, such as the air surrounding the structure of the machine tool. Due to modifications of the conditions of the fluid flow, the HTC varies over time. This thesis proposes a reduction method that enables the variation of the parameter of the HTC after reduction. The developed MOR uses the concept of system bilinearization, adapting it to the KMS reduction approach. The reduced model accurately approximates the original model for any value of the HTC.

This work also proposes a second reduction method for varying convective boundary conditions. The method creates several reduced systems, each of them valid for a specific value of the HTC. The main advantage of this method is that it enables the interpolation between the local systems directly in the reduced subspace. This second reduction approach is suitable for applications that require the transition between two discrete values of the HTC, such as a switch from natural to forced convection.

A dedicated simulation platform, MORe, provides an efficient implementation of the reduction methods. The design of the software platform MORe facilitates the development of physical models of machine tools with a straightforward workflow. The software offers dedicated analysis tools and cutting-edge visualization in order to investigate and optimize the thermal behavior of machine tools. The developed methods and software implementation are tested with two study cases of thermo-mechanical models of machine tools. The research therefore also extends the knowledge of thermal errors in machine tools, contributing to an efficient design process and optimization of the thermal error compensation strategies.

Zusammenfassung

The thermal behavior of machine tools is the most important cause of geometrical errors in the manufactured workpiece. Thermo-mechanical models predict the thermal behavior of machine tools and the associated mechanical deviations. Physical models can be created at an early stage of the design phase, when the physical system is not yet available. They offer great flexibility for testing the feasibility of design modifications and are a useful tool for optimizing the performance of the system. One disadvantage, however, is the high computational effort associated with the evaluation of the discretized partial differential equations. This work therefore concentrates on the development of computationally efficient models that describe the thermo-mechanical behavior of machine tools. The proposed surrogate models reduce the computational effort while maintaining the accuracy of the prediction. These efficient modeling approaches enable applications that require a large number of model evaluations, as well as real-time predictions. This work uses model order reduction (MOR) approaches to create efficient thermo-mechanical models of machine tools.
A reduction method, Krylov Modal Subspace (KMS), is developed that exploits the behavior of thermal models of machine tools for the construction of the reduction basis. The KMS reduction basis uses the fact that the thermal response of the models decays with the excitation frequency at sufficiently high excitation frequencies. The reduced system reproduces the essential behavioral characteristics of the original, unreduced system. However, an error is associated with the reduction. Therefore, an a priori error estimator is developed in this work to quantify the magnitude of the reduction error in the frequency range of interest. The simulation results of interest are the thermally induced relative displacements between the tool center point (TCP) and the workpiece. This work therefore develops an efficient coupling approach between the reduced thermal states and the associated mechanical system.

A multitude of physical parameters is required to describe thermo-mechanical models of machine tools. Some of these parameters can change over time due to different operating conditions. This work concentrates on the two parametric dependencies that are most relevant for thermal error models of machine tools, namely the position dependency and the varying convective boundary conditions. Machine tools fulfill their task through the relative movements of several axes. The reduced models must provide the thermal and mechanical behavior of the system at any relative position between the components. This work presents a method that enables the modification of the thermal contact area after the reduction. The method approximates the thermal contact area with a finite number of harmonic functions, a truncated Fourier series. The trigonometric approximation of the contact zone enables the continuous traceability of the relative positions between the different parts without considering the nodes of the finite element (FE) discretization.

This work focuses on a further parametric dependency of the reduced models, the variation of the heat transfer coefficient (HTC) after the reduction. The HTC describes the convective heat exchange between the structure and the fluids. Due to changes in the fluid flow, the HTC varies over time. This work proposes a reduction method that enables the variation of the HTC after the reduction. The developed MOR uses the concept of system bilinearization, adapting it to the KMS reduction approach. The model reduced in this way approximates the original model for any value of the HTC. This work also proposes a second reduction method for varying convective boundary conditions. The method creates several reduced systems, each of which is valid for a specific value of the HTC. The main advantage of this method is that it enables the interpolation between the individual reduced systems directly in the reduced-order subspace. This second reduction approach is suitable for applications that require the transition between two discrete values of the HTC, such as a switch between natural and forced convection.
A dedicated simulation platform, MORe, enables the efficient implementation of the reduction methods. The design of the software platform MORe enables the development of physical models of machine tools with a straightforward workflow. The software offers dedicated analysis tools and state-of-the-art visualization in order to investigate and optimize the thermal behavior of machine tools. The developed methods and the software implementation are demonstrated with two case studies of thermo-mechanical models of machine tools. The research therefore also extends the knowledge of thermal errors in machine tools and supports the optimization of the machine design with respect to its thermal behavior. Furthermore, the presented order reduction enables the use of the models for error compensation on the machine.

Resumen

Thermal errors are one of the factors with the greatest impact on the geometrical errors in manufacturing. Thermo-mechanical models predict the thermal behavior of the machine tool and the mechanical deviations associated with the temperature variation. Physical models can be created in the early stages of the design phase, when the physical system is not yet available. They provide great flexibility for testing different design alternatives and are a useful tool for optimizing the precision of the machine tool. However, one disadvantage is their high computational cost, linked to the numerical evaluation of the partial differential equations. Therefore, this work focuses on the development of computationally efficient models that describe the thermo-mechanical behavior of the machine tool. The models proposed in this work reduce the computational effort while maintaining the accuracy of the prediction. These efficient modeling approaches enable applications that require iterative model evaluations, as well as the possibility of real-time predictions.

This thesis uses efficient models based on the projection of the system equations onto a lower-dimensional subspace. A reduction method, Krylov Modal Subspace (KMS), is developed that exploits the behavior of thermal models of machine tools for the creation of the reduction basis. The KMS reduction basis exploits the fact that the thermal response of the models decays at higher excitation frequencies. The reduced system captures the most relevant features of the original model. However, there is an error associated with the reduction. Therefore, this thesis proposes an a priori error estimator to quantify the magnitude of the reduction error in the frequency range of interest. The models reduced with the KMS method accurately reproduce the temperature response in the frequency range of interest. However, the quantity of interest is the thermally induced displacements between the tool center point (TCP) and the workpiece. Therefore, this thesis efficiently couples the reduced thermal states and the reduced mechanical system.

Thermo-mechanical models of machine tools are described by a large number of physical parameters. Some of these parameters can change over time due to different operating conditions.
This thesis concentrates on two parametric dependencies that are relevant for the thermal error models of machine tools, namely the dependency of the thermal response on the position of the machine tool axes and the variation of the convective boundary conditions. Manufacturing processes require the movement of the different axes to perform their function. Therefore, the reduced models need to provide the thermal and mechanical response of the system at any relative position between the different axes. This work introduces a method that enables the modification of the thermal contact area after the reduction. The method proposed in this thesis approximates the contact area as the sum of a finite number of harmonic functions. The advantage of the trigonometric approximation of the contact zone is that it allows the relative position between the axes to be modified without considering the nodes of the finite element (FE) discretization.

This work focuses on another parametric dependency of the reduced models, the variation of the heat transfer coefficient (HTC). The HTC describes the convective heat exchange between the structure and the fluid medium. Due to modifications of the fluid flow conditions, the HTC varies over time. This thesis proposes a reduction method that enables the variation of the HTC parameter after the reduction. The developed reduction technique uses the concept of system bilinearization, adapting it to the KMS reduction approach. The reduced model accurately approximates the original model for any value of the HTC. This thesis also proposes a second reduction method for varying convective boundary conditions. The method creates several reduced systems, each of them valid for a specific value of the HTC. The main advantage of this method is that it enables the interpolation between the local systems directly in the reduced subspace. This second reduction method is suitable for applications that require the transition between two discrete values of the HTC, such as the switch from natural to forced convection.

This thesis creates a simulation platform, MORe, which integrates the reduction techniques. The design of the software platform MORe facilitates the development of physical models of machine tools. The software offers analyses specifically designed to evaluate the behavior of the machine tool, together with the possibility of visualizing the results. The developed methods and the software implementation are used to develop two thermo-mechanical models of 5-axis machine tools. The research therefore also extends the knowledge of thermal errors in machine tools, contributing to the improvement of the thermal design and to the optimization of thermal error compensation models.

Acknowledgement

I would like to thank the many people that contributed to this thesis and to my time at the Institute of Machine Tools and Manufacturing (IWF) and inspire AG. This thesis would not have been possible without the generous support and trust of Prof. Dr. Konrad Wegener, head of IWF and supervisor of this thesis. I really appreciate the opportunity to start a PhD at his institute and all the inspiring discussions we had over the course of my doctoral studies.
I would like to express my gratitude to Prof. Dr. Luis Norberto López de Lacalle, co-examiner of this thesis, for co-supervising this work. I would like to thank Dr. Josef Mayr, group leader of the thermal group, for introducing me to the topic of thermal errors in machine tools and for the fruitful discussions over the last years. Furthermore, I really appreciate the collaboration with Dr. Sascha Weikert and Lukas Weiss during the different industrial projects. I would like to express special gratitude to Dr. Daniel Spescha. His guidance, support, and inspiring discussions helped shape this thesis. The outcome of this thesis was only possible with the support of my colleagues in the thermal group: Philip Blaser, Nico Zimmermann, Dr. Florentina Pavliček, and Dr. Simon Züst. I would like to thank the MORe development team: Nino Ceresa and Joel Purtchert. The many discussions during the development of MORe introduced me to the field of software development. I really appreciate the collaboration and good moments with all my office colleagues and fellow members of IWF and inspire AG. I would like to thank my family for their generous, unconditional support during my time in Zurich. And finally, I would like to thank Ari, for all the support in our Swiss adventure.

## Contents

**List of abbreviations**
**List of symbols**
**1 Introduction**
**2 State of the Art**
   2.1 Thermal issues in machine tools
   2.2 Physical thermo-mechanical models in machine tools
   2.3 Thermal error compensation
   2.4 Model Order Reduction
      2.4.1 Non-parametric MOR
      2.4.2 Error estimation
      2.4.3 Parametric MOR
   2.5 Application of MOR for mechatronic systems
   2.6 Discussion of the State of the Art
   2.7 Research Gap
   2.8 Outline of the thesis
**3 Model order reduction of thermo-mechanical models**
   3.1 FEM discretization of the heat transfer equation
   3.2 Krylov and modal subspace reduction of thermal models
   3.3 Error estimation
   3.4 Efficient coupling of the thermal and mechanical model
**4 Model order reduction with varying boundary conditions**
   4.1 Definition of interfaces for thermal systems
   4.2 Moving boundary conditions
   4.3 Varying convective boundary conditions
      4.3.1 Definition of interfaces for thermal systems
      4.3.2 Parametric reduction with a global reduction basis: bilinearization
      4.3.3 Parametric reduction with a local reduction basis: switching boundary conditions
**5 Software implementation**
   5.1 Efficient model setup
   5.2 Thermo-mechanical analyses
   5.3 Numerical implementation
**6 Application**
   6.1 Thermal error model: environmental temperature fluctuations
      6.1.1 Description of the thermo-mechanical model
      6.1.2 Validation of the thermo-mechanical model
      6.1.3 Evaluation of the thermo-mechanical response to environmental influences
   6.2 Thermal error model: internal heat sources
      6.2.1 Description of the thermo-mechanical model
      6.2.2 Validation of the thermo-mechanical model
      6.2.3 Evaluation of the thermo-mechanical response to internal heat sources
   6.3 Thermal error model: cutting fluid
      6.3.1 Description of the thermo-mechanical model
      6.3.2 Validation of the thermo-mechanical model
**7 Conclusions and outlook**
**Bibliography**
**A Implementation of the KMS reduction**
**B Implementation of the Finite Element Method**
   B.1 Numerical integration of surface elements
   B.2 Thermal solid elements
**C Additional information of the thermo-mechanical models**
**List of publications**
**List of supervised theses**
## List of abbreviations

| Abbreviation | Description |
|--------------|-------------|
| AG | air gap |
| AI | artificial intelligence |
| APDL | ANSYS Parametric Design Language |
| API | application programming interface |
| BIRKA | bilinear iterative rational Krylov algorithm |
| BT | balanced truncation |
| CAE | computer aided engineering |
| CFD | computational fluid dynamics |
| DOF | degrees of freedom |
| EC | electrical cabinet |
| FDM | finite differences method |
| FDEM | finite difference element method |
| FE | finite element |
| FEM | finite element method |
| GMRES | generalized minimal residual |
| GSA | global sensitivity analysis |
| HTC | heat transfer coefficient |
| ILU | incomplete LU decomposition |
| IRKA | iterative rational Krylov algorithm |
| KMS | Krylov Modal Subspace |
| LTI | linear time invariant |
| LTV | linear time variant |
| MIMO | multiple input multiple output |
| MISO | multiple input single output |
| MM | moment matching |
| MOR | model order reduction |
| MR | machine room |
| NC | numerical control |
| ODE | ordinary differential equations |
| PDE | partial differential equations |
| POD | proper orthogonal decomposition |
| PWM | pulse width modulation |
| RMSE | root mean square error |
| RMT | reconfigurable machine tool |
| SISO | single input single output |
| SLS | switched linear system |
| SVD | singular value decomposition |
| TCC | thermal contact conductivity |
| TCM | thermal compliance matrix |
| TCP | tool center point |
| UI | user interface |

## List of symbols

### Notation

| Symbol | Description |
|--------|-------------|
| $M$ | matrix is denoted with a capital bold letter |
| $m$ | vector is denoted with a lower case bold letter |
| $m$ | scalar |
| $\|\cdot\|_2$ | Euclidean norm |
| $\hat{()}$ | reduced vector or matrix |
| $\bar{()}$ | average value |
| $\|\cdot\|_F$ | Frobenius norm |
| $[.,.]$ | closed interval |
| $(.,.)$ | open interval |
| $\dim(\cdot)$ | dimension of a linear space |
| $\text{range}(\cdot)$ | range of a linear transformation |
| $\text{span}(\cdot)$ | vector span of a linear space |
| $\text{tr}(\cdot)$ | trace of a matrix |
| $\cap$ | intersection of two linear spaces |
| $\circ$ | composition of two linear maps |
| $\max(\cdot)$ | maximum value of a vector |
| $Var(\cdot)$ | variance of a random variable |
| $E(\cdot)$ | expected value of a random variable |
| $\oplus$ | direct sum of two linear spaces |

| Symbol | Description |
|--------|-------------|
| $A$ | system matrix of the original system |
| $\tilde{A}$ | system matrix of the reduced system |
| $\alpha_k$ | weight of the harmonics |
| $A$ | linear transformation |
| $B$ | input matrix of the original system |
| $\tilde{B}$ | input matrix of the reduced system |
| $B_e$ | spatial derivative of the ansatz function of the element |
| $b$ | nodal values of the FE-mesh |
| $b_n$ | nodal values of the FE-mesh normalized by the area |
| $b_k$ | weight of the harmonics |
| $C$ | output matrix of the original system |
| $\tilde{C}$ | output matrix of the reduced system |
| $C^e_{th}$ | thermal capacity matrix of the element |
| $C_{th}$ | thermal capacity matrix of the FE-mesh |
| $c_p$ | specific heat capacity |
| $D$ | elasticity matrix |
| $D_i$ | system matrix associated with the distributed interface $i$ |
| $d$ | mean diameter of the bearing |
| $d_0$ | diameter of the rolling elements |
| $D_i$ | subspace spanned by $D_i$ |
| $E$ | mass matrix of the original system |
| $E(j\omega)$ | transfer function of the error between original and reduced system |
| $\tilde{E}$ | mass matrix of the reduced system |
| $e_{KMS}(j\omega)$ | error of the KMS reduction |
| $E$ | Young modulus |
| $E^2_{RMSE}$ | root mean square error |
| $F_{th}$ | thermal force matrix of the FE-mesh |
| $F_{ext}$ | external mechanical force matrix of the FE-mesh |
| $f$ | external force vector |
| $f^e_{th}$ | thermal force vector of the element |
| $f_{th}$ | thermal force vector of the FE-mesh |
| $f^e_{ext}$ | external mechanical force vector of the element |
| $f_{ext}$ | external mechanical force vector of the FE-mesh |
| $f_s$ | spatial distribution of the HTC |
| $H(s)$ | transfer function of the original system |
| $\tilde{H}(s)$ | transfer function of the reduced system |
| $\beta$ | heat transfer coefficient |
| $\beta_{AG}$ | heat transfer coefficient of the air gap |
| $I$ | identity matrix |
| $K^e_{cond}$ | thermal conductivity matrix of the element |
| $K_{cond}$ | thermal conductivity matrix of the FE-mesh |
| $K^e_{conv}$ | thermal convection matrix of the element |
| $K_{conv}$ | thermal convection matrix of the FE-mesh |
| $K^e$ | stiffness matrix of the element |
| $K$ | stiffness matrix of the FE-mesh |
| $K_{th}^e$ | thermo-mechanical coupling matrix of the element |
| $K_{th}$ | thermo-mechanical coupling matrix of the FE-mesh |
| $K_r$ | Krylov linear subspace of dimension $r$ |
| $L$ | length of a trajectory |
| $M_{th}$ | thermal compliance matrix |
| $M_{mech}$ | thermo-mechanical compliance matrix |
| $m$ | number of inputs |
| $M_{mech}$ | mechanical torque |
| $n_e$ | ansatz function of the element |
| $n$ | dimension of the original system |
| $n_h$ | number of harmonics |
| $n_{dist}$ | number of distributed interfaces |
| $n_{me}$ | number of moments for bilinearization |
| $n_g$ | number of samples of the HTC |
| $n_r$ | rotational speed of the bearing in rpm |
| $\mathcal{P}$ | reachability Gramian |
| $p$ | parameter vector |
| $p$ | dimension of the reduced system |
| $P_{ax}$ | electrical power supplied to the axis |
| $P_{mech}$ | mechanical power |
| $P_m$ | electrical power supplied to the motor |
| $\mathcal{Q}$ | observability Gramian |
| $Q_l$ | rotation matrix for the sample $l$ |
| $q_{ext}$ | thermal heat input vector of the element |
| $\dot{Q}_{ag}$ | heat losses at the air gap |
| $\dot{Q}_{amp}$ | heat losses at the amplifier |
| $\dot{Q}_b$ | heat losses at the bearing |
| $\dot{Q}_{cool}$ | cooling power |
| $\dot{Q}_{c-t}$ | heat losses at the rotor |
| $\dot{Q}_{st}$ | heat losses at the stator |
| $r$ | number of outputs |
| $s$ | variable of a trajectory |
| $s_c$ | position along a trajectory |
| $s_e$ | expansion point |
| $s_0$ | expansion point |
| $S_i$ | first order Sobol index of the parameter $i$ |
| $S_{Ti}$ | total Sobol index of the parameter $i$ |
| $T$ | temperature structural field |
| $\bar{T}$ | average temperature |
| $T_{ext}$ | temperature of an external fluid field |
| $T_{in}$ | temperature of the inlet |
| $T_{out}$ | temperature of the outlet |
| $T_{ref}$ | reference temperature |
| $U$ | matrix with right singular vectors |
| $u$ | input vector |
| $u_s$ | structural displacement |
| $V$ | projection basis |
| $V_{KMS}$ | projection basis of the KMS |
| $V_k$ | basis of the Krylov subspace |
| $V_\mu$ | basis of the modal subspace included in the KMS |
| $V_l$ | rotated projection basis |
| $V$ | linear subspace |
| $V_{dist}$ | reduced subspace from bilinearization of the distributed interfaces |
| $Y_k$ | Krylov linear subspace |
| $\dot{V}$ | volumetric flow |
| $V_\mu$ | linear subspace of modes considered for the KMS |
| $V_{\bar{\nu}}$ | linear subspace of modes not considered for the KMS |
| $Y_{\bar{\nu}}$ | Krylov linear subspace of $V_{\bar{\nu}}$ |
| $W$ | projection basis |
| $w^e$ | nodal values of the element |
| $x$ | state vector |
| $\bar{x}$ | reduced state vector |
| $Y$ | matrix with left singular vectors |
| $y$ | output vector |
| $z$ | position vector |

### Greek Symbols

| Symbol | Description |
|--------|-------------|
| $\alpha$ | thermal expansion coefficient |
| $\Gamma_1$ | area of the Neumann boundary condition |
| $\Gamma_2$ | area of the Robin boundary condition |
| $\varepsilon$ | strain tensor |
| $\varepsilon$ | maximum error of the reduced system |
| $\eta_{amp}$ | efficiency of the amplifier |
| $\eta_{mot}$ | efficiency of the motor |
| $\theta^e$ | temperature at the nodes of the elements |
| $\Theta_{nx}$ | inertia of the axis |
| $\lambda$ | thermal conductivity |
| $\lambda_{AG}$ | thermal conductivity of the air gap |
| $\lambda_{TCC}$ | thermal contact conductivity |
| $\nu$ | Poisson ratio |
| $\rho$ | density |
| $\Sigma$ | singular value matrix |
| $\sigma$ | stress tensor |
| $\sigma_e$ | elastic stress tensor |
| $\sigma_{ij}$ | thermal stress tensor |
| $\Phi$ | eigenvector matrix |
| $\phi$ | eigenvector |
| $v^e$ | displacement at the nodes of the element |
| $\omega$ | frequency |
| $\omega_{thr}$ | maximum eigenfrequency of the KMS method |
| $\omega_{thr.o.e.s}$ | maximum frequency considered by the error estimator |
| $\Omega$ | eigenvalue matrix |
| $\Omega$ | domain of the PDE |
| $\Omega^e$ | domain of the element |

1 Introduction

Current industrial applications increasingly demand parts with tighter tolerances. Advances in many technologies rely on the capability to manufacture more accurate parts in order to fulfill demanding product specifications. As an example, the future of sustainable mobility depends on the accurate and efficient production of electrical drives. However, the efficiency of the drives and the supplied torque depend directly on the manufacturing tolerances of the air gaps [120]. Therefore, advances in the precision of machine tools are essential to enable future engineering applications.

The improvement of accuracy is a driving force of research and development of machine tools. However, the increase in accuracy is only sustainable if the required resources are minimized. Additionally, the productivity level needs to satisfy the demand of the customers. Thus, the industrial sector and the academic community are devoting great effort to building a next generation of machine tools that meet the demands of an increasingly competitive market. Innovative machine tool concepts are required to improve the balance between accuracy, productivity, and energy efficiency. The challenges of the manufacturing industry require a new generation of accurate machine tools and measurement systems, as stated by Wegener et al. [123]. The new machine tool concepts need to be created by means of systematic investigation and scientific design approaches. In the machine tool industry, it is customary to rely on several physical prototypes during the development phase of new machine tools.
The main disadvantage of physical prototypes is that they are resource-intensive. Virtual prototypes are therefore a more efficient way to test the different design alternatives. They are based on numerical models capturing the physical behavior of the machine tool. These models enable the understanding of the causes leading to errors in manufactured parts. In order to understand the potential of virtual prototypes, this section presents a motivating example. Figure 1.1 shows a virtual prototype of a 3-axis reconfigurable machine tool (RMT). This machine tool virtually performs a circular movement of 200 mm diameter using the X- and Y-axes. The circular test is used to visualize the effect of different error sources.

Geometric errors are the first error source considered; they refer to the position and orientation errors originating from the geometry of the guiding systems or from assembly errors. As an example of geometric errors, the squareness error between the X- and Y-axes is considered and depicted in Figure 1.2a (a minimal numerical sketch of this effect is given below). The static effects are the second error source, referring to forces that do not change over time. One example of static loads is gravity. Figure 1.2b illustrates the resulting deviation between tool and workpiece due to gravitational forces. Dynamic effects are the third error source under consideration. The acceleration of the mechanical structure generates inertial forces, leading to displacements between the tool and workpiece. The resulting contour errors due to the acceleration of the X- and Y-axes are shown in Figure 1.2c. The thermal behavior of machine tools is the fourth error source considered. The heat losses in the machine elements and environmental temperature fluctuations lead to a time-varying, inhomogeneous temperature field of the machine tool structure. This temperature field results in structural displacements, leading to displacements between the tool and the workpiece. Figure 1.2d shows the variation of the contour errors over 12 hours under environmental temperature fluctuations for the machine tool structure shown in Figure 1.1.

The motivation behind this thesis is the creation of efficient physical models representing accurately the behavior of machine tools. This work introduces methods for the systematic evaluation of the behavior of machine tools and other complex mechatronic systems. Among the different error sources, this work focuses on thermal error sources. Thermal issues of machine tools account for the largest share of geometrical errors in manufactured parts, as pointed out by Bryan [27] and Mayr et al. [81]. Thermal errors arise from internal sources as well as from external sources. The internal heat sources occur due to frictional losses at the machine elements, such as bearings or ballscrew nuts. External sources refer to the interaction of the machine tool with its surroundings, such as fluctuations of the environmental temperature or the introduction of metal working fluid into the working space.

The virtual prototypes, such as the motivating example of Figure 1.1, are a great asset to understand the mechanisms leading to errors in manufactured workpieces. They enhance the metrology, explaining the causes of the measured position and orientation errors. Furthermore, this information can be used to ensure an accurate and repeatable machine tool design. The use of physical models is not limited to the design phase; they can also be used during the utilization phase of the machine tool.
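As announced above, a minimal numerical sketch of the first error source (illustrative only, with a hypothetical squareness value; not taken from the thesis) distorts the nominal 200 mm circular trajectory by a small squareness error between the X- and Y-axes and evaluates the resulting contour deviation:

```python
import numpy as np

# Hypothetical squareness error between the X- and Y-axes (50 urad, assumed value).
R = 100.0          # nominal radius in mm (200 mm diameter circular test)
sq = 50e-6         # squareness error in rad

phi = np.linspace(0.0, 2.0 * np.pi, 3600)
x_nom, y_nom = R * np.cos(phi), R * np.sin(phi)

# First-order kinematic effect of squareness: Y-axis motion leaks into X.
x_act = x_nom + sq * y_nom
y_act = y_nom

# Radial contour deviation, as evaluated in a circularity test.
dr = np.hypot(x_act, y_act) - R
print(f"peak-to-valley contour error: {1e3 * (dr.max() - dr.min()):.2f} um")
```

The resulting ellipse-like distortion of a few micrometers is the kind of signature visualized in Figure 1.2a.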
Once accurate and robust virtual prototypes are available, they can predict the errors of the manufactured parts and compensate these errors in real time during the manufacturing process. These models can estimate whether a manufacturing system is capable of meeting the specifications required to produce a specific part. Alternatively, virtual prototypes can determine the process parameters or limit external disturbances so that the final workpiece meets the required manufacturing tolerances. The virtual prototypes are essential to achieve the fully compensated machine tool.

Figure 1.2: Simulated errors in the XY plane in a circularity test of 200 mm diameter

However, there are still challenges associated with the development of models of machine tools based on the physical description. On one hand, the complexity of the systems under consideration leads to models that are computationally expensive to evaluate. On the other hand, there is a lack of systematic approaches with a dedicated framework for developing physical models of machine tools. Therefore, this work focuses on the development of computationally efficient thermal error models of machine tools.

2 State of the Art

This chapter presents the state of the art of efficient thermal models of machine tools. Section 2.1 presents a review of the current approaches to deal with thermal issues in machine tools. This section distinguishes two main approaches: design optimization and compensation. The design optimization is usually supported by virtual prototypes based on physical models. Section 2.2 summarizes the state of the art of thermo-mechanical models of machine tools. Section 2.3 reviews the current approaches on thermal error compensation. Efficient modeling approaches are required in order to use thermo-mechanical models to improve the thermal behavior of machine tools. Efficient physical models rely on numerical methods that reduce the computational effort. The creation of surrogate models by means of projection-based model order reduction (MOR) offers the possibility to create computationally efficient models. MOR enables applications that require a large number of model evaluations or real-time capabilities. Section 2.4 reviews the MOR techniques available in the literature. The application of MOR to thermal error models of machine tools is reviewed in Section 2.5. The state of the art is discussed in Section 2.6, leading to the identification of the research gaps in Section 2.7.

2.1 Thermal issues in machine tools

Machine tools, such as milling or grinding machines, are complex mechatronic systems with a great industrial relevance. Innovative solutions in the machine tool sector enable the development of novel manufacturing processes and are key to fostering industrial innovation. Großmann [50] summarized the three conflicting goals of manufacturing technologies: productivity, precision, and minimization of resources. The manufacturing of high added-value products demands accurate 3D shapes with tolerances in the micrometer and submicrometer range. Therefore, the precision requirements of modern machine tools are higher than the requirements for many other industrial systems. Schwenke et al. [108] reviewed the main error sources contributing to the precision of machine tools: kinematic errors, thermo-mechanical errors, loads, dynamic forces, and motion control and control software.
The combination of these error sources leads to a deviation of the nominal position between the tool center point (TCP) and the workpiece, resulting in manufacturing errors in the part. From the different error sources affecting the geometric accuracy of machine tools, this work focuses on the thermo-mechanical errors. The review paper of Bryan [27] as well as the update of Mayr et al. [81] highlighted the importance of the thermal error sources as one of the main contributors to geometric errors in manufactured parts. Putz et al. [102] presented a survey studying the industrial relevance of thermal issues of machine tools. The authors highlighted the increasing awareness in industry of thermal issues as one of the most limiting factors for the resulting machine tool accuracy.

The chain of causes leading to the thermal errors in machine tools is summarized in Figure 2.1. The heat losses of the machine elements and external devices lead to an inhomogeneous, time-varying temperature distribution. The temperature gradients result in a structural deformation, inducing position and orientation errors between the TCP and the workpiece. Thermal errors can be understood as a variation of the reference geometry of the machine tool and the workpiece, measured at a homogeneous reference temperature. Bryan [27] outlined the different heat sources in production machines and measurement systems and classified the sources as either internal (e.g. drives, bearings) or external (environment, process, cutting fluid, and people).

Figure 2.1: Chain of causes of thermal errors in machine tools, adapted from Ess [39]

Thermal issues are a focus of academic and industrial research. On the academic side, there is a significant increase in the number of publications on this topic over the last years. Of the 410 publications reviewed internally, a total of 205 were published between 2012 and 2017. Industry is also devoting research resources to thermal issues of machine tools in order to increase the overall precision and remain competitive in the market. Academic and industrial efforts to reduce thermal errors in machine tools focus on two strategies: reduction of the effects and minimization of the causes. The reduction of the effects consists of developing compensation strategies that predict the thermally induced displacements. The minimization of the causes is handled by design methods, optimizing the structural design and the heat flow in the structure. For design strategies, thermo-mechanical models are a great asset, serving as virtual prototypes to test different design modifications.

2.2 Physical thermo-mechanical models in machine tools

One alternative to reduce the thermally induced displacements is minimizing the causes from the design phase of the machine tool. Virtual prototypes, based on physical models, can be created at early stages of the design phase, when the physical system is still not available. They provide great flexibility to test the feasibility of design modifications and are a useful tool to optimize the performance of the system. In addition, physical models are a valuable tool to understand the current machine tool behavior and to design new thermal error compensation strategies. Thermo-mechanical models of machine tools are based on the discretization of the heat transfer and elasticity equations. The heat transfer equations describe the temperature distribution of the machine tool as well as the heat exchanged with the surrounding environment.
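In semi-discretized form (a sketch assembled from the FE matrices listed in the nomenclature; the exact formulation, signs, and load definitions used in the thesis may differ), these governing equations can be written as

$$C_{th}\,\dot{T}(t) + \left(K_{cond} + K_{conv}\right) T(t) = q_{ext}(t), \qquad K\,u_s(t) = f_{ext}(t) + K_{th}\,T(t),$$

where the first, transient equation governs the temperature field $T$, and the second, quasi-static equation maps the temperature field to the structural displacements $u_s$ through the thermo-mechanical coupling matrix $K_{th}$.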
These equations define the dynamic evolution of the system. The stationary linear elasticity equations provide the mechanical response of the system, where the temperature distribution is introduced as a thermal strain. Numerical methods, such as the finite element method (FEM) or the finite differences method (FDM), allow the partial differential equations (PDE) to be solved numerically by transforming them into a system of ordinary differential equations (ODE). The focus of a physical thermo-mechanical model can be the characterization of the behavior of a particular machine tool element, such as ballscrew feed drive systems (Shi et al. [109]) or main spindles (Mori et al. [86]). These models aim at the characterization and optimization of a machine element, without considering the interaction with the whole machine tool assembly. Another approach is the development of models considering the whole machine tool assembly and the interactions with the surrounding external influences. Sun et al. [117] investigated the effect of internal heat sources in a 3-axis precision grinding machine. The thermo-mechanical model focused on the effects of the heat dissipation of the linear drives and the spindle system. The model of the original design was validated with temperature measurements after reaching the steady state. This work proposed a design optimization of the structural parts close to the heat sources, modifying the heat transfer direction and enhancing the heat dissipation. The design modifications were tested directly with the virtual prototype. The authors showed that the new design led to a more homogeneous temperature distribution in the machine tool structure. Other authors also investigated the effect of external influences, such as the environment or cutting fluid, on the thermal response of the machine tool. Mian et al. [83] considered the environmental effects in a thermal FEM model of a three-axis vertical milling machine. The authors stated that the initial state of the machine is usually unknown and therefore there is always a discrepancy between the first hours of the simulation and the measured values. Mian et al. defined the settling time as the time required for the error associated with the unknown initial conditions to fade out. In their investigations, the settling time was about 12.5 h. Shi et al. [110] developed a thermo-mechanical model to predict the thermal deviations in a gear grinding machine tool. The authors investigated the effect of internal heat sources, i.e. the permanent magnet of the synchronous motorized spindle, the roller bearings, ballscrews, and guideways. This work focused on evaluating the thermally induced errors after reaching a thermal steady state. The viscosity of the lubrication and the preload of the bearings vary with temperature, leading to a temperature-dependent heat generation. Defining temperature-dependent thermal loads leads to a non-linear system, requiring an iterative solver to determine the steady state temperature distribution in the structure. Shi et al. also considered the effect of the process heat and cutting fluid on the thermal response. The authors estimated that 30% of the heat produced during the grinding process is removed by the cutting fluid, whose temperature thus increases. This work used empirical values of the heat transfer coefficients (HTC) in order to characterize the convective heat transfer between the structure and the fluid.
The thermo-mechanical model was validated by comparing the temperature values at the steady state. Shi et al. concluded that the validated model can be used for further design improvements of the current machine tool design. Weng et al. [127] investigated the thermal volumetric errors of a machine tool considering the environment, the electronic cabinet, and the hydraulic pump as thermal influences. The authors developed a CFD model accounting for the convective and radiative heat exchange between the heat sources and the machine tool structure. The multiphysics simulation considered several locations of the heat sources, for which the steady state temperature distribution was computed. The thermally induced structural deformation was evaluated at a single machine tool position. The structural deformations at the guideways and other links were used to build a multibody simulation. Neglecting the position-dependent compliance of the machine tool, the authors extrapolated the calculated structural deformation to the whole working space of the machine tool. The works reviewed so far make use of commercial software packages in order to evaluate internal or external thermal effects for specific machine tools. The other approach is the development of dedicated simulation environments for the evaluation of the thermo-mechanical behavior of machine tools. Mayr [78] used the finite difference element method (FDEM) in order to compute efficiently the thermal errors of machine tools. This method is schematically shown in Figure 2.2. The numerical integration of the system was performed with an adaptive time step, in order to resolve abrupt changes of the loads on the system. Mayr proposed a substructure approach for the system output, reducing the computational effort to compute the TCP displacements and orientation errors. The thermal errors were calculated for the whole working volume for linear and rotary axes. Similarly to the work of Bringmann [24], who investigated geometric errors in machine tools, the component and location errors were derived from the simulated thermally induced volumetric errors in the working space. Mayr et al. [82] applied the FDEM to the design improvement of a machine tool frame. Figure 2.3 illustrates the initial machine tool design (left column) and the new machine tool design (right column). The new concept modified the geometry of the machine tool frame, ensuring a thermo-symmetric design. The new design concept was tested under different load cases. Figure 2.3 shows the temperature distribution after 24 h of the machine tool structure for a heat load homogeneously distributed around the working space (first row) and a heat source located on the right column (second row). Mayr et al. showed that a symmetric design reduces the thermally induced displacements in the working space. In addition, it avoids the introduction of angular errors, which are more complicated to incorporate in a thermal error compensation strategy.

![Figure 2.3: Effect of two different thermal loads, from Mayr et al. [82]: The left column depicts the thermal deformation of the initial design and the right column depicts the thermal deformation of the thermosymmetric design. The load case of the first row corresponds to a heat source affecting the whole working area. The load case of the second row corresponds to a heat source located on the right column.](image)

Ess [39] developed a software package, the Virtual Machine Prototype (VMP), which connected the axes through models representing different machine tool elements.
The solution of the thermo-mechanical problem was performed with FEM, using a backward Euler integration scheme. In the computation of the TCP displacement, Ess considered the effect of the thermal deformation of the linear glass scales and reader heads. The software provided a large library of machine elements, such as bearings or ballscrews, which were described by their thermal conductivity, heat losses, and mechanical stiffness. VMP allowed the computation of the thermal response of the machine taking into consideration the movement of the axes given by the NC code. In the VMP simulation environment, specialized analyses were developed in order to characterize the thermal behavior of machine tools in the frequency domain. Mayr et al. [79] studied the effect of the environmental temperature fluctuations in the frequency domain. They evaluated the frequency response function from the environmental temperature oscillations to the TCP displacements of a 3-axis precision machine. The analysis was performed at different positions of the linear axes. The authors found that certain frequencies maximized the thermally induced displacements, leading to the concept of a thermal resonance frequency. By modifying the interactions of the machine tool structure with the environment by means of insulation material, the authors could reduce the amplitude of the thermally induced displacements.

**Boundary conditions in thermal models of machine tools**

The boundary conditions in machine tools define the interactions of the different structural parts with the surrounding environment. An accurate estimation of the boundary conditions is a topic of active research. In many of the models found in the literature, it is customary to find simplifying assumptions in the description of the boundary conditions. Some works opt for a coarse approximation, defining just one value of the HTC for all the surfaces of the domain. Other models define specific values of the HTC for each area of the boundary, according to empirical correlations. In order to evaluate the convective boundary conditions, one option is to measure the values of the HTC directly. Jedrzejewski et al. [67] determined experimentally the HTC due to forced convection in a rotating spindle. The work of Heisel et al. [53] focused on the experimental determination of the HTC on planes at different inclinations and flow regimes. The authors defined several empirical correlations of the Nusselt number in terms of the Rayleigh number and the inclination angle. The main goal of the research was to create a database that assists thermal simulation. This database does not only contain the HTC for natural convection but also includes material properties, heat fluxes (developed, for example, in the spindle bearings), and forced convection and radiation boundary conditions. Kohút et al. [69] developed a custom-made probe for the determination of the HTC in machine tools. The probe consisted of three temperature sensors, from which the coefficients are calculated via a local heat balance. The system is calibrated against known empirical formulas for forced convection in a wind tunnel. Another approach to estimate the HTC is to use empirical formulas available in the literature, which correlate the Nusselt number with other dimensionless numbers such as the Prandtl or Rayleigh number. The VDI Heat Atlas [44] provides a compilation of the empirical correlations for general cases.
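Such correlation-based estimates are straightforward to implement. The sketch below evaluates a natural-convection HTC for a vertical surface with a Churchill–Chu-type Nusselt correlation; the correlation itself is the commonly published form, while the air properties and geometry are illustrative assumptions rather than values taken from the works cited above.

```python
import math

def natural_convection_htc(T_surface, T_air, L, nu=15.1e-6, alpha=21.4e-6, k=0.026):
    """Estimate the HTC for a vertical plate via the Churchill-Chu correlation.

    T_surface, T_air: surface and ambient temperatures in K
    L: characteristic length (plate height) in m
    nu, alpha, k: kinematic viscosity, thermal diffusivity, and conductivity
    of air (illustrative values at roughly 300 K).
    """
    beta = 1.0 / ((T_surface + T_air) / 2.0)       # ideal-gas expansion coefficient
    Ra = 9.81 * beta * abs(T_surface - T_air) * L**3 / (nu * alpha)
    Pr = nu / alpha
    # Churchill-Chu form, valid over the laminar and turbulent Ra range
    Nu = (0.825 + 0.387 * Ra**(1 / 6)
          / (1 + (0.492 / Pr)**(9 / 16))**(8 / 27))**2
    return Nu * k / L                               # HTC in W/(m^2 K)

# Example: a 0.5 m tall structural face, 5 K warmer than the surrounding air
print(f"h = {natural_convection_htc(303.0, 298.0, 0.5):.2f} W/m2K")
```

For the parameters above, the estimate falls in the low single digits of W/(m²K), which is consistent with the slow-air-flow range reported in the literature reviewed in this section.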
Zwingenberger [134] used these empirical correlations for the heat transfer coefficients and applied them to the whole machine tool model. The work considered two types of boundary conditions, natural convection and radiation. The only convective heat transfer considered was natural convection in open spaces, i.e. the influence of the machine housing was not taken into account. In this work, Zwingenberger implemented an automatic HTC calculation routine. It classified the different elements of the convective boundary according to their orientation with respect to the gravity vector. In addition, Zwingenberger considered air stratification, assigning a different environmental temperature depending on how far the element is from the foundation. In order to have an accurate estimation of the convective boundary conditions, the enclosure of the machine tool needs to be considered. Pavliček et al. [98] showed experimentally that the measured thermal response of a machine tool is clearly affected by the influence of the machine housing. In order to estimate the values of the HTC inside the enclosure, Pavliček et al. [96, 97] developed a novel modeling strategy. Considering simplified geometries and localized heat sources, the authors simulated the convective heat exchange by means of computational fluid dynamics (CFD). The authors defined several dimensionless numbers characterizing the geometry of the enclosure and the heat source. These numbers were then correlated to a Nusselt number, which incorporated the combined effect of natural convection and radiation. Another important aspect in characterizing the boundary conditions is an accurate estimation of the heat losses in the different machine elements. Thermo-energetic models characterize the energy demand of the whole machine tool assembly. By determining the energy efficiency of the machine elements, the heat losses can also be evaluated, providing values for the boundary conditions of thermo-mechanical models. Züst [133] implemented the simulation platform EMod to evaluate the energy demand of machine tools and implemented models to characterize the behavior of several elements, such as bearings, synchronous motors, or pumps. The values of the heat losses and contact conductivities can be exported to a text file and incorporated into a physical thermal model of the machine tool, as shown by Züst et al. [132]. The definition of the boundary conditions, i.e. convection and heat losses at the elements, defines the thermal behavior of the model. However, the parameters associated with the description of the thermal boundary conditions of the model are not deterministic. Therefore, thermo-mechanical models need to deal with the intrinsic uncertainty associated with the values of the boundary conditions. In the case of thermo-mechanical models of machine tools, the parameters describing the heat transfer between the machine tool and the surrounding fluids, namely the HTC, are one of the main sources of uncertainty. These values are affected by the conditions of the air flow inside the machine tool housing as well as outside the enclosure. These flow conditions are hard to assess. Therefore, the values of the HTC are exposed to a variability and uncertainty which need to be considered during the modeling process. The concept of parameter uncertainty is usually linked to sensitivity analysis.
During the model development and validation, it is useful to know how sensitive the outputs are to variations of the parameters describing the model. The sensitivity analysis determines which parameters are most relevant for describing the thermo-mechanical behavior of the system. Denkena et al. [36] introduced the concept of parameter uncertainty and sensitivity analysis applied to a thermo-mechanical model of a 5-axis machine tool. Figure 2.4 illustrates the kinematic configuration of the investigated machine tool. Denkena et al. concentrated on the evaluation of the sensitivity of the model to the variation of four different parameters, i.e. thermal expansion coefficient, thermal conductivity, HTC, and emissivity. The thermal model considered the steady state response of the machine tool under homogeneous fluctuations of the environmental temperature. The outputs of the model were the ratios between the thermally induced deviations and the temperature variation. The structural deformations of the model were evaluated at 5 different points of the structure in X-, Y-, and Z-direction, as illustrated in Figure 2.4. The thermo-mechanical model considered one homogeneous convective boundary condition affecting the whole machine tool structure. The nominal values of the HTC were calculated according to an empirical formula for slow air flow, leading to values between 2 and $12 \frac{W}{m^2K}$. Denkena et al. evaluated the thermo-mechanical model at 3 different values of the HTC, as shown in Figure 2.4. The authors concluded that the deformation of the Z-axis in Z-direction showed the largest sensitivity to variations of the HTC, leading to variations of the output results of up to 10% of the nominal value.

![Figure 2.4: Thermo-mechanical model of a 5-axis machine tool from Denkena et al. [36] and effect of the variation of the HTC in the TCP displacements relative to the workpiece](image)
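To make the idea of such a one-at-a-time sensitivity study concrete, the following sketch perturbs the HTC of a toy single-node thermal model and reports the relative change of the steady-state elongation; the lumped model and all parameter values are illustrative assumptions and do not reproduce the model of Denkena et al.

```python
def steady_state_elongation(htc, area=1.0, q_in=50.0, alpha_exp=12e-6, length=0.5):
    """Toy single-node thermal model: a heat input q_in (W) is balanced by
    convection over a surface of the given area (m^2); the temperature rise
    drives the free thermal expansion of a steel-like bar of the given
    length (m). All parameter values are illustrative."""
    dT = q_in / (htc * area)                  # steady-state temperature rise, K
    return alpha_exp * length * dT            # elongation, m

nominal_htc = 7.0                             # W/(m^2 K), mid-range of 2-12
ref = steady_state_elongation(nominal_htc)
for factor in (0.5, 1.0, 2.0):                # one-at-a-time HTC variation
    elong = steady_state_elongation(factor * nominal_htc)
    print(f"HTC x{factor:3.1f}: elongation = {elong * 1e6:6.1f} um "
          f"({100 * (elong - ref) / ref:+6.1f} % vs. nominal)")
```

Even this crude balance shows why the HTC matters: at steady state the temperature rise, and hence the elongation, scales inversely with the HTC, so halving the coefficient doubles the predicted error.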
### 2.3 Thermal error compensation

The second alternative to reduce the thermally induced displacements is creating models that predict the thermal errors in real time. According to Wegener et al. [124], the compensation approaches can be classified depending on the sensor technology, the modeling approach, and the actuator technology. This classification is illustrated in Figure 2.5. Firstly, the information on the thermal state of the machine tool needs to be collected. The numerical control (NC) or external sensors (e.g. temperature sensors or measurements at the TCP) can provide the required data. Secondly, a prediction model needs to be created that estimates the thermally induced displacements. Finally, an actuator compensates the resulting thermal errors using the predictions of the model.

| Sensor technology | Modeling | Actuator technology |
|-------------------|----------|---------------------|
| NC program | Physical equations | Cooling, heating |
| Power supply | MOR | Machine axes |
| Temperature measurements | Thermo-balance model | Additional axes |
| Measurements of thermal elongation | Phenomenological model | |
| Position measurement | Artificial intelligence | |

*Figure 2.5: Thermal error compensation strategies, adapted from Wegener et al. [124]*

The different approaches to predict the thermally induced displacements differ in their level of description of the physics. This section distinguishes four types of compensation models: black box static models, black box dynamic models, simplified physical models, and physical models. The black box static models are based on a time-independent correlation between inputs (e.g. temperature data) and outputs (e.g. TCP displacements). These models rely on artificial intelligence (AI) algorithms and other types of regression algorithms. The static models based on regression algorithms aim at predicting thermal TCP position and orientation errors using temperatures measured at certain locations of the machine tool. Chen et al. [32] used polynomial regression in order to map the measured time-variant surface temperatures to a volumetric thermal machine tool error model. The approach of polynomial regression has been used extensively by other researchers [37, 88, 131]. More complex algorithms have been proposed to map the surface temperatures to thermal TCP errors. Mou [87] presented a method based on artificial neural networks. Characteristic diagrams are another option for the description of thermally induced errors from temperature data. Naumann et al. [91] developed characteristic diagrams based on kernel polynomial functions, which map several temperature measurements to an axial displacement of the machine tool spindle. Using the results of a finite element (FE) model as training data, the characteristic diagram succeeded in predicting the displacement of the TCP in Z-direction. Naumann et al. [91] considered exclusively one thermal load at a fixed position and a homogeneous environment. The introduction of more than one machine position as a further input factor, as well as the practical implementation in a machine tool, was left to the outlook of the work.
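As a minimal illustration of such a static regression model, the sketch below fits a least-squares polynomial that maps two temperature readings to a TCP displacement; the synthetic data, sensor count, and model order are assumptions chosen for demonstration, not the setup of the works cited above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two temperature sensors (K above reference) and
# the measured TCP displacement in Z (um); purely illustrative.
T = rng.uniform(0.0, 10.0, size=(200, 2))
z = (3.2 * T[:, 0] + 1.1 * T[:, 1] - 0.08 * T[:, 0]**2
     + rng.normal(0.0, 0.2, 200))             # noisy "measurements"

def features(T):
    """Second-order polynomial features: [1, T1, T2, T1^2, T1*T2, T2^2]."""
    t1, t2 = T[:, 0], T[:, 1]
    return np.column_stack([np.ones_like(t1), t1, t2, t1**2, t1 * t2, t2**2])

# Least-squares fit of the polynomial coefficients
coeffs, *_ = np.linalg.lstsq(features(T), z, rcond=None)

# Prediction for a new thermal state, e.g. T1 = 4 K, T2 = 7 K above reference
T_new = np.array([[4.0, 7.0]])
print(f"predicted TCP error: {(features(T_new) @ coeffs)[0]:.2f} um")
```

The time-independent nature of the fit is exactly the limitation discussed next: the same pair of temperatures always yields the same prediction, regardless of the thermal history of the machine.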
The second modeling approach is the black box dynamic model, also described in the literature as a phenomenological model. The static compensation models rely on the quasi-static behavior of the thermoelastic process. They aim at predicting the structural displacements assuming that the temperature distribution on the machine is known. Yang and Ni [129] pointed out that the main drawback of the regression models is that a finite number of temperature sensors cannot fully describe the temperature field. They described the concept of pseudo-hysteresis of the thermoelastic deformation, showing that the correlation of temperature sensor data to the machine displacement is not unique. They developed a dynamic model with a multiple input single output (MISO) structure, where the inputs of the system were temperature sensors located close to the locations of the heat losses. Numerical and experimental validation of the axial elongation of a spindle was carried out. The authors claimed that the model was able to predict 80% of the maximum error range, showing the validity of the approach. Yang and Ni [130] also proposed a recursive adaptation of the dynamic model. The recursive adaptation of the model parameters, based on a Kalman filter, allowed the compensation strategy to deal with long-term process variation. The authors conducted their experimental validation on a three-axis machining center, where the thermal error in Z-direction was compensated. Horejš et al. [58] presented a phenomenological model based on a first-order transfer function capable of predicting the displacement in Z-direction due to thermal effects of a 4-axis milling center. They considered as input parameters the temperature of the spindle, the environment, and the inlet and outlet of the coolant. Furthermore, they also selected the rotational speed of the spindle as an input. They succeeded in reducing the range of the Z-displacements over 3 days from 97 to 24 µm. The authors compared their approach with regression models based on eight temperature sensors distributed throughout the machine. The authors showed that the model based on a transfer function was more robust and accurate than the regression models. Gebhardt [42] concentrated on phenomenological models to compensate the thermally induced location errors of the rotary table and the swiveling axis of a five-axis machine tool. The position and orientation errors were measured with the R-Test, evaluated at four different positions of the rotary table. This work described each error with a first-order differential equation, whose input was the velocity of the axis, read out from the control of the machine tool. The NC code was generated arbitrarily and automatically, which facilitated programming the movement of the axes over the whole duration of the identification period. The author further studied the optimal duration for an appropriate parameter identification and showed the statistical distribution of the residual of the compensation for different identification intervals. This study led to a compensation strategy which corrected up to 85% of the thermal position and orientation errors of the rotary and swiveling axes, requiring 48 h for the parameter identification. Mayr et al. [80] used the same approach to include the effect of the cutting fluid and the main spindle. Several further examples of phenomenological models, e.g. [22], can be found in the literature. The third modeling approach is based on models with simplified physics, such as thermo-balance models or lumped physics models. In comparison with the static and dynamic models reviewed before, the thermo-balance models use a simplified physical description of the machine tool thermal behavior and identify the model parameters empirically. Gebhardt [42] proposed a compensation strategy based on the concept of thermo-balance. The machine tool parts were modeled by simplified geometries, with homogeneous, lumped material properties. The heat exchange between the parts and the environment was considered. The values of the model parameters were identified so as to minimize the difference between the measured and predicted displacements. Gebhardt [42] succeeded in reducing the thermal displacements by applying thermo-balance models to a 5-axis machine tool. As an example, the thermally induced deviation in Y-direction due to the rotation of the C-axis was reduced by 84%. Physical models are also used for thermal error compensation. These models, based on the discretized physical equations, predict the thermo-mechanical behavior of the system. Mayr [78] explained the potential to compensate thermally induced errors at different positions of the working space based on the FDEM simulation. The methodology was introduced for a three-axis portal machine, without implementing an online compensation. Ess [39] outlined a thermal compensation strategy based on a physical model. The volumetric errors were computed at 27 positions every minute, while the thermal system was solved with a 2.5 s time step. The TCP was corrected every 250 ms, interpolating the current thermal TCP displacement from the simulated volumetric error at the 27 locations. Several temperature probes measuring the environment and the feed drives were the input of the model. Three displacement probes measured online the X-, Y-, and Z-displacements of the TCP with respect to the workpiece.
In order to capture the position-dependent errors, a cross-grid was also installed. The compensation could capture certain trends of the thermal errors, showing the potential of physical models for online compensation. The challenges associated with implementing a compensation based on physical models were summarized by Thiem et al. [118]. The authors studied the different cycle times required for capturing the thermal loads, solving the thermal system of equations, and computing the compensation values. The authors also explained theoretically the challenges associated with correcting not only thermally induced linear TCP displacements but also orientation errors in the whole working volume of a machine tool with a more complex kinematic chain.

### 2.4 Model Order Reduction

### 2.4.1 Non-parametric MOR

The complexity of thermo-mechanical models of mechatronic systems leads to a FEM discretization with a large number of degrees of freedom. Therefore, the applicability of physical models is limited when a large number of model runs or real-time capabilities are required. Surrogate models are computationally efficient models that reproduce the characteristic behavior of the high-fidelity physical model. Benner et al. [18] grouped the surrogate models into three different categories: data fit models, hierarchical models, and projection-based models. The data fit surrogate models fit the model outputs to a functional depending on the model parameters. Kriging or polynomial chaos expansion (see Sudret et al. [116]) are examples of data fit surrogate models. Their main advantage is that they are not intrusive, i.e. only an evaluation of the high-fidelity model is required. On the other hand, their main drawback is that they are only valid for the parameter values for which they were trained. Hierarchical models are based on simplifying the physics of the higher-fidelity model. Lumped mass multibody simulation is an example of hierarchical models. Projection-based surrogate models, or MOR, are based on the projection of the high-fidelity model onto a lower-dimensional subspace. All states of the reduced system are contained in this subspace, which provides the most relevant information about the dynamics of the system. The main advantage of projection-based model reduction is that it retains the system structure and allows the traceability of the dynamical evolution. Projection-based reduction uses the underlying structure of the system, enabling the derivation of error bounds of the surrogate model, as explained by Benner et al. in [18]. The main disadvantage compared to data fit models is that it is an intrusive method, as it requires access to and modification of the system matrices. MOR is increasingly used both in academia and industry, enabling applications such as parameter identification [89, 92, 93], uncertainty analysis [33, 34, 35, 59], design optimization [6, 73, 74, 76], and real-time control of systems [7, 21, 119]. Figure 2.6 illustrates the concept of MOR. Let the state $x$ be the position of a particle in space. In principle, $x$ can be at any point in space. This can be mathematically expressed as $x \in \text{span}(e_1, e_2, e_3)$, where $e_1$, $e_2$, $e_3$ are an orthonormal basis of $\mathbb{R}^3$. Due to the dynamics of its motion, the particle is restricted to the trajectory shown in Figure 2.6. MOR searches for the optimal plane, defined by $\text{span}(v_1, v_2)$, onto which the trajectory of the particle is projected.
This basis $\text{span}(v_1, v_2)$ is computed such that the difference between the original and the projected trajectory is minimized. The concept shown in Figure 2.6 can be generalized to a linear time-invariant (LTI) system, such as

$$E\dot{x}(t) = Ax(t) + Bu(t) \quad (2.1)$$

where $A$ is the system matrix, $E$ is the mass matrix, $B$ is the input matrix, and $u(t)$ is the input vector. The output of the system can be defined as

$$y(t) = Cx(t) \quad (2.2)$$

where $C$ is the output matrix and $y(t)$ is the system output vector. Projection-based MOR consists of finding a basis $V$ such that

$$x(t) \approx V\tilde{x}(t) \quad (2.3)$$

where $\tilde{x}(t)$ is the reduced state. Substituting Equation (2.3) into Equations (2.1) and (2.2) yields

$$EV\dot{\tilde{x}}(t) = AV\tilde{x}(t) + Bu(t), \qquad \tilde{y}(t) = CV\tilde{x}(t) \quad (2.4)$$

By enforcing the Petrov–Galerkin condition, a basis $W$ can be found such that

$$W^T EV\dot{\tilde{x}}(t) = W^T AV\tilde{x}(t) + W^T Bu(t) \quad (2.5)$$

leading to the projected system matrices $\tilde{E} = W^T E V$, $\tilde{A} = W^T A V$, $\tilde{B} = W^T B$, and $\tilde{C} = C V$ and the following reduced system

$$\tilde{E}\dot{\tilde{x}}(t) = \tilde{A}\tilde{x}(t) + \tilde{B}u(t), \qquad \tilde{y}(t) = \tilde{C}\tilde{x}(t) \quad (2.6)$$

The different MOR techniques for LTI dynamical systems are well established and already integrated in commercial FEM software packages. Antoulas [7] provides a comprehensive review of the reduction algorithms for LTI systems. Different MOR techniques construct the reduced subspaces according to different criteria. This section reviews the following MOR techniques to approximate large dynamical systems: proper orthogonal decomposition (POD), balanced truncation (BT), and moment matching (MM).

**Proper Orthogonal Decomposition (POD)**

POD is a MOR method applicable to both linear and non-linear systems. A description of this method can be found in Kunisch and Volkwein [70]. POD evaluates the original system in the time domain at different time steps, called snapshots. These snapshots are stored in a matrix $X$. A singular value decomposition (SVD) factorizes the snapshot matrix as

$$X = U \Sigma Y^T \quad (2.7)$$

where $\Sigma$ is the singular value matrix and $U$ and $Y$ contain the left and right singular vectors, respectively. The reduction basis $V$ is constructed from the singular vectors of the snapshot matrix $X$: for the projection matrix $V$, the singular vectors corresponding to the largest singular values are chosen. The truncation criterion is that the sum of the squared singular values retained in the reduced system approaches that of the full snapshot matrix,

$$\frac{\sum_{i=1}^{r} \sigma_i^2}{\sum_{i=1}^{n_s} \sigma_i^2} \simeq 1 \quad (2.8)$$

where $r$ is the size of the reduced system, $n_s$ the number of snapshots, and $\sigma_i$ the singular values. This truncation criterion implies that the energy captured by the POD is similar to that of the full system. The choice of the snapshots is critical to the quality of the reduced model, as the snapshots need to contain all the information about the behavior of the system.
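The following sketch walks through the workflow of Equations (2.1)–(2.8) on a small artificial thermal system: snapshots are collected in the time domain, a POD basis is extracted via the SVD, and the system matrices are projected with a Galerkin projection ($W = V$). The system, its dimension, and the truncation tolerance are made up for the example.

```python
import numpy as np

n = 200                                        # full-order dimension

# Artificial diffusion-like LTI system E x' = A x + B u, y = C x
E = np.eye(n)
A = (-2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))            # stable tridiagonal matrix
B = np.zeros((n, 1)); B[0, 0] = 1.0            # heat input at one end
C = np.zeros((1, n)); C[0, -1] = 1.0           # temperature at the other end

# Collect snapshots with backward Euler and a unit step input
dt, steps = 0.05, 400
x = np.zeros((n, 1))
snapshots = []
M = np.linalg.inv(E - dt * A)                  # (E - dt A)^-1, fixed step size
for _ in range(steps):
    x = M @ (E @ x + dt * B * 1.0)             # u(t) = 1
    snapshots.append(x.ravel())
X = np.array(snapshots).T                      # snapshot matrix, n x steps

# POD: SVD of the snapshot matrix, truncated by the energy criterion (2.8)
U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
V = U[:, :r]                                   # POD reduction basis

# Galerkin projection (W = V) as in Equations (2.4)-(2.6)
Er, Ar, Br, Cr = V.T @ E @ V, V.T @ A @ V, V.T @ B, C @ V
print(f"reduced order r = {r}; reduced shapes:", Ar.shape, Br.shape, Cr.shape)
```

Note that the basis is only as good as the snapshots: a reduced model built from this step response cannot be expected to reproduce loads that excite dynamics absent from the training trajectory.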
**Balanced Truncation (BT)**

BT originates from linear system theory and builds on the concepts of controllability and observability. According to Antsaklis and Michel [8], a system is controllable if for every $x_0 \in \mathbb{R}^n$ there exists an input $u$ that steers the system from $x_0$ at $t_0$ to $x_1$ at $t_1$. The subspace that contains all controllable states is the controllability subspace, which is the range of the reachability map $L_r$, namely

$$L_r : L^2([t_0, t_1], \mathbb{R}^m) \mapsto \mathbb{R}^n, \qquad L_r : u \mapsto \int_{t_0}^{t_1} e^{E^{-1}At} E^{-1} Bu(t)\, dt \quad (2.9)$$

The reachability map maps from the Hilbert space of input functions $L^2([t_0, t_1], \mathbb{R}^m)$ to a finite-dimensional space, $\mathbb{R}^n$. In order to calculate the range of the reachability map, the reachability Gramian $P$ is defined as

$$P = \int_{t_0}^{t_1} e^{E^{-1}At} E^{-1} BB^T E^{-T} e^{A^T E^{-T} t}\, dt \quad (2.10)$$

which is a map between two finite-dimensional spaces. According to the finite rank lemma [70], the range of the reachability map is the same as the range of the reachability Gramian. Considering the principle of duality between reachability and observability, the observability Gramian $Q$ can be defined analogously. In practice, the Gramians are calculated by solving the Lyapunov equations instead of evaluating the integrals. The Lyapunov equations, which also have implications for the system stability, read

$$APE^T + EPA^T + BB^T = 0, \qquad A^T Q E + E^T Q A + C^T C = 0 \quad (2.11)$$

BT retains the most observable and controllable states, i.e. those states that require the least energy to be controlled and provide the most energy during observation. From the Gramians, the Hankel singular values can be evaluated, which are the square roots of the eigenvalues of $PQ$. The states linked to the highest Hankel singular values are the ones forming the reduced subspace by means of BT. The main advantage of this method is that it preserves stability and provides a theoretical error bound on the reduced output. The main disadvantage is the need to solve the large-scale Lyapunov equations, especially for original models with a large number of degrees of freedom.

**Moment Matching (MM)**

Several MOR approaches are based on matching moments of the transfer function of the original system. The transfer function $H(s)$ can be derived by applying the Laplace transformation to the LTI system of Equation (2.1) as

$$H(s) = C(sE - A)^{-1}B \quad (2.12)$$

The transfer function $H(s)$ can be approximated as a Neumann series, i.e. a sum of infinite terms around the expansion point $s_0$, as

$$H(s) = \sum_{j=0}^{\infty} C\left(- (s_0 E - A)^{-1}E\right)^j (s_0 E - A)^{-1}B \,(s - s_0)^j \quad (2.13)$$

This formulation of the transfer function can be interpreted as a Taylor expansion series. As explained by Salimbahrami and Lohmann [106], the terms of the infinite series are called moments around $s_0$ and are used to describe the similarity between the original and the reduced system. The Padé approximation relies on matching the first $r$ moments of the transfer function; that is, at $s_0$ the transfer function of the reduced system matches the first $r$ derivatives of the transfer function of the original system. When the expansion point is $s_0 = 0$, this is usually called moment matching. Multipoint moment matching or multipoint rational interpolation refers to approaches where several moments are matched at several expansion points. In order to understand how to calculate the reduction bases $V$ and $W$, first the concept of the Krylov subspace needs to be reviewed.
According to Saad [104], given a matrix $P$ and a non-trivial vector $q$, the subspace

$$K_r \equiv \text{span}\{q, Pq, P^2q, \ldots, P^{r-1}q\} \quad (2.14)$$

is defined as the Krylov subspace of dimension $r$. Krylov subspace methods are widely used in different areas of numerical analysis, such as eigenvalue problems or iterative methods for solving linear systems [104]. The application of Krylov subspaces to MOR comes from the fact that building the projection matrix $V$ as

$$\text{span}(V) = K_r \{(s_0 E - A)^{-1}E, \,(s_0 E - A)^{-1}B\} \quad (2.15)$$

leads to matching the first $r$ moments around $s_0$, as explained by Salimbahrami and Lohmann [106]. The projection matrix $W$ is obtained similarly, considering the output matrix:

$$\text{span}(W) = K_r \{(s_0 E - A)^{-T}E^T, \,(s_0 E - A)^{-T} C^T\} \quad (2.16)$$

The most common algorithms for computing an orthonormal basis of the Krylov subspace are Arnoldi and Lanczos (see Saad [104]). These methods can be understood as a modified Gram–Schmidt algorithm, where each new basis vector is computed such that it is orthonormal to the previous basis vectors. As a generalization of multipoint rational interpolation methods, Gallivan et al. [41] proposed a MOR technique via tangential interpolation. Tangential interpolation constructs a reduction basis such that the reduced transfer function $\tilde{H}(s)$ tangentially interpolates the original transfer function $H(s)$ at several frequencies. A left and a right tangential direction need to be selected, namely $l$ and $r$. Baur et al. [12] showed that given a right tangential direction $r$, the following relationship is satisfied

$$(s_0 E - A)^{-1} B r \in \text{Range}(V) \implies H(s_0)\, r = \tilde{H}(s_0)\, r \quad (2.17)$$

and similarly for the left tangential direction. One advantage of this method is that by adding one vector to $V$ and one to $W$ that tangentially interpolate the transfer function, the derivative of $H(s)$ at $s_0$ is matched automatically. There are several possible choices for the tangential directions, such as the singular vectors associated with the highest singular values of the transfer function matrices. Gugercin et al. [51] developed a method for the selection of the optimal expansion points and tangential directions named the iterative rational Krylov algorithm (IRKA). The method searches iteratively for the expansion points and tangential directions in a $\mathcal{H}_2$-optimal sense.
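A compact sketch of one-sided moment matching is given below: the Arnoldi process with modified Gram–Schmidt orthogonalization builds an orthonormal basis of the Krylov subspace of Equation (2.15) for a single expansion point, and the projected system then matches the corresponding moments of the transfer function. The test system and the expansion point are illustrative assumptions.

```python
import numpy as np

def krylov_basis(E, A, B, s0, r):
    """Orthonormal basis of K_r((s0*E - A)^-1 E, (s0*E - A)^-1 B) via the
    Arnoldi process (modified Gram-Schmidt), for a single-input system."""
    Minv = np.linalg.inv(s0 * E - A)           # explicit inverse for clarity only
    v = Minv @ B.ravel()
    V = np.zeros((E.shape[0], r))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, r):
        w = Minv @ (E @ V[:, j - 1])           # next Krylov direction
        for i in range(j):                     # orthogonalize against the basis
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

# Illustrative system: stable diffusion-like chain, expansion point s0 = 0
n = 200
A = -2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
E = np.eye(n)
B = np.zeros((n, 1)); B[0, 0] = 1.0
C = np.zeros((1, n)); C[0, -1] = 1.0

V = krylov_basis(E, A, B, s0=0.0, r=10)        # matches 10 moments at s0 = 0
Er, Ar, Br, Cr = V.T @ E @ V, V.T @ A @ V, V.T @ B, C @ V

# Compare transfer functions H(s) = C (sE - A)^-1 B at a test frequency
s = 0.1j
H = (C @ np.linalg.solve(s * E - A, B)).item()
Hr = (Cr @ np.linalg.solve(s * Er - Ar, Br)).item()
print(f"|H - Hr| = {abs(H - Hr):.2e}")
```

In a production implementation, the explicit inverse would be replaced by a sparse LU factorization of $(s_0 E - A)$ and the orthogonalization would include re-orthogonalization for numerical robustness; the structure of the algorithm is otherwise the same.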
### 2.4.2 Error estimation

The error of the developed reduced models needs to be estimated efficiently. It is common practice to evaluate the true error by evaluating the transfer function for a large number of frequencies and comparing the transfer function of the full model to that of the reduced system. This true error evaluation can be a computationally expensive task when dealing with systems of high order. In order to avoid the computational costs associated with computing the true error, an error estimator can be defined. Bechtold et al. [14] use the difference in the frequency domain of two reduced models of successive order. The authors showed that the true and the estimated error behave similarly for frequencies close to the expansion point of the Arnoldi algorithm. Grimme [49] also proposed the comparison of two different reduced models, which had the same number of expansion points but different expansion frequencies. It was assumed that a small difference between the two reduced systems indicates that the true errors are also small. The residual can be used as an alternative to the previously mentioned error estimation based on comparing two reduced systems. Grimme [49] proved that the error can be expressed in terms of the residual, and therefore a small residual at a certain frequency $s_0$ typically implies a small error. Bui-Thanh et al. [28] also stated that the squared norm of the residual can be used as an a priori convergence indicator. If the basis of the reduction is increased, the residual decreases and consequently so does the output error. The authors used these results for the development of an adaptive sampling method using constrained optimization, namely the greedy sampling algorithm [121, 122, 47, 48]. Wolf et al. [128] proposed a $\mathcal{H}_2$ error bound for reduced models based on Krylov subspace methods. The authors showed a way to factorize the error into two terms, enabling an error bound based on the calculation of the observability Gramian $Q$. The developed error estimator was demonstrated for SVD–Krylov methods with a numerical example.

### 2.4.3 Parametric MOR

A large number of physical parameters, such as material properties, describe the models of mechatronic systems. However, after the reduction the values of the parameters are fixed and can no longer be modified in the reduced system. Therefore, MOR techniques for dynamical systems that enable the modification of the model parameters after reduction, known as parametric MOR, have become an active research topic. Let $\mathbf{p}$ be a set of parameters of interest in the model. The LTI representation of Equation (2.1) can then be expressed as

$$\mathbf{E}(\mathbf{p})\dot{\mathbf{x}}(t) = \mathbf{A}(\mathbf{p})\mathbf{x}(t) + \mathbf{B}(\mathbf{p})\mathbf{u}(t), \qquad \mathbf{y}(t) = \mathbf{C}(\mathbf{p})\mathbf{x}(t) \quad (2.18)$$

The parametric MOR techniques search for reduction bases $V$ and $W$ such that the parametric dependency is still present in the reduced system:

$$\tilde{\mathbf{E}}(\mathbf{p})\dot{\tilde{\mathbf{x}}}(t) = \tilde{\mathbf{A}}(\mathbf{p})\tilde{\mathbf{x}}(t) + \tilde{\mathbf{B}}(\mathbf{p})\mathbf{u}(t), \qquad \tilde{\mathbf{y}}(t) = \tilde{\mathbf{C}}(\mathbf{p})\tilde{\mathbf{x}}(t) \quad (2.19)$$

Benner et al. [18] reviewed the state of the art and the challenges associated with parametric MOR. In their review paper, Benner et al. classified the parametric reduction approaches into two groups, namely MOR with local bases at several parameter points and MOR with a global basis over the whole parameter space. The first group of parametric MOR techniques constructs different bases at different values of the parameters, distributed over the whole region of interest of the parameter space. The system can also be evaluated when the parameters take values different from those used for the calculation of the local bases. In this case, an interpolation between the different systems is required. This can be done by interpolating the local subspaces (see Amsallem and Farhat [5]), interpolating the locally reduced system matrices (see Panzer et al. [94]), or interpolating the transfer functions (see Baur et al. [13]). The quality of the reduction depends directly on how the parameters are sampled in order to construct the local bases.
For small to medium-sized parameter spaces, Latin hypercube sampling can be used, while for high-dimensional parameter spaces more sophisticated, problem-aware algorithms are available, such as greedy sampling (see [38]). The second group of parametric MOR techniques constructs a single set of bases $V$ and $W$ for all the values in the parameter space. One option for constructing a global reduction basis is the concatenation of the local bases, which are derived similarly as in the MOR approaches with local bases. A rank-revealing QR factorization follows the concatenation of the local bases. The other option is creating a global basis by means of bilinearization. The bilinearization process consists of considering an affine representation of the parameter dependency of the system matrix as

$$\mathbf{A}(\mathbf{p}) = \mathbf{A}_0 + \sum_{i=1}^{P} f_i(\mathbf{p})\mathbf{A}_i \quad (2.20)$$

where the $f_i(\mathbf{p})$ are functions that depend on the parameters of the system. The parametric system of Equation (2.18) can then be expressed as

$$\dot{x}(t) = A_0 x(t) + \sum_{i=1}^{P} f_i(p) A_i x(t) + Bu(t) \quad (2.21)$$

This system can be interpreted as a bilinear system with the $f_i(p)$ considered as new inputs to the system. Bilinear systems are nonlinear systems of a particular form: they are linear in the state and in the input separately, but contain a product term between state and input. Phillips [100, 101] investigated the reduction of bilinear systems with application to RC circuits. The author applied rational interpolation to bilinear systems using a functional series expansion. Bai and Skoogh [10] generalized the work of Phillips, constructing the reduction basis in a way that matches a desired number of moments of the bilinear system. Breiten and Damm [23] continued the generalization of the reduction of bilinear systems with Krylov subspace methods, including expansion points at values other than zero. Benner and Breiten [17, 16] combined the previous work on bilinear systems with IRKA, creating the bilinear iterative rational Krylov algorithm (BIRKA). This reduction method proved to be $\mathcal{H}_2$-optimal for parameter-varying systems. Bruns and Benner [25] applied this method to a reduced thermal model of an electric motor, where the parametric dependency of several convective boundary conditions and contact thermal resistances was studied.
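To show how an affine parameter dependency of the form of Equation (2.20) survives a global projection, the sketch below concatenates Krylov bases sampled at two parameter values and projects each affine term separately, so that the parameter remains adjustable in the reduced model. The system and the single HTC-like parameter are artificial assumptions made for the example.

```python
import numpy as np

n = 200
# Affine system matrix A(p) = A0 + p * A1, as in Equation (2.20)
A0 = -2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A1 = np.zeros((n, n)); A1[0, 0] = -1.0       # e.g. convection on a boundary node
E = np.eye(n)
B = np.zeros((n, 1)); B[0, 0] = 1.0

def krylov(Ap, r):
    """One-sided Krylov basis at expansion point s0 = 0 for a given A(p)."""
    Minv = np.linalg.inv(-Ap)
    v = Minv @ B.ravel()
    cols = [v / np.linalg.norm(v)]
    for _ in range(r - 1):
        w = Minv @ (E @ cols[-1])
        for c in cols:                        # modified Gram-Schmidt step
            w -= (c @ w) * c
        cols.append(w / np.linalg.norm(w))
    return np.array(cols).T

# Global basis: concatenate local bases sampled at two parameter values,
# then orthonormalize the concatenation (a QR step in place of rank-revealing QR)
V = np.hstack([krylov(A0 + p * A1, 8) for p in (2.0, 12.0)])
V, _ = np.linalg.qr(V)

# Project each affine term once; p stays traceable after reduction
E_r, A0_r, A1_r, B_r = V.T @ E @ V, V.T @ A0 @ V, V.T @ A1 @ V, V.T @ B
p = 7.0                                       # any value of the HTC-like parameter
A_r = A0_r + p * A1_r                         # reduced A(p), assembled cheaply
print("reduced sizes:", A_r.shape, B_r.shape)
```

The key point is that the projection touches the large matrices only once per affine term; afterwards, evaluating the reduced system for a new parameter value costs only a small matrix sum.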
### 2.5 Application of MOR for mechatronic systems

Several works in the literature apply the concept of projection-based MOR to thermo-mechanical models of machine tools. Galant et al. [40] developed a reduced thermal model of a milling machine column. The transformation matrices for the projection onto a Krylov subspace were computed with the classic Arnoldi process. In order to deal with the variable position of the heat load due to the movement of the machine tool axis, the guideways were divided into segments. For each segment a reduced system was computed, and for intermediate positions the results were linearly interpolated. In order to calculate the thermally induced displacements, the authors reconstructed the full temperature field and multiplied it by the output matrix. The preprocessing of the model as well as the division into segments was done automatically in Ansys, while the MOR and the solution of the system were implemented in Matlab. The authors concluded that the temperature and the displacements of the machine column simulated with the reduced model showed good agreement with the results of the full model. Lang et al. [72] studied the reduction of models with moving thermal loads, applying their results to the same study case as Galant et al. They compared two reduction methods to deal with the parametric dependency of the system matrices due to the movement of the axes. Firstly, they introduced the concept of the switched linear system (SLS) approach. Similarly to Galant et al., they divided the guideway into different segments and computed the reduced system matrices for each discretized position of the axis. In contrast to the previously presented work, the authors opted for BT as their reduction method. With this reduction method, they computed two projection matrices ($V$ and $W$) for each possible position of the axes. The output of the reduced model was the displacements of 9 nodes. As an alternative to the reduction techniques with local projection matrices presented so far, Lang et al. considered the structural variability as a continuous parameter. The thermal system matrices expressed the parametric dependency by an affine representation. The projection matrices were calculated using IRKA, which considered information at different sampling points of the parameter space. They implemented the continuous parametric dependency of the position of the axes by dividing the guideway into horizontal layers, which were coincident with the mesh of the contact area. In order to preserve stability, they used a one-sided projection. The SLS approach and the parametric reduction via IRKA were compared to the full model. Partzsch et al. [95] focused on moving heat fluxes on stationary structures. The motivation for their research was moving parts in thermo-mechanical models of machine tools. Partzsch et al. claimed that the coarse time integration of moving loads leads to systematic errors in the heat that is input to the system. They proposed a correction approach for the heat fluxes from continuous motions, increasing the accuracy of integrations with coarse time steps. In their work, the authors considered exclusively Neumann boundary conditions, which correspond to the heat losses in the machine elements. They neglected the heat introduced due to the contact between the parts. Naumann et al. [90] reviewed the thermo-mechanical modeling approaches with structural motions and compared their performance. The performance criteria were accuracy, real-time capability, and memory demand. The authors compared the results of MOR by IRKA presented by Lang et al. [72] with commercial FE software and an open-source full FE model using standard target and contact elements. The study case for the performance test was a thermo-mechanical model of a column stand with a moving headstock over 16.5 h with a varying velocity profile. The authors showed that the differences between the methods are below 10%, while only the reduced models are tractable for real-time application purposes. Efficient thermo-mechanical models of machine tools are also used for applications requiring large numbers of model evaluations. An illustrative example is the problem of optimal temperature sensor placement, which aims at selecting the locations of temperature sensors that maximize the information about the thermal state of the system. The structural temperature distribution can be estimated from the optimally placed temperature measurements, and subsequently the TCP errors can be derived. Herzog et al. [54, 55] used POD for reducing the full FE model of a machine tool column.
In their work, the authors considered a fixed position of the heat loads, leading to a time-independent optimal placement algorithm due to the linearity of the problem. The methodology placed the sensors sequentially, in order to maximize the information gained by adding each next sensor. This work stated that the optimization of the sensor locations needs to focus on the reconstruction of the TCP displacement instead of the temperature field. Benner et al. [19] continued the research on optimal sensor placement by comparing the performance of different reduction methods. The authors focused on the same study case of the machine tool column introduced by Herzog et al. The evaluated MOR approaches were POD, BT, and two MM methods, i.e. Padé approximation and IRKA. The authors explained that, for all the reduction approaches, the larger the number of degrees of freedom of the reduced model, the larger the number of coefficients that need to be estimated to map temperatures to displacements. Thus, reduced models with a small number of degrees of freedom showed the best performance. Comparing the different reduction approaches, the best performance in terms of TCP prediction accuracy was achieved with POD for loads inside the POD training set. However, with noisy temperature data or thermal loads differing from those of the POD training set, IRKA and BT achieved better or comparable performance.

### 2.6 Discussion of the State of the Art

This section discusses the state of the art presented in the previous sections, in order to identify the research gaps which are the focus of this work.

**Physical thermo-mechanical models of machine tools**

Most of the thermo-mechanical models of machine tools reviewed in Section 2.2 evaluate the response of the full system. Due to the structural complexity of machine tool assemblies, the evaluation of the full model is computationally time-consuming. Therefore, full thermo-mechanical models cannot be used as real-time state observers or for applications requiring a large number of model evaluations. Some of the reviewed works (see [110, 117, 127]) only considered the thermal steady state for evaluating the thermally induced displacements. In most applications, the thermal loads are time-varying, e.g. environmental temperature fluctuations, and therefore the different time constants of the parts of the machine tool assembly play an important role. Other authors (see [39, 83]) evaluated the transient response of the full machine tool assembly. However, the temperature distribution and the associated displacements of the TCP relative to the workpiece were simulated at only a single axis position due to the complexity of the system under consideration. Therefore, the effect of the position dependency on the temperature field was not considered. Weng et al. [127] computed the temperature distribution at one point of the working space and then evaluated the thermally induced displacements at different positions of the working space by a multibody simulation. This approach neglects that the machine tool compliance varies with the position of the machine tool axes, i.e. the stiffness matrix is not the same at different positions of the axes. Most of the reviewed publications used a commercial FEM software package to simulate the thermo-mechanical response of a machine tool to some specific internal or external influences.
Ess [39] showed that having a dedicated simulation package increases the efficiency of the modeling workflow, providing the required macro models and dedicated analyses.

**Model Order Reduction**

As presented in Section 2.4, there are several MOR techniques available in the literature. POD requires the evaluation of the transient response of the full thermo-mechanical model, which is computationally expensive. The selection of the snapshots, i.e. the training data for the reduction basis, is critical for the performance of the POD reduction approach. If the model needs to be evaluated for loads considerably different from the training set, the performance of POD decreases. Then a re-computation of the reduction basis is required, which is a computationally demanding task. One of the main advantages of POD is that it is one of the most general MOR techniques, capable of dealing with nonlinearities of the system. While this might be interesting for investigating the transient behavior of physical models with strong nonlinearities (e.g. CFD models), POD does not take advantage of the system properties of thermo-mechanical models of machine tools. BT is a MOR approach based on concepts from linear system theory and control. This method is mainly oriented towards small and medium-sized problems, which appear often in control systems. For full models with a large number of degrees of freedom, solving the large system of equations associated with Equation (2.11) is computationally too expensive. For large original systems, methods based on the low-rank approximation of the Gramians are available (see e.g. Kürschner [71]). However, the geometry of the thermo-mechanical models of machine tools normally leads to a complex FEM discretization with a large number of degrees of freedom, which makes even BT with low-rank approximated Gramians intractable. The MM techniques and their numerical implementation via the Arnoldi method are robust and well established. MM techniques are suitable for large models and exploit the linearity of the systems under consideration. In physical models of machine tools, there is a frequency range of interest. An error estimator is needed to ensure that the error of the reduced model does not exceed a certain tolerance in the frequency range of interest. The error estimators available in the literature are computationally expensive, as they require evaluating the full system [14], creating several reduction bases [49], or computing the system Gramian [128]. Considering that the interest lies in a specific frequency range, IRKA methods computing a $\mathcal{H}_2$-optimal reduction basis are not required. These methods would lead to excessively large models in order to capture the response of the system at frequency ranges far from the region of interest. Thermo-mechanical models include a large number of physical parameters describing their thermal behavior. One of the main challenges of conventional MOR is that these physical parameters can no longer be changed after reduction. The reviewed parametric MOR techniques enable the possibility to modify the physical parameters after the system is reduced. These methods have the disadvantage that they require a computationally more expensive offline phase, associated with the creation of the projection basis. Additionally, the resulting reduced models have a larger number of degrees of freedom. The parametric reduction approaches can be divided into two groups.
The first group of parametric MOR techniques reviewed [5, 13, 94] computes local bases for each of the parameter samples. There are applications that require a continuous parameter modification over a large parametric space, such as parameter identification during model validation. For these cases, a local-bases parametric MOR approach implies storing many different locally reduced systems and interpolating between them, which is computationally inefficient. The main advantage of these methods is that the local subspaces have a smaller number of degrees of freedom compared to global bases. The second group [10, 17, 23, 100] constructs a global basis for the whole parametric space. The concatenation of the local bases relies on a previous sampling of the parameter space. This is not desirable during the model validation phase, where the range of the parametric dependence is not well established in advance. MOR methods based on the bilinearization of the system provide the flexibility of a continuous change of the system parameters without previous knowledge of the parameter range, as long as an affine representation of the parameter dependency is available.

**Model Order Reduction in mechatronic systems**

Several publications used MOR on thermo-mechanical models of machine tool structures [40, 54, 55, 72]. These works concentrated on simple machine tool kinematics, namely a machine tool column with a headstock, and did not extend their approach to a full machine tool assembly. The authors tried several MOR approaches, such as POD in Herzog et al. [54], MM in Galant et al. [40], or IRKA in Lang et al. [72]. The parameters describing the reduction were chosen by experience, requiring considerable expert knowledge. The reviewed works focused on one specific parametric dependency, the thermal response of the system at different positions of the linear axes. Galant et al. [40] discretized the position of a headstock of a machine tool, considering several heat inputs along the movement of the axes. For a smooth transition between different positions, a large number of discretized inputs is required, which increases the size of the reduced system considerably. Lang et al. [72] presented a parametric affine representation of the position of the linear axis. The authors split the guideways into several discrete contact regions according to the FE mesh. Then the local reduction bases for each of the positions were calculated by means of IRKA. The main disadvantage of this method is that the offline phase of the reduction is computationally expensive, as there is a large number of local reduction bases to compute. In addition, the mesh dependency of the parametric description is not desirable. Another important aspect is the coupling of the thermal model with the mechanical model. Galant et al. [40] calculated the mechanical response directly in the full model. This incurs a high computational cost, as the stiffness matrix has to be inverted at every new position of the axes. Lang et al. [72] and Herzog et al. [54] considered the mechanical response as part of an output matrix $C$. However, these works did not consider that the stiffness matrix, and thus the output matrix, is also position dependent and needs to be considered in the parametric reduction as $C(p)$. The models reviewed so far concentrated on one parametric dependency, namely the position dependency of the machine tool axes.
Several works (see Pavliček [96]) highlighted the importance of the convective heat exchange with the environment, which is described by the HTC. The parameters defining convection might change over time, due to the variability of the surrounding environmental conditions. Therefore, parametric MOR is required in order to enable the traceability of the HTC after reduction. The reviewed publications do not use parametric MOR to trace the HTC of thermo-mechanical models of machine tools.

2.7 Research Gap

From the reviewed state of the art in thermal error models, the research gaps can be identified and serve as a basis to set the objectives of this thesis.

**Developing a MOR approach for thermal models**

A MOR technique needs to be introduced that is especially suited for the reduction of thermo-mechanical FE models of machine tools. The MOR approach needs to consider that the amplitude of the thermally induced displacements decays at higher frequencies. Thus, only the information regarding a limited frequency bandwidth needs to be included in the reduction. The reduction technique needs to provide an *a priori* error estimator, so that the parameters describing the MOR can be chosen automatically according to tolerances specified by the user.

**Developing MOR approaches with varying boundary conditions**

The possibility of tracing several parameters of the thermal model of the machine tool after reduction enables the use of the model under varying loading conditions. The focus of this thesis lies on the traceability of the HTC describing the convective heat exchange with the environment or external fluid media. It needs to be studied how this parametric dependency is included in the system matrix in order to develop an efficient parametric MOR approach.

**Coupling efficiently the thermal and the mechanical response**

For thermal error models of machine tools, the output of interest is the displacement of the TCP relative to the workpiece. This requires the evaluation of the mechanical deformation due to the changes in the temperature field. Therefore, an efficient coupling between the thermal and the mechanical model needs to be investigated. In order to characterize the mechanical response, the reduced mechanical model needs to include not only the thermal loads but also other static loads, such as gravity or preloads.

**Developing MOR approaches with moving boundary conditions**

Methods that describe the thermal response of the system at variable positions of the axes need to be developed. A continuous, affine representation of the moving thermal boundary condition needs to be defined, depending exclusively on the geometry instead of on the FE-mesh discretization. In addition to the traceability of the thermal contact, the mechanical model needs to take into account that the mechanical compliance varies at different positions of the TCP in the working volume. For one temperature distribution, the mechanical model needs to provide the volumetric position and orientation errors at different points of the working space.

**Developing a software platform**

The MOR methods need to be implemented in a dedicated software platform. This enables an efficient workflow for developing thermo-mechanical models of machine tools. This implementation provides an efficient interface to integrate macro models or specialized analyses useful to understand the thermal behavior of the system.

2.8 Outline of the thesis

The remainder of the present work is structured as follows.
Chapter 3 concentrates on the development of MOR techniques for thermo-mechanical models. Section 3.1 describes the FE discretization of the heat transfer equations. This FE model constitutes the high fidelity model, which is the reference system for the surrogate model. Section 3.2 focuses on developing a MOR technique that approximates the high fidelity model. The parameters defining the developed MOR approach are chosen according to an a priori error estimator introduced in Section 3.3. Finally, Section 3.4 describes the efficient coupling between the thermal system and the mechanical system, considering other quasi-static effects, such as gravity or static loads. Chapter 4 presents the MOR approaches for the modification of the parameters describing the convective boundary conditions. Section 4.1 describes the thermal interfaces and introduces the bushing interfaces in the context of MOR. Section 4.2 presents an approach to evaluate the thermal response at different positions of the axes with reduced models. Section 4.3 focuses on the development of MOR approaches that handle the parametric dependency of the convective boundary conditions. After introducing the concept of distributed interfaces, MOR approaches based on global and local bases are presented. Chapter 5 presents the developed simulation environment, MORe. This simulation platform integrates all the methods presented in the previous chapters. MORe is designed to enable an efficient model development and includes analysis tools for the evaluation of the behavior of machine tools. Chapter 6 focuses on two case studies of efficient thermo-mechanical models of 5-axis machine tools. The first example concentrates on the thermal response of the machine tool to fluctuations of the environmental temperature. The second example studies the thermal behavior of a machine tool subject to internal heat sources and cutting fluid influences. Chapter 7 presents the main conclusions of this thesis and outlines future work.

Model order reduction of thermo-mechanical models

This chapter describes the MOR methods for thermo-mechanical models. Section 3.1 presents the mathematical description of thermal models, namely the FE discretization of the heat transfer equations. Section 3.2 introduces the Krylov Modal Subspace (KMS) reduction method applied to thermal models. The chapter continues in Section 3.3 with the derivation of an a priori error estimator for the KMS reduction. The error estimator ensures before reduction that the error between the original and the reduced system is bounded in the frequency range of interest. Finally, Section 3.4 presents an efficient coupling method between the thermal and the mechanical system, in order to enable the evaluation of thermo-mechanical displacements.

3.1 FEM discretization of the heat transfer equation

Physical models of mechatronic systems describe the temperature distribution on the structure. The temperature distribution is a continuous function $T(t, z)$ for each point $z$ in the domain $\Omega$ and at each time. The heat transfer equation, based on the energy conservation principle, describes the temporal and spatial evolution of the temperature field. A PDE describes heat transfer as
$$\rho c_p \dot{T}(t, z) - \text{div}(\lambda \nabla T(t, z)) = 0$$ \hspace{1cm} (3.1)
where $c_p$ is the specific heat capacity, $\rho$ is the material density, and $\lambda$ is the thermal conductivity. In order to have a unique solution of the PDE, boundary and initial conditions need to be defined.
The Neumann boundary conditions can be interpreted as a heat flux applied to a surface $\Gamma_1$. Neumann boundary conditions are defined as
$$\lambda \frac{\partial T(t, z)}{\partial n} = \dot{q}(t, z)$$ \hspace{1cm} (3.2)
where $\dot{q}(t, z)$ is the heat flux applied on the boundary $\Gamma_1$. Convection is represented by Robin boundary conditions. The Robin boundary conditions are defined as
$$\lambda \frac{\partial T(t, z)}{\partial n} - h(t, z)(T(t, z) - T_{ext}(t, z)) = 0$$ (3.3)
where $h(t, z)$ is the heat transfer coefficient and $T_{ext}(t, z)$ is the external temperature acting on $\Gamma_2$. The Robin boundary condition represents the convective heat exchange between an external fluid medium and the structure, or the contact condition between two parts. Finally, the initial condition can be stated as
$$T(z, 0) = T_0(z)$$ (3.4)
over the whole domain $\Omega$. The PDE presented has an analytical solution for some simple geometries and simplifying assumptions. Carslaw and Jaeger [30] summarized the available analytical solutions of the heat transfer equations in solids. For general complex domains, the PDE cannot be solved analytically. Numerical methods, such as FE or finite differences, can provide a solution. This work opts for a FE discretization of the heat equations, as FE methods are widely accepted and established. Bathe [11] provides a comprehensive review of FE methods. There are many open source and commercial FE software packages available, which are essential to computer aided engineering (CAE). The FE method approximates the temperature field at any point $z$ inside a domain $\Omega^e$, i.e. $z \in \Omega^e$, by means of the values of the temperature at some discrete points $\theta^e$. The domain $\Omega^e$ is called element $e$ and the discrete points are the nodes of the element. The shape functions define the interpolation of the temperature at any point of the element $e$ from the nodal values as
$$T(t, z) = n_e^T(z)\theta^e(t)$$ (3.5)
where $n_e(z)$ is the vector with the values of the shape functions of the element at $z$ and $\theta^e(t)$ are the temperatures of the nodes of the element. Applying the principle of virtual work, the weak form of the PDE can be obtained and the system of ODEs for the element $e$ can be expressed as
$$C_{th}^e \dot{\theta}^e(t) + K_{cond}^e \theta^e(t) + K_{conv}^e(t) \theta^e(t) = q_{ext}^e$$ (3.6)
where $C_{th}^e$ is the thermal capacity matrix, $K_{cond}^e$ is the thermal conductivity matrix, $K_{conv}^e$ is the thermal convection matrix, and $q_{ext}^e$ is the thermal heat input vector. These matrices are defined as
$$C_{th}^e = \int_{\Omega^e} n_e c_p \rho n_e^T dz$$ (3.7)
$$K_{cond}^e = \int_{\Omega^e} B_e^T \lambda B_e dz$$ (3.8)
$$K_{conv}^e(t) = \int_{\Gamma_2^e} n_e h(t, z) n_e^T dz$$ (3.9)
$$q_{ext}^e = \int_{\Gamma_1^e} n_e (\dot{q}(t, z) + h(t, z) T_{ext}(t, z)) dz$$ (3.10)
where \( B_e \) is the spatial derivative of the shape function \( n_e \). Assembling over all the elements of the FE mesh, the ODE for the whole system is obtained as
\[ C_{th} \dot{\theta}(t) + K_{cond}\theta(t) + K_{conv}(t)\theta(t) = q_{ext} \] (3.11)
Equation (3.11) shows that the thermal system is a first order system of ODEs. In order to analyze the properties of this system, it is useful to derive its state space representation as
\[ E \dot{x}(t) = Ax(t) + Bu(t) \] (3.12)
where the mass matrix is \( E = C_{th} \), the system matrix is \( A = -K_{cond} - K_{conv} \), the input matrix is \( B \), and the input vector is \( u \).
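To make the discretization concrete, the following minimal sketch assembles Equation (3.11) for a one-dimensional bar with linear elements and writes it in the state space form of Equation (3.12). The geometry, the material values, and all variable names are illustrative and do not correspond to the thesis models.

```python
# Minimal sketch (illustrative, not the thesis implementation): assembly of the
# thermal system of Equation (3.11) for a 1D bar with linear elements.
import numpy as np

n_el, length = 10, 1.0               # number of elements, bar length [m]
rho, c_p, lam = 7800.0, 460.0, 50.0  # steel-like density, specific heat, conductivity
a_cs = 1e-4                          # cross section area [m^2]
h_el = length / n_el                 # element length
n = n_el + 1                         # number of nodes

C_th = np.zeros((n, n))              # thermal capacity matrix, Equation (3.7)
K_cond = np.zeros((n, n))            # conductivity matrix, Equation (3.8)
for e in range(n_el):
    idx = np.array([e, e + 1])
    C_e = rho * c_p * a_cs * h_el / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    K_e = lam * a_cs / h_el * np.array([[1.0, -1.0], [-1.0, 1.0]])
    C_th[np.ix_(idx, idx)] += C_e
    K_cond[np.ix_(idx, idx)] += K_e

# state space form of Equation (3.12): E x'(t) = A x(t) + B u(t),
# with a single heat flux input applied at the first node
E = C_th
A = -K_cond                          # no convection matrix in this minimal example
B = np.zeros((n, 1)); B[0, 0] = 1.0
```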
In addition, the temperature output of the system can be defined as
\[ y_{th}(t) = C_{therm}x(t) \] (3.13)
where \( y_{th}(t) \) is the output temperature vector and \( C_{therm} \) is the output matrix. This chapter only considers LTI systems, i.e. the system matrices are constant over time. This is satisfied as long as the physical parameters, such as the heat transfer coefficient \( h(t, z) \), stay constant over time. The time-invariance allows the derivation of the transfer function. The transfer function describes the relationship between the inputs and outputs of the systems of Equation (3.12) and (3.13) in the frequency domain. Applying the Laplace transform to Equation (3.12) and (3.13) leads to
\[ y_{th}(s) = H(s)u(s) = C_{therm}(Es - A)^{-1}Bu(s) \] (3.14)
where \( H(s) \) is a multiple input multiple output (MIMO) transfer function. The matrices forming the thermal Equation (3.11) are symmetric real matrices and positive semi-definite. A matrix \( M \) is positive semi-definite if it satisfies
\[ x^T M x \geq 0 \quad \forall x \in \mathbb{R}^n \] (3.15)
The semi-definiteness of the matrices forming the thermal system implies that all the eigenvalues of Equation (3.11) are real and non-negative. The system matrix of the state space representation in Equation (3.12) is thus negative semi-definite, with all real and non-positive eigenvalues.

### 3.2 Krylov and modal subspace reduction of thermal models

The geometrical complexity of mechatronic systems results in thermal models with a large number of degrees of freedom (DOF). The dimension of Equation (3.12) is typically in the order of magnitude of \( 10^6 \). Therefore, developing computationally efficient surrogate models is needed. This enables applications that require a large number of model evaluations or real time capabilities. Among the different surrogate modeling techniques, this work concentrates on projection based MOR applied to thermal models of mechatronic systems. The reduced model needs to satisfy the following requirements:

- Matching the steady state response
- Approximating the dynamics of the thermal system in the frequency range of interest

In order to satisfy both requirements, a combination of two MOR techniques is proposed. On one hand, MM Krylov subspace methods approximate the response of the system around an expansion point $s_e$. The expansion point can be placed at a low frequency close to zero in order to match the steady state response of the system. On the other hand, several modes of the thermal system can be included in the projection basis. Thus, the reduced system considers the thermal dynamics up to a certain frequency. This combination leads to the proposed reduction technique, Krylov Modal Subspace (KMS) reduction. Spezza [113, 114] introduced the KMS reduction technique for second order systems with application in structural dynamics. Let $\mathcal{V}_k \subset \mathbb{R}^n$ be the Krylov subspace with one expansion point $s_e$, such that
$$\mathcal{V}_k = \text{span}((s_e E - A)^{-1} B)$$ \hspace{1cm} (3.16)
An orthonormal basis $V_k$ of $\mathcal{V}_k$ can be constructed, such that $\mathcal{V}_k = \text{span}(V_k)$. The original Equation (3.12) can be projected into the subspace $\mathcal{V}_k$. The reduced system matches the response of the system around the expansion point $s_e$. If the expansion point $s_e$ is close to 0 and one iteration is included in the Krylov subspace, the reduced system matches the steady state response.
Let $\mathcal{V}_\mu \subset \mathbb{R}^n$ be the truncated modal subspace. The subspace $\mathcal{V}_\mu$ is the span of the first $\mu$ eigenvectors of Equation (3.12), defined as
$$\mathcal{V}_\mu = \text{span}\left(\begin{bmatrix} \phi_1 & \phi_2 & \ldots & \phi_\mu \end{bmatrix}\right)$$ \hspace{1cm} (3.17)
where $\phi_i$ is the eigenvector associated with the eigenvalue $\alpha_i$ such that $A\phi_i = \alpha_i E\phi_i$. The system matrices $A$ and $E$, defined in Equations (3.11) and (3.12), are symmetric, with $A$ negative semi-definite and $E$ positive definite, resulting in all real non-positive eigenvalues. Therefore, the eigenvectors form an orthonormal basis $V_\mu$ of $\mathcal{V}_\mu$, such that $\mathcal{V}_\mu = \text{span}(V_\mu)$. The original Equation (3.12) can be projected into the subspace $\mathcal{V}_\mu$. The reduced system approximates the dynamic thermal behavior in a certain frequency range. The KMS reduction projects Equation (3.12) by means of a projection matrix $V$. The basis $V$ spans the linear subspace $\mathcal{V} \subset \mathbb{R}^n$, i.e. $\mathcal{V} = \text{span}(V)$, such that
$$\mathcal{V} = \mathcal{V}_\mu + \mathcal{V}_k = \{\mathbf{x} \in \mathbb{R}^n \mid \exists \mathbf{v}_1 \in \mathcal{V}_\mu, \ \mathbf{v}_2 \in \mathcal{V}_k : \ \mathbf{x} = \mathbf{v}_1 + \mathbf{v}_2\}$$ \hspace{1cm} (3.18)
The subspace $\mathcal{V}$ captures the information about both the steady state response and the thermal transient behavior of the system. Therefore, this projection basis satisfies the requirements for an accurate approximation of the thermal behavior of mechatronic systems. Algorithm 1 summarizes the numerical implementation of the KMS reduction method. Appendix A provides a detailed description of the algorithmic implementation of the numerical methods required in Algorithm 1. This work considers one-sided projection, i.e. $W = V$. As explained by Antoulas [7], the main advantage of one-sided projection is that it ensures the preservation of stability after projection. One-sided projection is applicable if the inputs of the system are the same as the outputs, i.e. $B = C^T$.

**Algorithm 1** Krylov Modal Subspace Reduction
```
1: procedure KMS(A, E, B, ω_m, n_guess, n_max, s_e, m_e)
2:     A, E, B              ▷ System matrices
3:     ω_m                  ▷ Maximum considered eigenfrequency
4:     n_max, n_guess       ▷ Maximum number of modes and guessed number of modes below ω_m
5:     s_e, m_e             ▷ Expansion point and number of moments
6:     V_k = BLOCKARNOLDI(A, E, B, s_e, m_e)          ▷ See Algorithm 5
7:     Φ, ω = MODAL(A, E, ω_m, n_guess, n_max, s_e)   ▷ See Algorithm 6
8:     V = ORTH(V_k, Φ)                               ▷ See Algorithm 7
9:     return V
```

**Numerical example: First order random system**

The reduction method can be illustrated by means of a numerical example of a first order system with randomly allocated poles. A single input single output (SISO) system of dimension $n$ can be defined, such that $A = -\text{diag}(\omega_i)$, $E = I$, and $b^T = c = \begin{bmatrix} 1 & \ldots & 1 \end{bmatrix}$. The system is already in modal coordinates, facilitating the evaluation of the FRF $h(j\omega)$ as
\[ h(j\omega) = \sum_{i=1}^{n} \frac{1}{j\omega + \omega_i} \] (3.19)
The system can be reduced by means of KMS and the transfer function of the reduced system \( \tilde{h}(j\omega) \) can be evaluated.
For this system \( \tilde{h}(j\omega) \) has an analytical expression as
\[ \tilde{h}(j\omega) = \sum_{i=1}^{\mu} \frac{1}{j\omega + \omega_i} + \frac{\left( \sum_{i=\mu+1}^{n} \frac{1}{\omega_i + s_e} \right)^2}{j\omega \sum_{i=\mu+1}^{n} \frac{1}{(\omega_i + s_e)^2} + \sum_{i=\mu+1}^{n} \frac{\omega_i}{(\omega_i + s_e)^2}} \] (3.20)
where \( \mu \) is the number of eigenfrequencies below \( \omega_m \) and \( \nu = n - \mu \) is the number of remaining eigenfrequencies. For the numerical example, the following values are chosen:

- The dimension of the system is \( n = 100 \)
- The values of \( \omega_i \) are logarithmically uniformly distributed between \( 10^{-2} \) and \( 10^2 \) rad/s
- The maximum frequency considered for the modal part is \( \omega_m = 5 \) rad/s

For the selected numerical values, Figure 3.1 shows the transfer function of the original and the reduced system. The reduced and the original system match in the frequency range of interest, namely \( \omega < \omega_{max} \). The frequency range of interest, \( \omega_{max} \), is different from the maximum frequency considered, \( \omega_m \). In fact, \( \omega_{max} < \omega_m \). In order to study the error introduced by the reduction method, an appropriate error definition needs to be provided. The relative error between the original and the reduced system can be expressed as
\[ e(j\omega) = \frac{h(j\omega) - \tilde{h}(j\omega)}{h(j\omega)} \] (3.21)
For the numerical example, Figure 3.2 displays the error \( e(j\omega) \). As already observed in the comparison of the frequency responses in Figure 3.1, the relative error is small in the frequency region of interest. The error increases with \( \omega \) until it reaches a maximum value at high frequencies. This motivates the need for a theoretical bound of the reduction error, which is addressed in Section 3.3.

**Numerical example: thermal FE model**

In order to illustrate the KMS reduction approach, a thermal FE model is presented in this section. The case study is the table of the RMT machine tool, shown in Figure 1.1. The FE discretization of the part under consideration is depicted in Figure 3.3. The thermal model is meshed with tetrahedral elements, leading to an original system of 4157 DOF. The relatively small dimension of the full FE model allows the direct evaluation of the response of the original system in order to compare it to the response of the reduced system. The machine tool table under consideration is exposed to the following thermal loads:

- Convection to the environmental temperature
- Heat dissipated by the linear drive

The thermal response of the original system is compared to the response of the reduced system obtained by means of KMS reduction. The expansion point for the Krylov subspace reduction is chosen at $s_e = 10^{-8}$ rad/s. An expansion point at a low frequency provides a good matching of the steady state response. The number of thermal inputs to this system is 2, therefore the Krylov part of the reduction provides a basis $V_k$ of dimension 2. In addition to the steady state response, the transient part of the response is considered by including thermal modes of the system, creating the modal basis $V_\mu$. For the current case study, 97 modes are included, with all eigenfrequencies below $\omega_m = 0.044$ rad/s. Section 3.3 focuses on the selection of the number of modes considered in the KMS reduction. The combination of the Krylov and the modal part creates the projection basis $V$, leading to a reduced system of dimension 99.
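The projection of Algorithm 1 can also be summarized in a compact numerical sketch. The fragment below assumes small, dense system matrices, so that a direct solver and a dense generalized eigensolver stand in for the sparse Block Arnoldi and modal routines of Algorithms 5 and 6; the function name and interface are illustrative.

```python
# Dense sketch of the KMS reduction of Algorithm 1 (illustrative only; large FE
# models require sparse factorizations and an iterative eigensolver instead).
import numpy as np
from scipy.linalg import eigh, solve, qr

def kms_reduce(A, E, B, s_e, omega_m):
    # Krylov part, Equation (3.16): one moment at the expansion point s_e
    V_k = solve(s_e * E - A, B)
    # modal part, Equation (3.17): A phi = alpha E phi, keep |alpha| <= omega_m
    alpha, Phi = eigh(A, E)                  # real eigenvalues, all non-positive
    V_mu = Phi[:, np.abs(alpha) <= omega_m]
    # orthonormal basis of the combined subspace of Equation (3.18)
    V, _ = qr(np.hstack([V_k, V_mu]), mode='economic')
    # one-sided projection (W = V) of Equation (3.12)
    return V.T @ E @ V, V.T @ A @ V, V.T @ B, V
```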
The thermal modes of the system are included in the projection basis. Figure 3.4 illustrates the shapes and eigenfrequencies of the first 6 thermal modes of the system. Similarly to the previous numerical example, the relative error between the response of the original and the reduced system can be evaluated. The input of the frequency response is the heat flux at the linear drive and the temperature output is measured at the same location. Figure 3.5 shows the relative error between the original and the reduced system. The FE thermal model presents a similar behavior to the previous numerical example. At low frequencies, the relative error is negligibly small and it increases at higher frequencies.

Figure 3.2: Relative error between the original and reduced system described in Equation (3.19) and (3.20) respectively

Figure 3.3: FE mesh of the thermal model of the table of the RMT introduced in Figure 1.1

3.3 Error estimation

The reduced system needs to represent accurately the thermal response of the system in the frequency range of interest, i.e. for all $\omega \in [0, \omega_{max}]$. The upper bound of the frequency range of interest, $\omega_{max}$, determines how many eigenvectors, $\mu$, are to be included in the KMS reduction basis. Therefore, this work proposes an a priori error estimator for the KMS method that relates the maximum eigenfrequency, $\omega_m = |\alpha_m|$, of the KMS basis to the frequency range of interest. The error estimator presented in this section considers that the input and output matrices of Equation (3.12) and (3.13) are the same, i.e. $B = C^T_{therm}$, leading to one-sided projection. For the derivation of the error estimator of the KMS method, the Krylov subspace defined in Equation (3.16) has an expansion point $s_e \in \mathbb{R}$ close to zero and a single iteration. Before introducing the error estimator, some definitions and preliminary results are required. Firstly, a suitable error definition is needed. Different error measures are proposed in the literature, as summarized by Benner et al. [18]. Let $E(j\omega)$ be the absolute reduction error frequency response function (FRF) defined for each frequency $\omega$ as
$$E(j\omega) = H(j\omega) - \tilde{H}(j\omega)$$ \hspace{1cm} (3.22)
where $H(j\omega)$ and $\tilde{H}(j\omega)$ are the FRF of the original and the reduced system respectively. $E(j\omega)$ is a matrix of dimension $p$ (number of outputs) by $m$ (number of inputs). Let $e_{ij}(j\omega)$ be the relative error for the $i$th output and $j$th input combination,
$$e_{ij}(j\omega) = \frac{h_{ij}(j\omega) - \tilde{h}_{ij}(j\omega)}{h_{ij}(j\omega)}$$ \hspace{1cm} (3.23)
where $h_{ij}(j\omega)$ is the element in the $i$th row and $j$th column of $\boldsymbol{H}(j\omega)$, and $\tilde{h}_{ij}(j\omega)$ is the element in the $i$th row and $j$th column of $\tilde{\boldsymbol{H}}(j\omega)$. Secondly, some remarks about the subspaces associated with the KMS basis $\mathcal{V}$ are required. The subspace $\mathcal{V}_\nu \subset \mathbb{R}^n$ can be defined as the subspace of the remaining modes not included in $\mathcal{V}_\mu$, i.e.
$$\mathcal{V}_\nu = \text{span}\left(\begin{bmatrix} \phi_{\mu+1} & \phi_{\mu+2} & \ldots & \phi_n \end{bmatrix}\right)$$ \hspace{1cm} (3.24)
Due to the properties of the system matrix, the subspace $\mathcal{V}_\mu$ is the orthogonal complement of $\mathcal{V}_\nu$, such that $\mathbb{R}^n = \mathcal{V}_\mu \oplus \mathcal{V}_\nu$.
Let $\Phi$ be a matrix whose columns are the eigenvectors $\phi_i$ of Equation (3.12), normalized with respect to the capacity matrix such that $\Phi^T E \Phi = I$. Equation (3.12) and (3.13) can be expressed in modal coordinates $x = \Phi x_m$ as
$$I \dot{x}_m(t) = \Omega x_m(t) + \Phi^T B u(t) = \Omega x_m(t) + B_m u(t)$$ \hspace{1cm} (3.25)
$$y_{th}(t) = C_{therm} \Phi x_m(t) = C_m x_m(t)$$ \hspace{1cm} (3.26)
where $\Omega = \text{diag}(\alpha_1, \ldots, \alpha_n)$ is a diagonal matrix with the $n$ eigenvalues $\alpha_k$ of the system. The system can be expressed in block matrix form, such that
$$\begin{bmatrix} I_\mu & 0 \\ 0 & I_\nu \end{bmatrix} \begin{bmatrix} \dot{x}_\mu \\ \dot{x}_\nu \end{bmatrix} = \begin{bmatrix} \Omega_\mu & 0 \\ 0 & \Omega_\nu \end{bmatrix} \begin{bmatrix} x_\mu \\ x_\nu \end{bmatrix} + \begin{bmatrix} B_\mu \\ B_\nu \end{bmatrix} u(t)$$ \hspace{1cm} (3.27)
$$y_{th} = \begin{bmatrix} C_\mu & C_\nu \end{bmatrix} \begin{bmatrix} x_\mu \\ x_\nu \end{bmatrix}$$ \hspace{1cm} (3.28)
Let $\mathcal{V}_{\nu k} \subset \mathbb{R}^n$ be the Krylov subspace with one moment around the expansion point $s_e \in \mathbb{R}$ of the original system projected into the subspace $\mathcal{V}_\nu$, such that
$$\mathcal{V}_{\nu k} = \text{span}((s_e I_\nu - \Omega_\nu)^{-1} B_\nu)$$ \hspace{1cm} (3.29)
The definition of $\mathcal{V}_{\nu k}$ leads to a preliminary result expressed in Fact 1.

**Fact 1.** The KMS subspace $\mathcal{V}$ is the direct sum of $\mathcal{V}_\mu$ and $\mathcal{V}_{\nu k}$:
$$\mathcal{V} = \mathcal{V}_\mu \oplus \mathcal{V}_{\nu k}$$ \hspace{1cm} (3.30)

**Proof.** Equation (3.25) expresses the LTI system in modal coordinates. The Krylov subspace of the system expressed in modal coordinates is
$$\mathcal{V}_k = \text{span}((s_e I - \Omega)^{-1} B_m)$$ \hspace{1cm} (3.31)
As $s_e \mathbf{I} - \mathbf{\Omega}$ is a diagonal matrix, the Krylov subspace of the original system can be expressed as
$$\mathcal{V}_k = \mathcal{V}_{\mu k} + \mathcal{V}_{\nu k} = \text{span}((s_e \mathbf{I}_\mu - \mathbf{\Omega}_\mu)^{-1} \mathbf{B}_\mu) + \text{span}((s_e \mathbf{I}_\nu - \mathbf{\Omega}_\nu)^{-1} \mathbf{B}_\nu)$$ \hspace{1cm} (3.32)
The subspace $\mathcal{V}_{\mu k}$ is a subspace of $\mathcal{V}_\mu$, i.e. $\mathcal{V}_{\mu k} \subset \mathcal{V}_\mu$. Therefore, $\mathcal{V} = \mathcal{V}_k + \mathcal{V}_\mu = \mathcal{V}_{\mu k} + \mathcal{V}_{\nu k} + \mathcal{V}_\mu = \mathcal{V}_{\nu k} + \mathcal{V}_\mu$. In addition, $\mathcal{V}_{\nu k}$ is also a subspace of $\mathcal{V}_\nu$, i.e. $\mathcal{V}_{\nu k} \subset \mathcal{V}_\nu$. Since $\mathcal{V}_\nu \cap \mathcal{V}_\mu = \{0\}$, it follows that $\mathcal{V}_{\nu k} \cap \mathcal{V}_\mu = \{0\}$. This implies that the sum is direct, i.e. $\mathcal{V} = \mathcal{V}_{\nu k} \oplus \mathcal{V}_\mu$.

Fact 1 relates the subspaces $\mathcal{V}$, $\mathcal{V}_\mu$, and $\mathcal{V}_{\nu k}$, stating that any vector in $\mathcal{V}$ can be decomposed uniquely into two components, one in $\mathcal{V}_\mu$ and one in $\mathcal{V}_{\nu k}$. This property is useful to separate the error of the reduced system, as shown in Fact 2.
**Fact 2.** The error $e_{ij}(j\omega)$ of Equation (3.23) can be expressed as the product of two terms
$$e_{ij}(j\omega) = -e_{\mu_{ij}}(j\omega)e_{\nu k_{ij}}(j\omega)$$ \hspace{1cm} (3.33)
with $e_{\mu_{ij}}(j\omega)$ and $e_{\nu k_{ij}}(j\omega)$ defined as
$$e_{\mu_{ij}}(j\omega) = \frac{h_{ij}(j\omega) - \tilde{h}_{\mu_{ij}}(j\omega)}{h_{ij}(j\omega)}$$ \hspace{1cm} (3.34)
$$e_{\nu k_{ij}}(j\omega) = \frac{\tilde{h}_{\nu k_{ij}}(j\omega) - \tilde{h}_{\nu_{ij}}(j\omega)}{\tilde{h}_{\nu_{ij}}(j\omega)}$$ \hspace{1cm} (3.35)
where $\tilde{h}_{\mu_{ij}}(j\omega)$, $\tilde{h}_{\nu_{ij}}(j\omega)$, and $\tilde{h}_{\nu k_{ij}}(j\omega)$ are the elements in the $i$th row and $j$th column of the FRF of the system projected into $\mathcal{V}_\mu$, $\mathcal{V}_\nu$, and $\mathcal{V}_{\nu k}$ respectively.

**Proof.** Fact 1 enables expressing the FRF of the reduced system as $\tilde{h}_{ij}(j\omega) = \tilde{h}_{\mu_{ij}}(j\omega) + \tilde{h}_{\nu k_{ij}}(j\omega)$. Additionally, the transfer function of the original system can be expressed as $h_{ij}(j\omega) = \tilde{h}_{\mu_{ij}}(j\omega) + \tilde{h}_{\nu_{ij}}(j\omega)$. Substituting in the error definition,
$$e_{ij}(j\omega) = \frac{\tilde{h}_{\mu_{ij}}(j\omega) + \tilde{h}_{\nu_{ij}}(j\omega) - \tilde{h}_{\mu_{ij}}(j\omega) - \tilde{h}_{\nu k_{ij}}(j\omega)}{h_{ij}(j\omega)}$$ \hspace{1cm} (3.36)
Multiplying the previous expression by $\frac{\tilde{h}_{\nu_{ij}}(j\omega)}{\tilde{h}_{\nu_{ij}}(j\omega)}$, the following is obtained
$$e_{ij}(j\omega) = \frac{\tilde{h}_{\nu_{ij}}(j\omega) - \tilde{h}_{\nu k_{ij}}(j\omega)}{\tilde{h}_{\nu_{ij}}(j\omega)} \cdot \frac{\tilde{h}_{\nu_{ij}}(j\omega)}{h_{ij}(j\omega)}$$ \hspace{1cm} (3.37)
Substituting $\tilde{h}_{\nu_{ij}}(j\omega)$ in the second factor by $h_{ij}(j\omega) - \tilde{h}_{\mu_{ij}}(j\omega)$ and reorganizing the terms, the error separation of Equation (3.33) is obtained as
$$e_{ij}(j\omega) = -\frac{\tilde{h}_{\nu k_{ij}}(j\omega) - \tilde{h}_{\nu_{ij}}(j\omega)}{\tilde{h}_{\nu_{ij}}(j\omega)} \cdot \frac{h_{ij}(j\omega) - \tilde{h}_{\mu_{ij}}(j\omega)}{h_{ij}(j\omega)}$$ \hspace{1cm} (3.38)

The result of Fact 2 enables the separation of the error into two terms. The goal is to find a bound for the error $e_{ij}(j\omega)$. The first step towards an error bound of the KMS method is to determine an upper bound for the term $e_{\nu k_{ij}}(j\omega)$. The following theorem shows a theoretical error bound for all frequencies of the term $e_{\nu k_{ii}}(j\omega)$, provided that the input and the output are the same, namely $i = j$.

**Theorem 1.** The magnitude of the error $e_{\nu k_{ii}}(j\omega)$ defined in Equation (3.33) is bounded by $e_{est}(j\omega)$ defined as
$$|e_{est}(j\omega)| = \frac{\omega^2 + s_e^2}{\omega^2 + \omega_m^2} > |e_{\nu k_{ii}}(j\omega)|$$ \hspace{1cm} (3.39)
for all $\omega \in [0, \infty)$ such that $\omega \in \mathbb{R}$, given that $\omega_m < \omega_{\mu+1}$ and that the input and the output are the same, i.e. $i = j$.

**Proof.** The FRF of the system projected into $\mathcal{V}_\nu$ is
$$\tilde{h}_{\nu_{ii}}(j\omega) = \sum_{k=1}^\nu \frac{b_k^2}{j\omega + \omega_k}$$ \hspace{1cm} (3.40)
where $\nu = n - \mu$ is the dimension of the subspace $\mathcal{V}_\nu$, $\omega_k$ are the absolute values of the eigenvalues associated with $\mathcal{V}_\nu$, and $b_k$ corresponds to the $k$th element of the $i$th column of $B_\nu$.
The subspace $\mathcal{V}_{\nu k}$ can be defined according to Equation (3.29) as $\text{range}(v)$ where
$$v^T = \begin{bmatrix} \frac{b_1}{\omega_1 + s_e} & \frac{b_2}{\omega_2 + s_e} & \cdots & \frac{b_k}{\omega_k + s_e} & \cdots & \frac{b_\nu}{\omega_\nu + s_e} \end{bmatrix}$$
Projecting the system into $\mathcal{V}_{\nu k}$, the following reduced system is obtained
$$\tilde{e}\dot{x} + \tilde{a}x = \tilde{b}u(t)$$ \hspace{1cm} (3.41)
where $\tilde{a}$, $\tilde{e}$, and $\tilde{b}$ take the following values
$$\tilde{a} = \sum_{k=1}^\nu \frac{b_k^2}{(\omega_k + s_e)^2} \omega_k$$ \hspace{1cm} (3.42)
$$\tilde{e} = \sum_{k=1}^\nu \frac{b_k^2}{(\omega_k + s_e)^2}, \qquad \tilde{b} = \sum_{k=1}^\nu \frac{b_k^2}{\omega_k + s_e} = \tilde{c}$$ \hspace{1cm} (3.43)
The transfer function of the reduced system $\tilde{h}_{\nu k_{ii}}$ is
$$\tilde{h}_{\nu k_{ii}}(j\omega) = \frac{\tilde{b}^2}{j\omega \tilde{e} + \tilde{a}} = \frac{\left(\sum_{k=1}^\nu \frac{b_k^2}{\omega_k + s_e}\right)^2}{j\omega \sum_{k=1}^\nu \frac{b_k^2}{(\omega_k + s_e)^2} + \sum_{k=1}^\nu \frac{b_k^2}{(\omega_k + s_e)^2} \omega_k}$$ \hspace{1cm} (3.44)
Firstly, some properties of the error $e_{\nu k_{ii}}(j\omega)$ need to be discussed. According to Equation (3.35), the poles of the transfer function of the actual error, $e_{\nu k_{ii}}(s)$ with $s \in \mathbb{C}$, are the poles of $\tilde{h}_{\nu k_{ii}}(s)$, the poles of $\tilde{h}_{\nu_{ii}}(s)$, and the zeros of $\tilde{h}_{\nu_{ii}}(s)$. Given that the system matrices are positive semi-definite, all the poles of $\tilde{h}_{\nu_{ii}}(s)$ are real. Additionally, the pole of $\tilde{h}_{\nu k_{ii}}(s)$ is real, as can be seen from Equation (3.44). Furthermore, the zeros of the transfer function $\tilde{h}_{\nu_{ii}}(s)$ are real numbers. This fact can be shown by contradiction. Let $s = \alpha \pm j\beta$ be zeros of the transfer function $\tilde{h}_{\nu_{ii}}(s)$, such that $\beta > 0$. The zero of the transfer function needs to satisfy
$$\tilde{h}_{\nu_{ii}}(\alpha + j\beta) = \sum_{k=1}^{\nu} \frac{b_k^2}{\alpha + j\beta + \omega_k} = \sum_{k=1}^{\nu} \frac{b_k^2}{(\alpha + \omega_k)^2 + \beta^2} (\alpha + \omega_k - j\beta) = 0$$ \hspace{1cm} (3.45)
Given that $\beta > 0$ and $\frac{b_k^2}{(\alpha + \omega_k)^2 + \beta^2} \geq 0$, the transfer function can only be zero for $\beta = 0$, reaching a contradiction. Therefore, the zeros of the transfer function $\tilde{h}_{\nu_{ii}}(s)$ are real. Thus, the poles of $e_{\nu k_{ii}}(j\omega)$ are all real. This leads to a smooth FRF, without any resonance frequency. This property is relevant for the derivation of the error bound.

Figure 3.6: Magnitude of the error of the estimator $e_{est}(j\omega)$ of Equation (3.39) compared to theoretically possible magnitudes of the error $e_{\nu k_{ii}}(j\omega)$ defined in Equation (3.35)

In order to prove that the proposed estimator of Equation (3.39) bounds the error for all frequencies, several counterexamples are considered, which are illustrated in Figure 3.6. Figure 3.6a shows the case where the magnitude of the error, $e_{\nu k_{ii}}(j\omega)$, is higher than the magnitude of the estimator, $e_{est}(j\omega)$, at high frequencies. Figure 3.6b depicts the case where the poles of the error are at lower frequencies than the poles of the estimator. The third counterexample in Figure 3.6c illustrates the case where the slope of the error is lower than the slope of the estimator. The slope of the error is related to the number of zeros at low frequency. The error estimator has two zeros at the expansion point $s_e$.
Thus, it needs to be shown that the actual error has at least two zeros at the expansion point, which is placed close to zero. Figure 3.6d shows graphically that if the other three counterexamples are excluded, the error estimator is an upper bound of the FRF of the actual error for the whole frequency range. Therefore, it needs to be proven that the counterexamples of Figure 3.6 are not possible, following the next steps:

1. \( \lim_{\omega \to \infty} |e_{est}(j\omega)| > \lim_{\omega \to \infty} |e_{\nu k_{ii}}(j\omega)| \)
2. All the poles of the actual error \( e_{\nu k_{ii}}(j\omega) \) are at a higher frequency than the ones of the error estimator \( e_{est}(j\omega) \)
3. \( e_{\nu k_{ii}}(j\omega) \) has at least two zeros at \( \omega \) close to zero

The first condition refers to the magnitude of the FRF at infinity, which is associated with the Markov parameters. From the definition of the error estimator, \( \lim_{\omega \to \infty} |e_{est}(j\omega)| = 1 \). The magnitude of \( e_{\nu k_{ii}}(j\omega) \) at infinity can be expressed as
\[ \lim_{\omega \to \infty} |e_{\nu k_{ii}}(j\omega)| = \lim_{\omega \to \infty} \left| \frac{\tilde{h}_{\nu k_{ii}}(j\omega)}{\tilde{h}_{\nu_{ii}}(j\omega)} - 1 \right| = \left| \lim_{\omega \to \infty} \frac{\tilde{h}_{\nu k_{ii}}(j\omega)}{\tilde{h}_{\nu_{ii}}(j\omega)} - 1 \right| < 1 \tag{3.46} \]
In order to evaluate this limit, the ratio of the transfer functions at high frequencies needs to be considered:
\[ \lim_{\omega \to \infty} \frac{\tilde{h}_{\nu k_{ii}}(j\omega)}{\tilde{h}_{\nu_{ii}}(j\omega)} = \frac{\frac{1}{j\omega} \frac{\tilde{b}^2}{\tilde{e}}}{\frac{1}{j\omega} \sum_{k=1}^{\nu} b_k^2} = \frac{\tilde{b}^2}{\tilde{e}} \frac{1}{\sum_{k=1}^{\nu} b_k^2} \tag{3.47} \]
where \( \frac{\tilde{b}^2}{\tilde{e}} \) can be expressed according to Equation (3.43) as
\[ \frac{\tilde{b}^2}{\tilde{e}} = \frac{\left( \sum_{k=1}^{\nu} \frac{b_k^2}{\omega_k + s_e} \right)^2}{\sum_{k=1}^{\nu} \frac{b_k^2}{(\omega_k + s_e)^2}} \tag{3.48} \]
Equation (3.47) shows that the limit \( \lim_{\omega \to \infty} \frac{\tilde{h}_{\nu k_{ii}}(j\omega)}{\tilde{h}_{\nu_{ii}}(j\omega)} \) is a positive real number. Therefore, the condition on the magnitude of the error in Equation (3.46) can be expressed as
\[ 0 < \lim_{\omega \to \infty} \frac{\tilde{h}_{\nu k_{ii}}(j\omega)}{\tilde{h}_{\nu_{ii}}(j\omega)} < 2 \tag{3.49} \]
In fact, an upper bound for the term \( \frac{\tilde{b}^2}{\tilde{e}} \) follows from the Cauchy-Schwarz inequality:
\[ \tilde{b}^2 = \left( \sum_{k=1}^{\nu} b_k \frac{b_k}{\omega_k + s_e} \right)^2 \leq \left( \sum_{k=1}^{\nu} b_k^2 \right) \left( \sum_{k=1}^{\nu} \frac{b_k^2}{(\omega_k + s_e)^2} \right) = \tilde{e} \sum_{k=1}^{\nu} b_k^2 \tag{3.50} \]
so that \( \frac{\tilde{b}^2}{\tilde{e}} \leq \sum_{k=1}^{\nu} b_k^2 \). The upper bound for the term \( \frac{\tilde{b}^2}{\tilde{e}} \) implies that the condition of Equation (3.49) is satisfied. Therefore, the magnitude of the actual error at infinity is bounded by 1, as stated in Equation (3.46). This completes the first step of the proof, showing that the magnitude of the error estimator is an upper bound of the magnitude of the actual error, \( e_{\nu k_{ii}}(j\omega) \), at infinity. The second step in this proof is ensuring that the poles of the actual error are at a higher frequency than the ones of the error estimator. Let \( e_{\nu k_{ii}}(s) \) be the transfer function of the actual error, where \( s \in \mathbb{C} \).
The error estimator, \( e_{est}(s) \), defined in Equation (3.39) has two poles at \( \omega_m \). The poles of the error \( e_{\nu k_{ii}}(s) \) are the poles of \( \tilde{h}_{\nu k_{ii}}(s) \), the poles of \( \tilde{h}_{\nu_{ii}}(s) \), and the zeros of \( \tilde{h}_{\nu_{ii}}(s) \); it needs to be shown that all of these lie at a higher frequency than \( \omega_m \). The poles of $\tilde{h}_{\nu_{ii}}(s)$ are $\omega_1 \ldots \omega_\nu$. By definition, these poles are at a higher frequency than $\omega_m$. The zeros of $\tilde{h}_{\nu_{ii}}(s)$ are real numbers, as discussed in the beginning of this proof. Furthermore, it needs to be shown that these zeros are at higher frequencies than $\omega_m$, which is proven by contradiction. Assume that $\omega_0$ is a zero of the transfer function, i.e. $\tilde{h}_{\nu_{ii}}(s = -\omega_0) = 0$, such that $0 < \omega_0 < \omega_1$. Substituting this into Equation (3.40), the transfer function is
$$\tilde{h}_{\nu_{ii}}(s = -\omega_0) = \sum_{k=1}^{\nu} \frac{b_k^2}{\omega_k - \omega_0}$$ \hspace{1cm} (3.51)
However, given that $\omega_0 < \omega_1 \leq \omega_k$ for all $k$, all the terms of the summation are strictly positive. Thus, $\tilde{h}_{\nu_{ii}}(s = -\omega_0) \neq 0$, reaching a contradiction. Therefore, the zeros of the transfer function $\tilde{h}_{\nu_{ii}}(s)$ are at a higher frequency than $\omega_m$. The pole of $\tilde{h}_{\nu k_{ii}}(s)$, $\omega_{\nu k}$, can be written as
$$\omega_{\nu k} = \frac{\tilde{a}}{\tilde{e}} = \frac{\sum_{k=1}^{\nu} \frac{b_k^2}{(\omega_k + s_e)^2} \omega_k}{\sum_{k=1}^{\nu} \frac{b_k^2}{(\omega_k + s_e)^2}}$$ \hspace{1cm} (3.52)
which can be understood as a weighted average of the truncated eigenfrequencies, where the weight factors are $\frac{b_k^2}{(\omega_k + s_e)^2}$. This shows that the pole satisfies $\omega_1 \leq \omega_{\nu k} \leq \omega_\nu$. The location of $\omega_{\nu k}$ depends on the static gains $b_k$: the more controllable and observable a mode $\omega_k$ is, the closer $\omega_{\nu k}$ is to $\omega_k$. Thus, the pole of the transfer function of the reduced system $\tilde{h}_{\nu k_{ii}}(s)$ is at a higher frequency than $\omega_m$. This concludes the second step, ensuring that the poles of the actual error $e_{\nu k_{ii}}(s)$ are at a higher frequency than the ones of the error estimator. The third step needs to ensure that the transfer function of the actual error, $e_{\nu k_{ii}}(s)$, has at least two zeros close to zero. The zeros of the actual error are related to the zeros of $\tilde{h}_{\nu k_{ii}}(s) - \tilde{h}_{\nu_{ii}}(s)$. Due to the properties of moment matching Krylov reduction, the reduced-order system matches at least two moments of the transfer function at the expansion point, $s_e$, as explained by Antoulas [7]. By choosing the expansion point sufficiently low, $\tilde{h}_{\nu k_{ii}}(s) - \tilde{h}_{\nu_{ii}}(s)$ has two zeros close to $s = 0$. Therefore, the transfer function $e_{\nu k_{ii}}(s)$ also has two zeros close to zero. This proof shows that the error $e_{\nu k_{ii}}(j\omega)$ has a bounded magnitude at infinity, that the poles of the error are at a higher frequency than the poles of the estimator, and that the error has at least two zeros close to zero frequency. The combination of these three conditions proves that the magnitude of the error $e_{\nu k_{ii}}(j\omega)$ is bounded by the estimator $|e_{est}(j\omega)|$. The result of Theorem 1 provides an error bound for the term $e_{\nu k_{ii}}(j\omega)$ of Equation (3.33).
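The bound of Theorem 1 can be spot-checked numerically. The sketch below builds a random truncated system, evaluates $\tilde{h}_{\nu_{ii}}$ and $\tilde{h}_{\nu k_{ii}}$ from Equations (3.40) and (3.44), and compares the error magnitude of Equation (3.35) with the estimator of Equation (3.39). All numerical values are illustrative.

```python
# Numerical spot-check of Theorem 1 (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
nu, s_e = 50, 1e-8
w_k = np.sort(rng.uniform(1.0, 100.0, nu))  # truncated eigenfrequencies omega_k
b_k = rng.normal(size=nu)                   # modal gains b_k
w_m = 0.9 * w_k[0]                          # omega_m below the first truncated pole

w = np.logspace(-3, 4, 400)
h_nu = (b_k**2 / (1j * w[:, None] + w_k)).sum(axis=1)     # Equation (3.40)
e_t = (b_k**2 / (w_k + s_e)**2).sum()                     # e tilde, Equation (3.43)
a_t = (b_k**2 * w_k / (w_k + s_e)**2).sum()               # a tilde, Equation (3.42)
b_t = (b_k**2 / (w_k + s_e)).sum()                        # b tilde, Equation (3.43)
h_nuk = b_t**2 / (1j * w * e_t + a_t)                     # Equation (3.44)

err = np.abs((h_nuk - h_nu) / h_nu)                       # Equation (3.35)
bound = (w**2 + s_e**2) / (w**2 + w_m**2)                 # Equation (3.39)
print(np.max(err / bound))  # stays below 1, in agreement with Theorem 1
```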
In order to estimate the reduction error $e_{ii}(j\omega)$, an upper bound of the term $e_{\mu_{ii}}(j\omega)$ of Equation (3.33) is also required.

**Fact 3.** The magnitude of the error $e_{\mu_{ii}}(j\omega)$ defined in Equation (3.33) is bounded by 1, i.e.
$$|e_{\mu_{ii}}(j\omega)| \leq 1$$ \hspace{1cm} (3.53)
for all $\omega \in [0, \infty)$ such that $\omega \in \mathbb{R}$.

**Proof.** The FRF of the error $e_{\mu_{ii}}(j\omega)$ defined in Equation (3.34) of Fact 2 can be expressed as
$$e_{\mu_{ii}}(j\omega) = \frac{\tilde{h}_{\nu_{ii}}(j\omega)}{h_{ii}(j\omega)}$$ \hspace{1cm} (3.54)
considering that $\tilde{h}_{\nu_{ii}}(j\omega) = h_{ii}(j\omega) - \tilde{h}_{\mu_{ii}}(j\omega)$, as shown in Fact 1. In order to prove this fact, two extreme conditions are considered. Firstly, the case where all modes in $\mathcal{V}_\nu$ are neither observable nor controllable is analyzed. In this case, the magnitude of $\tilde{h}_{\nu_{ii}}$ is zero. Thus, the magnitude of the error $|e_{\mu_{ii}}(j\omega)|$ is zero for all frequencies, as all the relevant modes describing the response of the system are contained in $\mathcal{V}_\mu$. Secondly, the case where all modes in $\mathcal{V}_\mu$ are neither observable nor controllable is considered. In this case, the original system and the system projected into $\mathcal{V}_\nu$ are the same, i.e. $\tilde{h}_{\nu_{ii}}(j\omega) = h_{ii}(j\omega)$. Thus, the magnitude of the error $|e_{\mu_{ii}}(j\omega)|$ is 1, as none of the relevant modes describing the response of the system are included in $\mathcal{V}_\mu$. These two cases represent the two extreme conditions. For any other case, some of the modes in $\mathcal{V}_\mu$ and $\mathcal{V}_\nu$ are observable and controllable, leading to a magnitude of the error between 0 and 1. Therefore, $|e_{\mu_{ii}}(j\omega)| \leq 1$ for all frequencies.

The results of Theorem 1 and Fact 3 state that an error bound of $e_{ii}(j\omega)$ for the KMS reduction is
$$|e_{ii}(j\omega)| = |e_{\mu_{ii}}(j\omega)||e_{\nu k_{ii}}(j\omega)| \leq |e_{\nu k_{ii}}(j\omega)| < \frac{\omega^2 + s_e^2}{\omega^2 + \omega_m^2}$$ \hspace{1cm} (3.55)
Theorem 1 shows a theoretical error bound for the KMS reduction, provided that the same inputs and outputs are considered. This error bound estimates the error a priori and enables choosing how many modes need to be included in the KMS projection basis. Provided a frequency range of interest $[0, \omega_{max}]$, the error estimator determines that all modes below $\omega_m$ need to be included in the reduction basis so that the magnitude of the error of the reduced-order system, $|e_{ii}(j\omega)|$, does not exceed a certain value $\epsilon$.

**Numerical example: First order random system**

In order to illustrate the error estimator, the numerical example of Equation (3.19) can be revisited. In order to visualize the error bound, 100 random systems are generated. The numerical values are similar to those in Section 3.2:

- The dimension of the system is $n = 100$
- The values of $\omega_i$ are logarithmically uniformly distributed between $10^{-2}$ and $10^2$ rad/s
- The maximum frequency considered for the modal part is $\omega_m = 5$ rad/s

Figure 3.7 shows the error between the original and the reduced system as well as the error estimator. It can be observed that the error estimator is an upper bound of the error for the whole frequency range.

**Numerical example: thermal FE model**

Section 3.2 presents a thermal FE model of a table of a machine tool, illustrated in Figure 3.3. The error estimator proposed in this section can be used to select the reduction parameters.
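Since the estimator of Equation (3.39) increases monotonically with $\omega$, requiring $|e_{est}(j\omega_{max})| \leq \epsilon$ and solving for $\omega_m$ yields the smallest admissible modal cutoff. A short sketch (the function name is illustrative):

```python
# Choosing the modal cutoff omega_m from the a priori bound of Equation (3.39).
import numpy as np

def modal_cutoff(w_max, eps, s_e=1e-8):
    """Smallest omega_m such that the bound stays below eps on [0, w_max]."""
    return np.sqrt((w_max**2 + s_e**2) / eps - w_max**2)

print(modal_cutoff(0.01, 0.05))  # approx. 0.044 rad/s for the FE model below
```

For the parameters used below, $\epsilon = 0.05$ and $\omega_{max} = 0.01$ rad/s, this evaluates to approximately 0.044 rad/s, which is the cutoff reported for the thermal FE model.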
Given a certain tolerance $\epsilon$ and frequency range of interest, i.e. $\omega \in [0, \omega_{max})$, the frequency $\omega_m$ can be calculated as in the sketch above. The frequency $\omega_m$ determines how many modes are included in the basis $\mathcal{V}_\mu$. For the FE thermal model under consideration, the following parameters are chosen:

- $\epsilon = 0.05$
- $\omega_{max} = 0.01$ rad/s

Considering the proposed error estimator, for this FE model $\omega_m = 0.044$ rad/s. This implies that 97 modes are included in the projection basis. The relative error between the original and the reduced system can be evaluated for the different input and output combinations, as presented in Section 3.2. Figure 3.8 compares the error estimator with the error for all input and output combinations. It shows that the error estimator is an upper bound of the error for all frequencies, with the error remaining below 0.5. Therefore, the error estimator ensures that the true error remains below $\epsilon$ for all $\omega \in [0, \omega_{max})$. In fact, the error in the frequency range of interest is smaller than 0.001.

Figure 3.7: Relative error between the reduced and original system for 100 random systems

Figure 3.8: Relative error between the reduced and original system of the thermal model of the machine tool table of Figure 3.3 for different input and output combinations

3.4 Efficient coupling of the thermal and mechanical model

The heat loads as well as the convective boundary conditions lead to an inhomogeneous temperature distribution. The temperature differences result in thermal stress in the structure, leading to thermally induced deformation. This is called thermo-mechanical coupling and is the focus of this section. In thermo-mechanical models, the temperature field and the associated mechanical displacements are weakly coupled. The weak coupling implies that the temperature field affects the structural deformation due to non-zero thermal expansion coefficients. However, the mechanical work resulting from the deformation of the structure does not alter the temperature field. This assumption of weak coupling holds for thermo-mechanical models of mechatronic systems, where only small deformations occur. For the sake of completeness, the PDE describing the thermo-mechanical system is presented. In continuum mechanics, the force balance equation is expressed as
\[- \text{div}(\sigma) = f \] (3.56)
where $\sigma$ is the stress tensor at a point $z$ and $f$ is the external force vector. Due to the small values of the deformations, a linear elastic model is considered. The stress tensor can be decomposed into an elastic part $\sigma_e$ and a thermal part $\sigma_{th}$. Considering the constitutive equation, they can be expressed as
\[\sigma_e = \frac{E}{1 + \nu} \epsilon + \frac{E\nu}{(1 + \nu)(1 - 2\nu)} \text{tr}(\epsilon) I \] (3.57)
\[\sigma_{th} = -\frac{E}{(1 - 2\nu)} \alpha (T - T_{ref}) I \] (3.58)
where $\epsilon$ is the strain tensor, $E$ is Young's modulus, $\nu$ is Poisson's ratio, $I$ is the identity tensor, $\alpha$ is the thermal expansion coefficient, and $T_{ref}$ is the reference temperature. Under the assumption of small deformations, the strain tensor can be expressed in its linearized form as the gradient of the displacements $u_s$ as
\[\epsilon = \frac{1}{2} (\nabla u_s + \nabla u_s^T) \] (3.59)
Similarly to the thermal system, there is no general analytical solution for the mechanical PDE. FEM is a well established numerical method to find an approximate solution to Equation (3.56).
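For reference, the constitutive relations of Equations (3.57) and (3.58) can be evaluated directly for a given strain state and temperature rise. The material values in the following sketch are generic steel-like numbers, not parameters of the models studied later:

```python
# Sketch of the linear thermo-elastic constitutive law, Equations (3.57)-(3.58).
import numpy as np

E_mod, nu, alpha = 210e9, 0.3, 1.2e-5  # Young's modulus, Poisson's ratio, CTE
eps = np.diag([1e-5, 0.0, 0.0])        # small strain tensor, Equation (3.59)
dT = 2.0                               # temperature rise above T_ref [K]

I3 = np.eye(3)
sigma_e = (E_mod / (1 + nu) * eps
           + E_mod * nu / ((1 + nu) * (1 - 2 * nu)) * np.trace(eps) * I3)
sigma_th = -E_mod / (1 - 2 * nu) * alpha * dT * I3
sigma = sigma_e + sigma_th             # total stress entering Equation (3.56)
```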
The displacement at a point $z$ of the element can be evaluated as
\[u_s(z) = n_e^T(z)v^e \] (3.60)
where $n_e(z)$ is the vector with the values of the shape functions of the element $e$ at $z$ and $v^e$ are the displacements of the nodes of the element. Applying the principle of virtual work, the weak form of the PDE can be obtained and the system of equations for the element $e$ can be expressed as
\[K^e v^e = f_{th}^e + f_{ext}^e \] (3.61)
where \( K^e \) is the stiffness matrix, \( f_{th}^e \) is the thermal force, and \( f_{ext}^e \) the external mechanical force applied to the element. The stiffness matrix of the element is
\[ K^e = \int_{\Omega^e} B^T D B \, dz \] (3.62)
where \( D \) is the elasticity matrix and \( B \) the spatial derivative of the shape function. The elasticity matrix \( D \) depends on Young's modulus \( E \) and Poisson's ratio \( \nu \). The thermal forces can be expressed as
\[ f_{th}^e = \int_{\Omega^e} B^T D \alpha (T(z,t) - T_{ref}(z)) \, dz \] (3.63)
where \( \alpha \) is the expansion coefficient vector defined as \( \alpha = \alpha \begin{bmatrix} 1 & 1 & 1 & 0 & 0 & 0 \end{bmatrix}^T \). Considering that the temperature distribution at a point \( z \) can be expressed according to Equation (3.5), the thermal forces can be rewritten as
\[ f_{th}^e = K_{th}^e (\theta^e(t) - \theta_{ref}^e) \] (3.64)
where \( K_{th}^e \) is the thermal coupling matrix of the element \( e \), defined as
\[ K_{th}^e = \int_{\Omega^e} B^T D \alpha n^T \, dz \] (3.65)
Creating the thermal coupling matrix allows calculating the forces \( f_{th}^e \) for any temperature distribution at any time step. Commercial FE software packages instead calculate \( f_{th}^e \) for every different temperature distribution and assemble the force for the whole FE mesh afterwards. After assembling the matrices for all the elements of the FE mesh, the structural displacements can be obtained by solving the following system of equations
\[ Kv = K_{th} \theta(t) + f_{ext} \] (3.66)
Similarly to the thermal system, the stiffness matrix \( K \) is symmetric and positive semi-definite (see the definition in Equation (3.15)). After constraining the rigid body DOF, there is a unique solution to the linear system of equations of Equation (3.66). Section 3.1 presents the state space representation of the thermal system. The output of the system in Equation (3.13) can now be extended to include the displacements at certain points of the structure, creating a new, mechanical output
\[ y_{mech}(t) = C_{mech} x(t) = C_i K^{-1} K_{th} x(t) \] (3.67)
where \( C_{mech} \) is the output matrix and \( C_i \) selects the displacements at a structural point \( i \). The structural points of interest are typically the TCP and the workpiece. The transfer function between the thermal inputs \( u(s) \) and the mechanical outputs \( y_{mech}(s) \) in the frequency domain can be defined analogously to Equation (3.14) as
\[ y_{mech}(s) = C_{mech} (Es - A)^{-1} Bu(s) \] (3.68)
Equation (3.67) extends the outputs of the system to include the mechanical response, creating a thermo-mechanical system. The reduced models need to reproduce the thermo-mechanical behavior. Thus, the reduction techniques developed in Section 3.2 are to be extended to efficiently provide the displacement of the structure. There are two options for considering the mechanical outputs, namely increasing the output space or creating a dedicated reduced mechanical model.
The first option refers to considering a new output matrix
\[ y(t) = \begin{bmatrix} C_{mech} \\ C_{therm} \end{bmatrix} x(t) \] (3.69)
where \(C_{mech}\) is the mechanical output matrix from Equation (3.67) and \(C_{therm}\) is the output matrix from Equation (3.13). The projection bases \(V\) and \(W\) are calculated considering MM Krylov subspace methods, according to Equation (2.15) and (2.16) respectively. This method has several disadvantages from a practical point of view. Firstly, it does not allow including other mechanical inputs to the system (e.g. mechanical preloads, gravity). The thermal inputs are the only inputs determining the response of the system, and thus other quasi-static mechanical loads of interest cannot be included. Secondly, the projection bases \(V\) and \(W\) span different subspaces. For this system output, two-sided projection is required. The main disadvantage of two-sided projection is that the preservation of stability is not guaranteed (see Antoulas [7] for more details on one- vs. two-sided projection). The third disadvantage of this method comes from the workflow in model development. It is common practice to create and validate a mechanical model of the structure and then extend it in order to consider thermo-mechanical effects. Therefore, having a dedicated mechanical model seems reasonable from a practical point of view. In order to formalize the concept of a dedicated model, a new state \(x_{mech}\) is defined, which contains the displacements at every node of the FE discretization. The quasi-static part of the mechanical response is considered, leading to the following state space representation
\[ A_{mech} x_{mech} = B_{mech} u_{mech} = \begin{bmatrix} K_{th} & F_{ext} \end{bmatrix} \begin{bmatrix} x \\ u_{ext} \end{bmatrix} \] (3.70)
where \(A_{mech} = K\) is the mechanical system matrix, \(B_{mech}\) the mechanical input matrix, and \(u_{mech}\) the mechanical inputs. The external forces from Equation (3.66) can be expressed as a matrix multiplication \(f_{ext} = F_{ext} u_{ext}\), while the thermal body forces are \(f_{th} = K_{th} x\). The reduction techniques used to create dedicated mechanical models need to provide an efficient way to couple the temperature field to the mechanical response. The size of the reduced system depends on the number of inputs, as explained in Section 3.2. The number of inputs of Equation (3.70) is equal to the number of possible independent temperature distributions plus the number of external loads. In principle, the number of independent temperature distributions is \(n\), i.e. the dimension of the original thermal system. However, the reduced thermal model determines the temperature distribution. This implies that the possible temperature distributions are limited to linear combinations of the vectors of the reduction basis \(V\). Therefore, all the possible linearly independent thermal body forces \(F_{th}\) are
\[ F_{th} = K_{th} V \] (3.71)
This enables expressing the state space representation in terms of the reduced temperature state \(\tilde{x}\) instead of the full state as
\[ A_{mech} x_{mech} = \begin{bmatrix} K_{th} V & F_{ext} \end{bmatrix} \begin{bmatrix} \tilde{x} \\ u_{ext} \end{bmatrix} \] (3.72)
The number of independent temperature inputs is considerably smaller than the number of independent temperature distributions of the original system of dimension \( n \).
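In code, the construction of the dedicated mechanical model amounts to stacking the thermal force directions of Equation (3.71) with the external load directions and solving one static system per input column, which anticipates Equation (3.73) and Algorithm 2 below. A minimal sketch with a sparse factorization (names are illustrative; the modified Gram-Schmidt step of Algorithm 9 is replaced by a plain QR):

```python
# Sketch of the dedicated mechanical reduction basis (cf. Equation (3.72)).
# Assumes a sparse stiffness matrix K, the coupling matrix K_th of Equation
# (3.65), the thermal projection basis V, and external load directions F_ext.
import numpy as np
from scipy.sparse.linalg import splu

def mech_basis(K, K_th, V, F_ext):
    B_mech = np.hstack([K_th @ V, F_ext])  # reduced temperatures + external loads
    lu = splu(K.tocsc())                   # factorize the stiffness matrix once
    V_mech = lu.solve(B_mech)              # one static solve per input column
    Q, _ = np.linalg.qr(V_mech)            # orthonormalize the resulting basis
    return Q
```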
The reduction basis \( V_{mech} \) for the mechanical system can be calculated as
\[ \text{span}(V_{mech}) = K_r \left\{ (s_e I - A_{mech})^{-1}, (s_e I - A_{mech})^{-1} \begin{bmatrix} K_{th} V & F_{ext} \end{bmatrix} \right\} \] (3.73)
where the expansion point \( s_e \) is chosen at a low frequency. Algorithm 2 describes the numerical implementation in detail. The main advantage of this method is that the reduction basis \( V_{mech} \) does not need to be computed again after a new transient simulation. In addition, this method does not require the reconstruction of the full temperature state \( x \). Instead, it couples the reduced state \( \tilde{x} \) directly to the mechanical system. This is a great advantage, as computing the full temperature field is a computationally expensive process. The trade-off is that considering all the possible temperature distributions might result in a reduced model with a larger number of DOF.

**Algorithm 2** Thermo-mechanical coupling
```
1: procedure ThermMechReduction(A_mech, F_ext, K_th, V, s_e)
2:     A_mech, F_ext, K_th    ▷ System matrices
3:     V                      ▷ Projection basis of the thermal system
4:     s_e                    ▷ Expansion point of the mechanical system
5:     B_mech = [K_th V  F_ext]   ▷ Create the input matrix of the thermo-mechanical system
6:     B_mech = MODGS(B_mech)     ▷ See Algorithm 9
7:     V_mech = BLOCKARNOLDI(A_mech, E = 0, B_mech, s_e, m_e = 1)   ▷ See Algorithm 5
8:     return V_mech
```

**Numerical example: thermal FE model**

The thermo-mechanical model of Figure 3.3 presented in Section 3.2 is revisited to evaluate the mechanical response. In order to have a fully mechanically constrained system, the following boundary conditions and parameters for the mechanical model are defined:

- The structure is fixed at the location of the guide carriages
- The stiffness value is \( 10^{10} \) Pa in all directions

The outputs of the system are measured at the center of the machine tool table, where the workpiece is placed. The thermally induced displacements can be calculated in the frequency domain. The input of the system is the fluctuation of the environmental temperature at frequencies from \( 10^{-5} \) to 1 rad/s, as illustrated in Figure 3.9. Similarly to the temperature response, the system has a decaying amplitude at higher excitation frequencies. The thermo-mechanical reduced system is computed according to the reduction method of Algorithm 2. Section 3.3 presents the reduction parameters chosen for the thermal FE model under consideration. A maximum frequency of interest \( \omega_{max} = 0.01 \) rad/s and a tolerance \( \epsilon = 0.05 \) are chosen. These parameters result in a thermal system, defined by the projection matrix $V$, of dimension 100. The mechanical inputs and outputs also need to be considered, resulting in a thermo-mechanical reduced model of order 136. The thermo-mechanical response of the full and the reduced system is evaluated. The thermal input of the system is the environmental temperature oscillation and the outputs are the X-, Y- and Z-displacements measured at the center of the table. Figure 3.10 presents the relative error between the reduced and the full model for the frequency range between $10^{-5}$ and 1 rad/s. Similarly to the thermal system, the relative error is negligible at low frequencies, i.e.
The reduced model is selected for a frequency of interest $\omega_{max} = 0.01$ rad/s, where the relative error is expected to remain below $\epsilon = 0.05$. For the thermo-mechanical model under consideration the error remains below 0.01 in the frequency range of interest, as predicted by the error estimator presented in Section 3.3. The main advantage of this coupling method is that it couples the reduced thermal states directly to the reduced thermo-mechanical model, without the need to reconstruct the full temperature field.

Figure 3.10: Relative error between the original and the reduced FRF response shown in Figure 3.9

Model order reduction with varying boundary conditions

Chapter 3 introduces the MOR methods for thermo-mechanical systems. The surrogate models efficiently reproduce the behavior of the original model. If the physical parameters describing the original model are modified, however, the reduced model is no longer valid. The thermal boundary conditions, in particular, are exposed to variations over time. Therefore, any change in the boundary conditions requires reducing the original model again, which is a computationally expensive process. This motivates the development of MOR techniques that enable the traceability of physical parameters. As identified in Section 2.7, there are two boundary conditions for thermo-mechanical models of mechatronic systems that change over time, namely position dependency and varying convective boundary conditions. Section 4.1 describes the interfaces of a thermal model. The interfaces determine the locations where the thermal loads are applied and the temperature outputs are measured. Section 4.2 introduces the methods that allow the position of the axes to be modified after the reduction. Section 4.3 presents the developed MOR techniques for modifying, after reduction, the parameters describing the convective heat exchange with the environment.

4.1 Definition of interfaces for thermal systems

Before introducing the parametric MOR techniques, this section defines the interfaces of a thermal FE model. In the context of this work, an interface refers to the boundaries of the model where the inputs are applied and the outputs are measured. An interface $i$ is defined as a boundary $\Gamma_i$ of the whole domain $\Omega$ of the thermal model. The interface is the union of the boundaries $\Gamma_e$ of all the elements at the boundary $\Gamma_i$. Let $w(z)$ be a scalar function defined for every point $z$ at the boundary $\Gamma_i$. On one hand, this function can represent a thermal input to the system, such as a heat flux. On the other hand, it can represent an output, such as a weight function of the temperature field. Section 3.1 defines the concept of the shape function, which is at the basis of the FEM. The shape functions $n_e(z)$, defined for every node of the element $e$, interpolate the temperature values at the nodes to any point $z$ inside the element. The shape function provides a way to map the scalar function $w(z)$ to a vector of nodal values $\mathbf{w}^e$. This is performed by the following integral over the boundary of the element $e$, $\Gamma_e$, as

$$\mathbf{w}^e = \int_{\Gamma_e} \mathbf{n}_e(z) w(z) dz$$ \hspace{1cm} (4.1)

Each of the nodal values $\mathbf{w}^e$ represents how much of the weight of the scalar function is assigned to each node of the element. The integral in Equation (4.1) is evaluated numerically as a finite summation by means of Gaussian quadrature.
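A minimal sketch of Equation (4.1) for a single two-node boundary element with linear shape functions is given below; the element length and the constant choice of $w(z)$ are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of Equation (4.1) for one two-node boundary element of
# length h with linear shape functions; the integral is evaluated with
# two-point Gaussian quadrature. For w(z) = 1 each node receives half of
# the element length, so the nodal weights sum to the boundary "area".
h = 0.1                                       # element length (hypothetical)
gauss_pts = np.array([-1.0, 1.0]) / np.sqrt(3)  # Gauss points on [-1, 1]
gauss_wts = np.array([1.0, 1.0])

def shape(xi):
    """Linear shape functions n_e on the reference element [-1, 1]."""
    return np.array([(1 - xi) / 2, (1 + xi) / 2])

def w_func(xi):
    """Scalar boundary function w; w = 1 represents a homogeneous heat flux."""
    return 1.0

# w^e = int n_e(z) w(z) dz, mapped to the reference element (dz = h/2 dxi)
w_e = sum(wt * shape(xi) * w_func(xi) * h / 2
          for xi, wt in zip(gauss_pts, gauss_wts))
print(w_e)   # [0.05, 0.05] -> sums to the element length h
```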
Appendix B provides more details on the numerical integration in the FEM. The function $w(z)$ can be any scalar, continuous function defined at the boundary $\Gamma_e$. A particular case is $w(z) = 1$. On one hand, this particular case represents an input of a homogeneous heat flux. In fact, Equation (4.1) is then equivalent to Equation (3.10), where the heat flux in the FEM system is defined. On the other hand, the nodal vector of Equation (4.1) evaluates the mean temperature over the element as

$$\hat{T}^e = \frac{(\mathbf{w}^e)^T \mathbf{\theta}^e}{A_{\Gamma_e}}$$ \hspace{1cm} (4.2)

where $\mathbf{\theta}^e$ are the nodal temperature values and $A_{\Gamma_e}$ is the area of the boundary $\Gamma_e$ at the element $e$. Equation (4.2) can be understood as a weighted average, where the values of the vector $\mathbf{w}^e$ are the weights for each nodal temperature. From the definition of $\mathbf{w}^e$ in Equation (4.1), the sum of all the weights is equal to the area $A_{\Gamma_e}$. The definition of the input and output can be extended from the element $e$ to all the elements of the mesh. The vector $\mathbf{w}^e$ can be assembled over all the elements, resulting in the vector $\mathbf{b}$. Figure 4.1 depicts the values of the vector $\mathbf{b}$ over the interface $\Gamma$. On one hand, an external heat flux of magnitude $\hat{q}$ can then be defined as

$$q_{ext} = \mathbf{b} \hat{q}$$ \hspace{1cm} (4.3)

On the other hand, the mean temperature over $\Gamma$ is

$$\hat{x}_{\Gamma} = \frac{\mathbf{b}^T \mathbf{x}}{A_{\Gamma}} = \mathbf{b}_n^T \mathbf{x}$$ \hspace{1cm} (4.4)

where $A_{\Gamma}$ is the area of $\Gamma$ and $\mathbf{b}_n$ is the vector normalized by the area. The relationship between inputs and outputs can be further exploited to evaluate the heat transfer between two structural parts. The Robin boundary condition of Equation (3.3) defines the thermal contact between two domains $\Omega_1$ and $\Omega_2$. The contact between the parts happens at the boundaries $\Gamma_1$ and $\Gamma_2$ respectively. Compared to other types of boundary conditions, the thermal contact occurs at localized areas. For example, in thermo-mechanical models of machine tools the thermal contact occurs at the bearings or between the guideways and guide carriages. Therefore, it is customary to assume that the temperature distribution at the contact areas $\Gamma_1$ and $\Gamma_2$ is homogeneous [40, 72]. Under this assumption, the heat flux transferred between the two parts is proportional to the difference of the mean temperatures at the two boundaries, i.e.

$$\dot{q} = h \left( \hat{x}_{\Gamma_1} - \hat{x}_{\Gamma_2} \right) = h (\mathbf{b}_n|_{\Gamma_1} - \mathbf{b}_n|_{\Gamma_2})^T x$$ \hspace{1cm} (4.5)

where $h$ is the HTC or thermal contact conductivity (TCC). The heat flux is then applied to $\Gamma_1$ and $\Gamma_2$ according to Equation (4.3) as

$$q = \dot{q}(\mathbf{b}|_{\Gamma_1} - \mathbf{b}|_{\Gamma_2}) = h (\mathbf{b}|_{\Gamma_1} - \mathbf{b}|_{\Gamma_2})(\mathbf{b}_n|_{\Gamma_1} - \mathbf{b}_n|_{\Gamma_2})^T x$$ \hspace{1cm} (4.6)

Equation (4.6) defines the contact between two parts, coupling the thermal models of two different domains. In the context of this work, the boundaries describing this type of coupling are called bushing interfaces. The term bushing interface has its counterpart in distributed interfaces, which are introduced in Section 4.3.1. The bushing interfaces can be used for defining stationary loads and thermal contact between two different parts.
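The input-output duality of the vector $\mathbf{b}$ can be sketched as follows. The nodal weights and temperatures below are hypothetical; the TCC value matches the one used in the guideway example of Section 4.2.

```python
import numpy as np

# Minimal sketch (hypothetical nodal data) of the bushing-interface relations
# (4.3)-(4.6): one assembled vector b per interface serves both as heat-flux
# input and, after normalization by the area, as mean-temperature output.
b1 = np.array([0.2, 0.4, 0.4, 0.2])      # nodal weights on Gamma_1, sum = area
b2 = np.array([0.3, 0.6, 0.3])           # nodal weights on Gamma_2, sum = area
b1n, b2n = b1 / b1.sum(), b2 / b2.sum()  # area-normalized output vectors b_n

x1 = np.array([21.0, 21.5, 21.4, 21.2])  # nodal temperatures on Gamma_1 in degC
x2 = np.array([20.0, 20.1, 20.0])        # nodal temperatures on Gamma_2 in degC

h = 100.0                                 # TCC in W/K, as in the later example
q_dot = h * (b1n @ x1 - b2n @ x2)         # heat flow between the parts, Eq. (4.5)

# Distribute the heat flow over the interface nodes, Eq. (4.3)/(4.6):
# part 1 loses the heat that part 2 gains.
q_nodes_1 = -q_dot * b1n
q_nodes_2 = +q_dot * b2n
print(f"transferred heat flow: {q_dot:.2f} W")
```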
The main advantage of defining a thermal contact with bushing interfaces is that only the input ($B$) and output ($C$) matrices of Equation (3.12) are involved. This assumption decouples the parameter $h$ defining the thermal contact from the system matrix $A$. Therefore, the projection basis $V$ calculated according to the KMS algorithm in Section 3.2 remains valid. However, the simplifying assumption of the bushing interface is no longer valid for large contact areas. If a stationary thermal contact between two large parts is defined, there are well-established contact algorithms, such as Multi Point Constraint (MPC) [15]. Another case of thermal boundary conditions over large areas corresponds to convection. In both cases, convection and thermal contact over large areas, the value of the HTC $h(t, z)$ cannot be easily decoupled from the system matrix $A$. The HTC parameter $h(t, z)$ can be, in principle, any spatial distribution which varies over time. Dedicated parametric MOR techniques handle the traceability of the parameter $h(t, z)$ after reduction. Section 4.3 introduces the parametric reduction focusing on convective boundary conditions.

### 4.2 Moving boundary conditions

Mechatronic systems consist of several parts that can move relative to each other. In order to investigate the thermal behavior of the system, the models need to represent the thermal response at different positions. Section 4.1 introduces the concept of bushing interfaces and their application to thermal contact. However, the definition of the bushing interfaces assumes that the contact zone $\Gamma_1$ is stationary. If there is a relative movement between the two parts, then $\Gamma_1$ depends on the relative position between the two parts. Therefore, the contact area depends on a parameter $s_c$ that defines the position of the parts along a trajectory. In the context of this work, the interface defining the contact between two parts with relative movement is called a moving interface. The moving interfaces are an extension of the Fourier interfaces, introduced by Spescha [113, 115] for static and dynamic models. The definition of the moving interfaces enables the traceability of the position dependency, described by the parameter $s_c$, after the reduction of the system.

**Trigonometric approximation of a moving heat load**

A moving interface defines a non-stationary contact between two parts. The contact between guideway and guide carriage or between ballscrew and nut are examples of moving thermal contacts in machine tools. Figure 4.2 shows schematically the contact between a guideway, which is the stationary part, and the guide carriage, which is the moving part. The area of contact $\Gamma_1$ changes at different relative positions of the two parts.

Figure 4.2: Representation of a moving interface

Firstly, the concept of a moving interface needs to be formalized. Let $z(s) \in \mathbb{R}^3$ be a trajectory in space, where $s \in [0, 1]$ defines any point along the trajectory. Let $\Gamma_1$ be the contact area that can move along the trajectory $z(s)$, with $s_c$ the current position. The heat exchange between the two parts happens only at the contact area. Therefore, a weight function $w(s, s_c)$ can be defined that determines where the heat transfer occurs. Figure 4.2 shows the weight function defining the thermal contact between the guideway and guide carriage.
The weight function satisfies the following conditions:

$$w(s, s_c) = 0 \quad \forall s \notin \Gamma_1$$ \hspace{1cm} (4.7)

$$\int_0^1 w(s, s_c) ds = 1$$ \hspace{1cm} (4.8)

The first condition represents that the heat transfer is zero outside the contact area. The second condition is simply a normalization of the weight function. However, the two conditions do not define $w(s, s_c)$ completely. Therefore, the weight function of Equation (4.9) is proposed. Figure 4.3 depicts the heat distribution defined in Equation (4.9). The weight function has a trapezoidal distribution of the weight centered around $s_c$, with nonzero values along a contact zone of length $L$.

$$w(s, s_c) = \begin{cases} 0 & : s \in [0, s_c - \frac{L}{2}] \\ \frac{9}{2L^2}(s - s_c + \frac{L}{2}) & : s \in (s_c - \frac{L}{2}, s_c - \frac{L}{6}] \\ \frac{3}{2L} & : s \in (s_c - \frac{L}{6}, s_c + \frac{L}{6}] \\ \frac{9}{2L^2}(-s + s_c + \frac{L}{2}) & : s \in (s_c + \frac{L}{6}, s_c + \frac{L}{2}] \\ 0 & : s \in (s_c + \frac{L}{2}, 1] \end{cases}$$ \hspace{1cm} (4.9)

The weight function defines the heat input to the thermal system. It can be defined for any of the infinitely many values of the parameter $s_c \in [0, 1]$. This infinite family of weight functions needs to be discretized, in order to obtain a finite number of inputs to the thermal system. One alternative is to sample the parameter $s_c$ at several values and to define for every sample $s^k_c$ a weight function $w^k(s, s^k_c)$. The weight function for any value of $s_c$ can then be calculated by interpolating between the closest samples $w^k(s, s^k_c)$. However, the main drawback of this approach is that the quality of the approximation of the contact area depends on the number of samples of the weight function $w^k(s, s^k_c)$. In order to have a smooth representation of the contact area, a large number of samples is required. This results in an increase in the number of inputs and therefore in the dimension of the reduced system. This work proposes another alternative in order to describe the parametric dependency of $w(s, s_c)$. The weight function can be described as a finite summation of $n$ terms, decoupling the parameter $s_c$ from the variable $s$. This is called an affine representation of $w(s, s_c)$ and can be expressed as

$$w(s, s_c) = \sum_{k=1}^{n} w_k(s_c) g_k(s)$$ \hspace{1cm} (4.10)

The affine representation of the parametric dependency of the weight function is achieved by means of a trigonometric approximation. The function $w(s, s_c)$ is approximated as a summation of $n_h$ harmonics and a constant term, as expressed in the following equations

$$w(s, s_c) = a_0 + \sum_{k=1}^{n_h} a_k(s_c) \cos(k2\pi s) + \sum_{k=1}^{n_h} b_k(s_c) \sin(k2\pi s)$$ \hspace{1cm} (4.11)

$$a_0 = \int_0^1 w(s, s_c) ds$$ \hspace{1cm} (4.12)

$$a_k(s_c) = \int_0^1 w(s, s_c) \cos(k2\pi s) ds$$ \hspace{1cm} (4.13)

\[ b_k(s_c) = \int_0^1 w(s, s_c) \sin(k2\pi s) ds \] (4.14)

Evaluating the integrals of the Fourier series for the weight function defined in Equation (4.9), the following coefficients are obtained.

\[ a_0 = 1 \] (4.15)

\[ a_k(s_c) = \frac{9}{2L^2k^2\pi^2} \cos\left(\frac{k\pi L}{3}\right) \sin^2\left(\frac{k\pi L}{3}\right) \cos(k2\pi s_c) \] (4.16)

\[ b_k(s_c) = \frac{9}{2L^2k^2\pi^2} \cos\left(\frac{k\pi L}{3}\right) \sin^2\left(\frac{k\pi L}{3}\right) \sin(k2\pi s_c) \] (4.17)

From the properties of the Fourier approximation, increasing the number of harmonics results in a better approximation of the weight function.
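The construction can be checked numerically. The sketch below evaluates the coefficients of Equations (4.12)-(4.14) by quadrature and reconstructs the weight function using the standard real Fourier series, which carries a factor 2 on the harmonics for $k \geq 1$; grid and parameter values are illustrative.

```python
import numpy as np

def weight(s, s_c, L):
    """Trapezoidal weight function of Equation (4.9), normalized to one."""
    d = np.abs(s - s_c)
    w = np.where(d <= L / 6, 3 / (2 * L), 0.0)
    ramp = (d > L / 6) & (d <= L / 2)
    return np.where(ramp, 9 / (2 * L**2) * (L / 2 - d), w)

s = np.linspace(0.0, 1.0, 20001)
s_c, L, n_h = 0.5, 0.25, 4               # n_h = 1/L, rule of thumb of Eq. (4.18)
w = weight(s, s_c, L)

def integrate(f):
    """Riemann sum over the uniform grid on [0, 1] (last point dropped)."""
    return f[:-1].mean()

a0 = integrate(w)                        # = 1 by the normalization (4.8)
w_hat = np.full_like(s, a0)
for k in range(1, n_h + 1):
    ak = integrate(w * np.cos(2 * np.pi * k * s))   # Eq. (4.13) by quadrature
    bk = integrate(w * np.sin(2 * np.pi * k * s))   # Eq. (4.14) by quadrature
    w_hat += 2 * (ak * np.cos(2 * np.pi * k * s) + bk * np.sin(2 * np.pi * k * s))

print(f"a0 = {a0:.4f}, max approximation error = {np.abs(w - w_hat).max():.3f}")
```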
Figure 4.4a and 4.4b show the approximation of a contact area of length \( L = 0.25 \) with 2 and 4 harmonics respectively. The higher the number of harmonics, the better the approximation of the weight function. More interestingly, the number of harmonics required for a good approximation is inversely proportional to the size \( L \) of the contact area. The smaller the value of the parameter \( L \) defining the contact zone, the steeper the weight function and therefore the higher the number of harmonics required to approximate it. Furthermore, the higher the number of harmonics, the smaller the non-zero values of the weight outside the contact zone. Figure 4.4 depicts the trigonometric approximation of the contact area for different values of \( L \). A good compromise between the number of harmonics and the quality of the approximation can be achieved with the following rule of thumb,

\[ n_h = \frac{1}{L} \] (4.18)

Equation (4.10) provides an affine representation of the weight function with a finite number of terms. This representation of the weight function can be used to calculate the heat input to each node, according to Equation (4.1). The nodal values for each harmonic basis function \( g_k(s) \) can be calculated by evaluating the integral for the element \( e \) along the trajectory \( z(s) \)

\[ w_k^e = \int_{\Gamma_e} n_e(z) \cos(k2\pi s) dz \] (4.19)

with the analogous expression for the sine harmonics. The vector \( w_k^e \) can be assembled for all the elements, resulting in the vector \( b_k \) for the \( k \)th harmonic. The vector \( b_k \) depends only on the trajectory and not on the position of the heat source \( s_c \). Finally, the nodal values \( b(s_c) \) at the position \( s_c \) can be calculated by adding the different harmonics with their corresponding weights as

\[ b(s_c) = \sum_{k=1}^{n_h} w_k(s_c)b_k \] (4.20)

Figure 4.5 shows a moving interface along a trajectory \( z(s) \). The nodal values \( b_k \) for the different harmonics are depicted, as well as the resulting nodal weights \( b(s_c) \) at \( s_c = 0.5 \).

Figure 4.4: Trigonometric approximation of the contact area. Black: nominal contact area. Red: approximated contact area

Figure 4.5: Nodal weights of a moving interface

**Numerical example: thermal FE model**

The approach of the trigonometric approximation of a moving thermal contact is illustrated with an example. There are several examples of thermal moving contacts in thermo-mechanical models of machine tools, such as ballscrews, linear guides, bearings or linear motors. This section presents a simplified model of a guideway and guide carriage. Figure 4.6 shows the geometry as well as the FE discretization of the model. The focus of the model of Figure 4.6 is the moving thermal contact between the two parts. The moving thermal contact of the guideway can be defined as in Equation (4.9). The normalized contact length $L$ is 0.25 for the guideway under consideration. A trigonometric approximation of the moving thermal contact is applied, leading to a distribution of the nodal weights as shown in Figure 4.5. Equation (4.18) relates the number of harmonics $n_h$ to the contact length $L$, resulting in $n_h = 4$. On the guide carriage a stationary thermal interface, i.e. a bushing interface, is defined, as explained in Section 4.1. In order to enable the heat exchange between the guideway and guide carriage, a TCC is defined, coupling the thermal models of the two parts.
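A sketch of the assembly of Equation (4.20) is given below. The nodal vectors are random placeholders standing in for the assembled harmonic vectors of Equation (4.19); only the combination logic is illustrated.

```python
import numpy as np

# Sketch (hypothetical nodal vectors) of Equation (4.20): the nodal weights of
# the moving interface at any position s_c are a linear combination of
# position-independent harmonic vectors, computed once per Equation (4.19),
# and the position-dependent Fourier weights of Equations (4.16)-(4.17).
n_nodes, n_h, L = 50, 4, 0.25
rng = np.random.default_rng(1)
b_const = rng.random(n_nodes)           # nodal vector of the constant term
b_cos = rng.random((n_h, n_nodes))      # nodal vectors for cos(k 2 pi s)
b_sin = rng.random((n_h, n_nodes))      # nodal vectors for sin(k 2 pi s)

def C(k):
    """Position-independent factor of the Fourier weights, Eqs. (4.16)-(4.17)."""
    return (9 / (2 * L**2 * k**2 * np.pi**2)
            * np.cos(k * np.pi * L / 3) * np.sin(k * np.pi * L / 3) ** 2)

def assemble_b(s_c):
    """Nodal weight vector b(s_c) of the moving interface, Equation (4.20)."""
    b = b_const.copy()                  # a_0 = 1 for the normalized weight
    for k in range(1, n_h + 1):
        b += C(k) * np.cos(2 * np.pi * k * s_c) * b_cos[k - 1]
        b += C(k) * np.sin(2 * np.pi * k * s_c) * b_sin[k - 1]
    return b

b_mid = assemble_b(0.5)                 # input/output nodal weights at s_c = 0.5
```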
In order to define the thermal model, the following boundary conditions are applied:

- Convection on the remaining boundaries with an HTC of 5 $\frac{\text{W}}{\text{m}^2\text{K}}$
- Constant environmental temperature $T_{env} = 0$ °C
- Thermal contact between the parts with a TCC of 100 $\frac{\text{W}}{\text{K}}$
- Heat flow of 1 W applied to the guide carriage

The heat flow applied to the guide carriage represents the heat dissipated by the friction of the movement between the two parts. The reference environmental temperature $T_{env}$ is set to a homogeneous value of 0 °C. The resulting temperature distribution on the structure can therefore be interpreted as the temperature increase $\Delta T$ from any other homogeneous reference temperature. Figure 4.7 shows the temperature distribution of the guideway and guide carriage at four different positions $s_c$, namely 0.25, 0.375, 0.5, and 0.625. The heat flows from the moving thermal contact into the structure of the guide carriage and guideway. The resulting temperature distribution varies with the relative position of the parts. The trigonometric approximation of the moving thermal contact can be compared to a stationary thermal contact. In order to perform this comparison, a thermal bushing interface with a contact length $L = 0.25$ can be defined. This bushing interface can be placed at different positions $s_c$ along the path, at the same positions evaluated in Figure 4.7. The temperature distribution with a stationary bushing interface serves as a reference to numerically validate the trigonometric approximation proposed in Equation (4.11). The mean value of the temperature at the contact zone at each of the positions is evaluated for the contact with bushing interfaces, $\hat{T}_{bushing}$, which serves as the reference case, and for the trigonometric approximation, $\hat{T}_{fourier}$. Table 4.1 shows the mean values of the temperatures as well as the relative errors. The mean values are calculated using the nodal values of Equation (4.20) as an output matrix. The difference in the mean values at the contact zone remains below 0.13 % for the four positions evaluated. Table 4.1 compares the mean value of the temperature at the contact zone. However, it is also interesting to evaluate the temperature distribution along the trajectory $z(s)$. The mean value of the temperature with the stationary interface is $\hat{T}_{bushing}(s)$, while the mean value with the trigonometric approximation is $\hat{T}_{fourier}(s)$. A relative error can be defined as

\begin{table}[h] \centering \begin{tabular}{c c c c} Position $s_c$ [-] & $\hat{T}_{bushing}$ [K] & $\hat{T}_{fourier}$ [K] & Relative error [-] \\ \hline 0.250 & 0.015076 & 0.015083 & 0.00048 \\ 0.375 & 0.014275 & 0.014261 & 0.00097 \\ 0.500 & 0.013944 & 0.013927 & 0.00120 \\ 0.625 & 0.014003 & 0.013985 & 0.00124 \\ \end{tabular} \caption{Mean temperature at the thermal contact zone. Comparison of a stationary contact ($\hat{T}_{bushing}$) with a trigonometric approximation of a moving contact ($\hat{T}_{fourier}$)} \end{table}

\[ e(s) = \frac{\hat{T}_{bushing}(s) - \hat{T}_{fourier}(s)}{\hat{T}_{bushing}(s)} \] (4.21)

Figure 4.8 illustrates the relative error \( e(s) \) along the trajectory for different positions of \( s_c \). It can be observed that the error is small around the contact area, i.e. close to \( s_c \), while it increases for locations further away from the heat source. The relative error \( e(s) \) between the two approaches remains below 2.2 % along the whole trajectory.
Figure 4.8: Relative error \( e(s) \) of the temperature distribution of the trigonometric approximation of the moving contact. The relative error \( e(s) \) is evaluated along the trajectory for different positions of \( s_c \).

### 4.3 Varying convective boundary conditions

This section presents a reduction method to trace the parameters describing the convective boundary conditions after reduction. Section 4.3.1 reviews the definition of convection in FE thermal models, introducing the concept of the distributed interface. Sections 4.3.2 and 4.3.3 present two different reduction approaches that enable the traceability of the HTC after reduction.

#### 4.3.1 Definition of interfaces for thermal systems

In order to develop the parametric reduction method, the introduction of convection in the FE discretization needs to be reviewed. Equation (3.3) defines the Robin boundary condition, stating that the heat flux applied at the boundary is proportional to the temperature difference between the structure and the surrounding fluid. The proportionality constant between the heat flux and the temperature difference is the HTC, \( h(z, t) \), which is defined for every point \( z \) of the boundary \( \Gamma_{conv} \), as stated in Equation (3.3). Section 3.1 provides a detailed explanation of the derivation of the discretization of the heat transfer equations, leading to the discretized FE system of Equation (3.11) and its state space representation of Equation (3.12). The FE discretization presented in Equation (3.11) allows any spatial representation of the HTC on the boundaries. FE models usually discretize the spatial distribution into a finite number of convective boundaries as

\[ \Gamma_{conv} = \bigcup_{i=1}^{n_{dist}} \Gamma_i \] (4.22)

For every boundary \( \Gamma_i \), a function \( h_i(z, t) \) describing the HTC is defined. It can be further assumed that a separation between the spatial and the temporal distribution is possible, such that

\[ h_i(z, t) = h_i(t)f_i(z) \] (4.23)

The separation of the function \( h_i(z, t) \) implies that the spatial distribution of the HTC, \( f_i(z) \), does not change over time and only the magnitude of the HTC is time dependent. If the spatial distribution of the HTC differs between time steps, a more refined discretization of the boundaries, as in Equation (4.22), can always be found so that the spatial distribution of the HTC is constant over time. In most applications, a spatially homogeneous boundary \( f_i(z) = 1 \) is enough to describe the convection. The discretization of the convective boundaries of Equation (4.22), in combination with the separation of the spatial and temporal dependencies of Equation (4.23), makes it possible to express the state space representation of Equation (3.12) as

\[ E\dot{x}(t) = A x(t) + \sum_{i=1}^{n_{dist}} h_i(t)D_i x(t) + Bu(t) \] (4.24)

The system matrix of Equation (4.24) is \( A_d(t) = A + \sum_{i=1}^{n_{dist}} h_i(t)D_i \). The time dependency of the system can be understood as a parametric dependency, where the \( h_i \) are a set of \( n_{dist} \) parameters. This is an affine representation of the system matrix. The \( D_i \) are the convection matrices, as defined in Equation (3.9) and further explained in Appendix B. The convection matrix \( D_i \) is a symmetric matrix with zeros in all the DOF not included in the boundary \( \Gamma_i \).
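The affine structure of Equation (4.24) can be exploited directly in time integration, as the following sketch with random stand-in matrices illustrates: the system matrix is reassembled from the constant part and the convection matrices whenever an HTC value changes.

```python
import numpy as np

# Minimal sketch (random stand-in matrices) of the affine system matrix of
# Equation (4.24): A_d(t) = A + sum_i h_i(t) D_i is cheaply reassembled when
# an HTC value changes, here inside one implicit Euler step.
n = 30
rng = np.random.default_rng(2)
E = np.eye(n)                              # heat capacity matrix (stand-in)
A = -0.5 * np.eye(n)                       # conduction part (stand-in, stable)
D = [-np.diag(rng.random(n) * (rng.random(n) > 0.7)) for _ in range(2)]
B = rng.random((n, 1))                     # input matrix (stand-in)
dt = 10.0                                  # time step in s

def step(x, h, u):
    """One implicit Euler step of E x' = A_d x + B u with HTC values h."""
    A_d = A + sum(h_i * D_i for h_i, D_i in zip(h, D))
    return np.linalg.solve(E - dt * A_d, E @ x + dt * (B @ u))

x = step(np.zeros(n), h=[5.0, 8.0], u=np.array([1.0]))  # HTC in W/(m^2 K)
```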
Figure 4.9 presents an example of a beam exposed to a convective boundary condition with an environmental temperature \( T_{air} \). Due to the convection, each of the nodes of the boundary is exposed to a heat flow \( q_i \). Considering that in Equation (4.24) the convection matrix \( D_i \) is zero outside the area of convection, the heat flow \( q_i \) applied to the node \( i \) is proportional to the difference between the temperature \( x_i \) at the node and the environmental temperature, i.e. \( q_i \propto x_i - T_{air} \). These types of interfaces are called distributed interfaces, as opposed to the bushing interfaces defined in Section 4.1. The heat flow of a bushing interface is proportional to the difference between the mean value at the interface, \( \hat{x}_\Gamma \), as defined in Equation (4.4), and the external temperature. This external temperature can be the mean value of the temperature at the contact area of another part or, as in the example of Figure 4.9, the environmental temperature \( T_{air} \). This proportionality can be expressed as \( q_i \propto \hat{x}_\Gamma - T_{air} \). Herein lies the main difference between distributed and bushing interfaces. The bushing interface approximates the Robin boundary condition by considering the mean value of the temperature, while the distributed interface considers that the heat flow introduced at each node is proportional to the temperature of that node. Therefore, the distributed interfaces modify the system matrix, as expressed in Equation (4.24). Distributed interfaces require the development of MOR approaches that can handle the parametric dependency of the system matrix. One straightforward alternative to include distributed interfaces in MOR is to consider the heat flows $q_i$ applied to each of the nodes as independent inputs. At every node of $\Gamma$, a heat flow can be applied independently of the value of $x_i - T_{air}$. However, the size of the reduced system is directly proportional to the number of inputs in the KMS method. Therefore, this approach is only feasible if the boundary $\Gamma$ has a small number of nodes. This is not the case for thermal models of complex mechatronic systems. Considering the example of Figure 1.1, the original system is composed of 124667 thermal DOF. Among all the nodes of the FE discretization, 40012 are located on the convective boundaries, which corresponds to over 30% of the original DOF. Considering all these nodes at the convective boundary as independent inputs results in an unfeasibly large reduced system. In the following sections, two parametric reduction approaches to deal with the convective boundary conditions are introduced.

### 4.3.2 Parametric reduction with a global reduction basis: bilinearization

Section 2.4 classified the parametric MOR approaches into two groups: local basis and global basis. MOR techniques with a local basis create several reduced systems at sampled values of the parameters, while a global reduction basis is valid at several values of the parameters. This section presents a parametric MOR method with a global basis to trace the changes of the HTC, which is the parameter describing the convective boundary conditions. The aim of this method is to create a projection basis $V$ that projects the system of Equation (4.24) into a reduced subspace for any value of the HTC $h_i$. Before introducing the reduction method, a closer inspection of the system of Equation (4.24) is required.
The system matrix is separated into two terms: a constant term $A$ and a finite sum of $n_{dist}$ terms $h_i D_i$ depending on the parameters $h_i$. This is called an affine representation of the parameter dependence. The system of Equation (4.24) with an affine representation of the system matrix can also be understood as a special class of non-linear systems. The term $\sum_{i=1}^{n_{dist}} h_i D_i x(t)$ can be interpreted as an input of the system that depends linearly on the state. This class of non-linear systems is called bilinear. Bilinear systems are linear in the input and in the state, but not jointly linear in both. This special class of non-linear systems appears e.g. in control theory [85], RC-circuits with non-linear resistors [23] or thermal models of electrical motors [25]. Section 4.3.1 introduced the concept of distributed interfaces to model convective boundary conditions. The convection matrix at the distributed interface $i$ is $D_i$. The convection matrix is a diagonal matrix with zeros for all the DOF not affected by the distributed interface $i$, i.e.

$$D_i = \text{diag}(b_k), \quad b_k \neq 0 \quad \forall k \in \Gamma_i, \quad k = 1 \ldots n$$ (4.25)

where the $b_k$ are nodal weights, $\Gamma_i$ is the set of nodes at the distributed interface $i$, and $n$ is the dimension of the original system. Let $n_{d,i}$ be the number of DOF in $\Gamma_i$. The convection matrix $D_i$ can be understood as a linear map whose range has a dimension equal to the number of DOF at the distributed interface $i$

$$\dim(\text{range}(D_i)) = n_{d,i}$$ (4.26)

Another important consideration for the development of the reduction method is determining the dimension of the subspace spanned by $(s_e E - A)^{-1} D_i$. The matrix $(s_e E - A)^{-1}$ can be understood as a linear transformation $\mathcal{A}$. Given that $(s_e E - A)^{-1}$ is invertible, $\mathcal{A}$ is a bijective linear transformation, i.e. $\dim(\text{range}((s_e E - A)^{-1})) = n$. Thus,

$$\dim(\text{range}(\mathcal{A} \circ D_i)) = n_{d,i}$$ (4.27)

The intersection of the subspaces spanned by the several convection matrices also needs to be discussed. In thermo-mechanical models of mechatronic systems, the different convective boundary conditions affect different surfaces. Thus, it can be stated that $\Gamma_i \cap \Gamma_j = \emptyset \quad \forall i \neq j$. Therefore

$$\text{range}(D_i) \cap \text{range}(D_j) = \{0\} \quad \forall i \neq j$$ (4.28)

This can be extended to the range of $\mathcal{A} \circ D_i$ as

$$\text{range}(\mathcal{A} \circ D_i) \cap \text{range}(\mathcal{A} \circ D_j) = \{0\} \quad \forall i \neq j$$ (4.29)

Let $V_{KMS}$ be the KMS reduction basis of the system without parametric convective boundary conditions. The basis $V_{KMS}$ spans the linear subspace defined in Equation (3.18). All the states of the reduced system are a linear combination of the columns of $V_{KMS}$, i.e. $x \in \text{span}(V_{KMS})$. In order to consider the state-dependent inputs $D_i x$, an iterative approach is applied. The states multiplied by the convection matrix, i.e. $D_i V_{KMS}$, provide a new set of inputs for the reduction. The matrix $D_i V_{KMS}$ can be added as further inputs to the Krylov subspace of Equation (3.16), providing a new set of vectors $V_i^1$. This set of new basis vectors multiplied by the convection matrix, i.e. $D_i V_i^1$, provides new inputs. They can be added to the Krylov subspace, delivering a new set of basis vectors $V_i^2$.
This iterative process can be expressed as

\[ \text{span}(V_{KMS}) = \text{span}((s_e E - A)^{-1} B) + \text{span}(V_\mu) \]
\[ V_i^1 = (s_e E - A)^{-1} D_i V_{KMS} \]
\[ \ldots \]
\[ V_i^k = (s_e E - A)^{-1} D_i V_i^{k-1} \]
\[ \ldots \]
\[ V_i^{n_{me}} = (s_e E - A)^{-1} D_i V_i^{n_{me}-1} \]
(4.30)

where \( n_{me} \) is the number of iterations. The sum of the subspaces spanned by the matrices \( V_{KMS}, V_i^1, \ldots, V_i^k, \ldots, V_i^{n_{me}} \) is the reduced subspace \( V_i \)

\[ V_i = \text{span}\{V_{KMS}\} + \text{span}\{(s_e E - A)^{-1} D_i V_{KMS}\} + \ldots + \text{span}\{((s_e E - A)^{-1} D_i)^{n_{me}-1} V_{KMS}\} \] (4.31)

Comparing the definition of \( V_i \) with the definition of the Krylov subspace, it can be stated that

\[ V_i = K_{n_{me}}\{(s_e E - A)^{-1} D_i, V_{KMS}\} \] (4.32)

For the creation of the subspace \( V_i \), \( n_{me} < n_{d,i} \) terms are selected. This process of adding new vectors to the reduction basis \( V_{KMS} \) can be understood as a Krylov reduction of the subspace spanned by \( (s_e E - A)^{-1} D_i \). The reduced subspace can be calculated for all the \( n_{dist} \) convective boundary conditions. Given \( V_i \) and \( V_j \) for two different convective boundary conditions, it can be stated that

\[ V_i \cap V_j = \text{span}(V_{KMS}) \] (4.33)

which follows from the properties of the convection matrices shown in Equation (4.29). The reduction subspace \( V_p \) is the sum of the subspaces \( V_i \), i.e.

\[ V_p = \sum_{i=1}^{n_{dist}} V_i = \text{span}(V) \] (4.34)

where \( V \) is an orthonormal basis of the subspace \( V_p \). The KMS reduction considers certain modes of the system, \( V_\mu \), in order to form the projection basis. The KMS method selects modes up to a certain frequency. The error estimator ensures that the error with respect to the original system remains below a given tolerance in the frequency range of interest. However, the system matrix changes with the introduction of the convective boundary conditions, as shown in Equation (4.24). Thus, the eigenvalues and eigenvectors of the system change. It is required to determine whether more modes need to be considered in order to ensure that the error remains bounded below a certain \( \epsilon \) in the frequency range of interest. In order to investigate this, Weyl's inequality theorem is presented as a preliminary result. Theorem 2 bounds the eigenvalues of the sum of two matrices \( M + N \) given the eigenvalues of \( M \) and \( N \).

**Theorem 2.** *Weyl's Inequality* Let \( M, N \in \mathbb{R}^{n \times n} \) be symmetric matrices and let \( S = M + N \). Let \( \alpha_1 \geq \alpha_2 \geq \ldots \geq \alpha_n \), \( \beta_1 \geq \beta_2 \geq \ldots \geq \beta_n \), and \( \gamma_1 \geq \gamma_2 \geq \ldots \geq \gamma_n \) be the eigenvalues of the matrices \( M \), \( N \), and \( S \) respectively. Then,

\[ \gamma_j \leq \alpha_i + \beta_{j-i+1} \quad \forall i \leq j \] (4.35)

**Proof.** See Bhatia [20], Chapter 3.

The result of Theorem 2 can be used to analyze the properties of the eigenvalues of the system of Equation (4.24) with increasing values of the HTC \( h_i \).

**Theorem 3.** Let the system matrix be \( A^I_d = A + \sum_{i=1}^{n_{dist}} h^I_i D_i \), where \( h^I_i \geq 0 \) is a sample of the \( i \)th parameter defining the \( n_{dist} \) convective boundary conditions. Let \( 0 \geq \alpha^I_1 \geq \alpha^I_2 \geq \ldots \geq \alpha^I_n \) be the eigenvalues of \( A^I_d \).
Let the system matrix be \( A^m_d = A + \sum_{i=1}^{n_{dist}} h^m_i D_i \), where \( h^m_i \) is another sample of the \( i \)th parameter such that \( h^m_i \geq h^I_i \geq 0 \). Let \( 0 \geq \alpha^m_1 \geq \alpha^m_2 \geq \ldots \geq \alpha^m_n \) be the eigenvalues of \( A^m_d \). Then, \( \alpha^m_j \leq \alpha^I_j \) for \( j = 1 \ldots n \).

**Proof.** The system matrices \( A \) and \( D_i \) are negative semi-definite, i.e. all the eigenvalues are real and non-positive. The result of Theorem 2 can be particularized for \( i = j \) as

\[ \gamma_j \leq \alpha_j + \beta_1 \] (4.36)

Additionally, considering a negative semi-definite matrix \( N \), it can be stated that

\[ \gamma_j \leq \alpha_j + \beta_1 \leq \alpha_j \] (4.37)

as \( \beta_1 \leq 0 \). Let \( \Delta h_i \) be the difference between \( h^m_i \) and \( h^I_i \) for \( i = 1 \ldots n_{dist} \), i.e. \( \Delta h_i = h^m_i - h^I_i \). From the statement of this theorem, \( \Delta h_i \geq 0 \). The system matrices can be expressed as \( A^m_d = A^I_d + \sum_{i=1}^{n_{dist}} \Delta h_i D_i \). Applying the particularization of Theorem 2 to negative semi-definite matrices of Equation (4.37), the following result is obtained

\[ \alpha^m_j \leq \alpha^I_j \] (4.38)

The result of Theorem 3 states that the higher the HTC, the more negative the eigenvalues of the system become. The physical interpretation of this theorem is that the higher the convective heat exchange with the surroundings, the faster the time constants of the system. On the other side, a system with a low HTC evacuates the heat less efficiently and thus has slower time constants. Theorem 3 can be particularized for the case that the initial HTC values are equal to zero, i.e. \( h^I_i = 0 \) for all \( i = 1 \ldots n_{dist} \). This is the case in the first iteration of the bilinearization, as shown in Equation (4.30). The eigenvalues of the system without convective boundary conditions are less negative. Thus, the eigenfrequencies of the selected eigenmodes in \( V_\mu \) can only increase in magnitude after including the convection matrices. Therefore, the reduced system with the proposed bilinearization remains valid in the frequency range of interest. The proposed parametric MOR approach extends the KMS projection basis to include the parametric dependency of the HTC. Therefore, the dimension of the reduced system increases with respect to the non-parametric reduced model. The dimension of the parametric model, \( r_p \), depends on the number of convective boundary conditions, $n_{dist}$, the number of iterations, $n_{me}$, and the dimension of the non-parametric reduced system, $r$, as

$$r_p = r(1 + n_{me}n_{dist}) \quad (4.39)$$
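The statement of Theorem 3 is easy to verify numerically. The following sketch uses small random stand-in matrices with the required negative semi-definiteness:

```python
import numpy as np

# Numerical illustration of Theorem 3 (random stand-in matrices): A and D_1
# are negative semi-definite, so increasing the HTC h_1 can only shift the
# eigenvalues of A_d = A + h_1 D_1 towards more negative values.
rng = np.random.default_rng(3)
n = 20
M = rng.standard_normal((n, n))
A = -M @ M.T                                          # negative semi-definite
D1 = -np.diag(rng.random(n) * (rng.random(n) > 0.5))  # convection matrix

eig_low = np.linalg.eigvalsh(A + 5.0 * D1)    # h_1 = 5  W/(m^2 K)
eig_high = np.linalg.eigvalsh(A + 50.0 * D1)  # h_1 = 50 W/(m^2 K)

# eigvalsh returns ascending eigenvalues; Theorem 3 holds pairwise
assert np.all(eig_high <= eig_low + 1e-9)
print("slowest eigenvalue:", eig_low[-1], "->", eig_high[-1])
```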
The implementation of the bilinear reduction combined with the KMS method is summarized in Algorithm 3.

**Algorithm 3 Bilinear Reduction**
```
procedure BILINEARREDUCTION(A, D_i, E, B, ω_m, n_guess, n_max, s_e, m_me, m_d)
A, E, B ▶ System matrices
D_i ▶ Distributed interface matrix i ∈ [0, n_dist − 1]
ω_m ▶ Maximum considered eigenfrequency
n_max, n_guess ▶ Maximum number of modes and guessed number of modes below ω_m
s_e, m_me ▶ Expansion point and number of moments
m_d ▶ Number of moments for the bilinear reduction
V_KMS = KMS(A, E, B, ω_m, n_guess, n_max, s_e, m_me) ▶ See Algorithm 1
V = V_KMS ▶ Initialize reduction basis
for i = 0 : n_dist − 1 do ▶ Loop over all the distributed interfaces
    for k = 0 : m_d − 1 do ▶ Loop over all the moments
        if k = 0 then
            V_k = (A − s_eE)^−1 D_i V_KMS
            V_k = REDRANGE(V_k, V) ▶ Reduce range of the basis V_k
        else
            V_k = (A − s_eE)^−1 E D_i V_k
            V_k = REDRANGE(V_k, V) ▶ Reduce range of the basis V_k
        V = ORTH(V, V_k) ▶ Extend range of V with the basis V_k
return V
```

**Numerical example: thermal FE model**

The bilinear reduction approach presented in this section is further explained with a numerical example of a simple thermal FE model. The study case is the machine tool table illustrated in Figure 3.3. The simple geometry under consideration leads to an original system with a small number of DOF, i.e. 4157 thermal DOF. The small size of the original system facilitates the evaluation of its thermal response in order to compare it with the reduced system. In order to define the thermal model, the convective boundary conditions need to be defined. The convection to the environment is separated into two different areas, as shown in Figure 4.10. A value of the HTC is associated with each convective boundary condition, namely HTC_top and HTC_bot. Similarly to the example in Section 3.2, the machine tool table is exposed to the following thermal loads:

- Convection to the environmental temperature
- Heat dissipated by the linear drive

For the KMS reduction, an expansion point $s_e = 10^{-8}$ rad/s is selected in order to match the steady state response. In addition, the highest frequency ($\omega_m$) included in the projection basis needs to be provided. Section 3.3 proposes a bound on the error between the reduced and the original system. For this example, the frequency range of interest extends up to $\omega_{max} = 0.01$ rad/s. In that frequency range the maximum error is set to $\epsilon = 0.05$. Therefore, according to Equation (3.39), $\omega_m$ is 0.04367 rad/s. The expansion point $s_e$ and the maximum considered eigenfrequency $\omega_m$ define the KMS basis $V_{KMS}$ without considering the distributed interfaces. The bilinear reduction extends the basis $V_{KMS}$ with the information about the parametric distributed interfaces. For this thermal model, the number of distributed interfaces $n_{dist}$ is 2. The number of moments for the bilinear reduction, $m_d$, is set to 2, which is deemed enough for the complexity of the model under consideration. In order to evaluate the performance of the reduction method, the FRFs of the thermal response of the original and reduced systems are compared. The input of the FRF is the fluctuation of the environmental temperature. The output of the FRF is the temperature measured at the drive of the Y-axis. The FRF is evaluated for different values of the parameters describing the convective boundary conditions. The error between the reduced and the original system is calculated between $10^{-5}$ and 1 rad/s according to Equation (3.23) for the considered system input and output.
Figure 4.11 depicts the relative error between the original system and the system reduced by means of the parametric KMS method. The error is negligibly small at low frequencies, as the chosen expansion point matches the steady state response. At high frequencies the error increases, reaching values close to 1. The reduced system succeeds in reproducing the thermal response of the system in the frequency range of interest, i.e. up to 0.01 rad/s. Additionally, Figure 4.11 shows that the error estimator of Equation (3.39) is an upper bound of the relative error for all frequencies. Figure 4.11 shows the reduction errors for a single input-output combination: the environmental temperature as input and the mean temperature at the drive as output. Figure 4.12 extends the evaluation of the relative error between the reduced and original systems to other input and output combinations, fixing the values of HTC$_{top}$ and HTC$_{bot}$ to 4 and $8 \frac{W}{m^2K}$ respectively. For the considered numerical example, the relative errors for the different input and output combinations show more variability than the errors of Figure 4.11. The proposed error estimator remains an upper bound of the relative error for all the evaluated combinations. The FRF of the relative error shows that the reduced system reproduces the thermal response of the original system in the frequency range of interest for different combinations of the HTC. In order to study the thermal response of the reduced system further, it is also interesting to compare its eigenfrequencies to those of the original system. Let $\omega_i$ be the $i$th eigenvalue of the system of Equation (4.24) and let $\tilde{\omega}_i$ be the $i$th eigenvalue of the system projected onto the subspace spanned by the projection basis $V$ of Equation (4.34). The relative error for the $i$th eigenvalue can be defined as

$$e_i = \left| \frac{\tilde{\omega}_i - \omega_i}{\omega_i} \right|$$ \hspace{1cm} (4.40)

Figure 4.13 shows the relative error of the first 90 eigenvalues. The relative errors are evaluated for different combinations of the HTC, considering the same values as in Figure 4.11. The eigenfrequencies of the systems lie in the frequency range of interest, ranging from $7 \cdot 10^{-5}$ to 0.042 rad/s. The relative error of the eigenvalues increases at higher mode numbers, as illustrated in Figure 4.13. For the thermal FE model under consideration, the maximum relative error between the eigenvalues of the reduced and original system remains below $2 \cdot 10^{-5}$. This example shows that the reduced system accurately retains the eigenvalues and the frequency response of the original system for different combinations of values of the HTC.

Figure 4.13: Relative error between the first 90 eigenfrequencies of the reduced and original system for different combinations of values of $\text{HTC}_{\text{top}}$ and $\text{HTC}_{\text{bot}}$ in $\frac{\text{W}}{\text{m}^2\text{K}}$

### 4.3.3 Parametric reduction with a local reduction basis: switching boundary conditions

The previous section presented a reduction method based on a global projection matrix. The main advantage of the global reduction basis is that the reduced model remains valid for any parameter value. As a trade-off, it results in reduced systems of larger dimension. Local projection matrices are an alternative to global methods and the focus of this section. Local reduction bases are created at specific values of the model parameters.
Therefore, the reduced system projected into the local subspace is only valid for the specific values of the parameters. The main advantage of these methods is that they result in reduced systems with a smaller number of DOF, at the cost of losing the flexibility with respect to changes of the model parameters. This section focuses on the HTC as the main parameter to be traced. Equation (4.24) describes the system with $n_{\text{dist}}$ interfaces, each of them with an HTC $h_i$. The set of parameters $h_i$ can be sampled $n_s$ times, obtaining $n_s$ systems as

$$E\dot{x}(t) = Ax(t) + \sum_{i=1}^{n_{\text{dist}}} h_i^l D_i x(t) + Bu(t)$$ \hspace{1cm} (4.41)

where \( l \in [1, n_s] \). For each of the samples a new system matrix \( A_l \) can be defined as

\[ A_l = A + \sum_{i=1}^{n_{dist}} h_i^l D_i \quad l \in [1, n_s] \] (4.42)

For each \( A_l \), a new local projection matrix \( V_l \) can be created. Each projection matrix \( V_l \) projects the system into a subspace \( V_l \), creating \( n_s \) reduced systems as

\[ \tilde{E}_l \dot{\tilde{x}}_l(t) = \tilde{A}_l \tilde{x}_l(t) + \tilde{B}_l u(t) \quad l \in [1, n_s] \] (4.43)

The local projection matrices create a set of \( n_s \) reduced systems that are valid for a specific sample of the HTC. If the parameter values switch from one sample \( l \) to another sample \( l + 1 \), an interpolation between the two systems is required. In principle, interpolating directly between the reduced states \( \tilde{x}_l \) and \( \tilde{x}_{l+1} \) is not possible, as the projection matrices \( V_l \) and \( V_{l+1} \) are in different generalized coordinate systems. This would require performing the interpolation in the full system. However, reconstructing the full state \( x \) from \( \tilde{x}_l \) and \( \tilde{x}_{l+1} \) is computationally very expensive. Panzer et al. [94] proposed a coordinate transformation of the local reduction bases in order to enable the interpolation directly in the reduced system. The authors introduced a rotation of the subspace by a matrix \( Q \) such that the systems are in the same coordinate system. This is done by the Procrustes transformation. Firstly, the Procrustes transformation is introduced. Let \( V_1 \) be the matrix of the reference local basis and \( V_l \) the matrix of another local basis. The orthogonal Procrustes problem finds a rotation matrix \( Q_l \) that minimizes

\[ \min \| V_1 - V_l Q_l \|_F \] (4.44)

such that \( Q_l Q_l^T = I \), where \( \| \cdot \|_F \) is the Frobenius norm. As explained by Golub and Van Loan [45], the rotation matrix that minimizes Equation (4.44) is related to the SVD of the matrix \( V_l^T V_1 \) as

\[ U_l^T (V_l^T V_1) W_l = \Sigma = \text{diag}(\sigma_1, \ldots, \sigma_r) \] (4.45)

where the \( \sigma_i \) are the singular values of \( V_l^T V_1 \). The product of the left and right matrices of the SVD provides the rotation matrix that minimizes the norm of Equation (4.44), i.e.

\[ Q_l = U_l W_l^T \] (4.46)

Once the Procrustes transformation is introduced, the rotation matrices \( Q_l \) can be applied to the local projection matrices \( V_l \). Taking \( V_1 \) as reference, the reduction matrices of the subsystems can be transformed as

\[ \tilde{V}_l = V_l Q_l \] (4.47)

After the transformation, the interpolation between the reduced systems spanned by \( \tilde{V}_l \) is enabled. The computational cost of the Procrustes transformation is small: it only requires computing the SVD of \( V_l^T V_1 \in \mathbb{R}^{r \times r} \), where \( r \ll n \) is the dimension of the reduced system.
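The transformation can be condensed into a few lines, as the following sketch with random orthonormal stand-in bases shows:

```python
import numpy as np

# Minimal sketch of the Procrustes alignment of two local reduction bases,
# Equations (4.44)-(4.47): the rotation Q_l = U_l W_l^T aligns V_l with the
# reference basis V_1, enabling interpolation in the reduced coordinates.
rng = np.random.default_rng(4)
n, r = 100, 8
V1 = np.linalg.qr(rng.standard_normal((n, r)))[0]   # reference local basis
Vl = np.linalg.qr(rng.standard_normal((n, r)))[0]   # basis at another HTC sample

U, _, Wt = np.linalg.svd(Vl.T @ V1)   # SVD of V_l^T V_1, Eq. (4.45)
Ql = U @ Wt                            # rotation matrix, Eq. (4.46)
Vl_tilde = Vl @ Ql                     # transformed basis, Eq. (4.47)

# Q_l is orthogonal and reduces the distance to the reference basis
assert np.allclose(Ql @ Ql.T, np.eye(r))
print(np.linalg.norm(V1 - Vl, "fro"), ">=", np.linalg.norm(V1 - Vl_tilde, "fro"))
```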
Algorithm 4 shows the numerical implementation of the reduction with local bases.

**Algorithm 4 Switch Reduction**
```
1: procedure SwitchReduction(A_l, E, B, ω_m, n_guess, n_max, s_e, m_e)
2: E, B ▷ System matrices
3: A_l ▷ System matrices at the HTC samples l ∈ [1, n_s]
4: ω_m ▷ Maximum considered eigenfrequency
5: n_max, n_guess ▷ Maximum number of modes and guessed number of modes below ω_m
6: s_e, m_e ▷ Expansion point and number of moments
7: for l = 1 : n_s do ▷ Loop over all the HTC samples
8:     V_l = KMS(A_l, E, B, ω_m, n_guess, n_max, s_e, m_e) ▷ See Algorithm 1
9:     if l > 1 then
10:        U_l, Σ, W_l = SVD(V_l^T V_1) ▷ Singular value decomposition of V_l^T V_1
11:        Q_l = U_l W_l^T ▷ Procrustes transformation matrix, Equation (4.46)
12:        Ṽ_l = V_l Q_l ▷ Transform basis V_l, Equation (4.47)
13: return V_1, Ṽ_2, …, Ṽ_{n_s}
```

The reduction method with local reduction bases is applicable to any sample of the values of the HTC. The convective boundary conditions of mechatronic systems might switch from one value to another during operation. An example of such a switch in the boundary conditions is the transition from natural to forced convection. This sudden change of the convection might happen, for instance, when turning on the structural coolant, introducing cutting fluid into the working space or connecting a fan for mist extraction. The transition between discrete values of the HTC is an application case for the reduction method based on local projection matrices.

**Numerical example: thermal FE model**

This section presents a numerical example in order to illustrate the reduction method with switching boundary conditions. The study case is the thermo-mechanical model described in Section 4.3.2 and depicted in Figure 3.3. The HTC parameters HTC$_{\text{top}}$ and HTC$_{\text{bot}}$ describe the convective boundary conditions at the top and bottom of the machine tool table. Similarly to the study case of Section 4.3.2, there are two different thermal inputs for this model, i.e. a source located at the linear drive and the environmental temperature. The numerical example describes a sudden change in the value of HTC$_{\text{top}}$. The switch of the convective boundary condition represents a sudden modification of the flow of the surrounding fluid. The switch of the boundary conditions might correspond, for instance, to the introduction of fluid media on the table during the cutting process. The switch of the boundary conditions requires the creation of two local reduction bases, $V_1$ and $V_2$. Each of the reduced bases represents the behavior of the system under a different value of the boundary condition described by the parameter HTC$_{\text{top}}$. The bases are calculated independently by means of the KMS reduction method presented in Chapter 3.
In order to enable the interpolation between the two reduced systems, the rotation matrix $Q_2$ needs to be calculated according to Equations (4.45) and (4.46). The rotation matrix $Q_2$ transforms the projection basis $V_2$ into $\tilde{V}_2$, enabling the interpolation between the two reduced systems. The switching boundary condition leads to a linear time-variant (LTV) system. In order to show the response of an LTV system, the transient response needs to be evaluated. For the numerical example under consideration, the following load case is considered:

- Constant, homogeneous environmental temperature of 20 °C
- Heat source at the linear drive of 20 W
- After 12 h, sudden switch of the boundary condition described by $HTC_{top}$. The HTC varies from 5 to 50 $\frac{W}{m^2K}$

Figure 4.14 shows the transient response of the temperature evaluated at the top of the table. The simulation represents the behavior of the system over 24 h, with the switch of the boundary condition occurring after 12 h. The change of the value of the boundary condition requires the switch between the two reduced systems, whose projection matrices are $V_1$ and $\tilde{V}_2$ respectively. Figure 4.14 shows that the reduction algorithm presented in this section allows interpolating between the two systems during the transition, enabling the simulation of varying boundary conditions.

*Figure 4.14: Mean temperature at the top of the table over 24 h with switching boundary conditions*

Software implementation

This chapter focuses on the software implementation of the methods presented in the previous chapters. The methods developed in this work constitute the simulation software MORe, Model Order Reduction and more. Spescha [113] started the development of this simulation framework, considering static and structural dynamic effects. This chapter presents the extension of MORe to include thermo-mechanical effects. The software package is designed to offer an efficient workflow to create physical models of mechatronic systems. A graphical user interface (UI) as well as a comprehensive application programming interface (API) facilitate the model development and the analysis of the results. These are the main features of MORe:

- Automated import of the model into MORe
- Efficient model setup
- Dedicated analyses for characterizing the behavior of mechatronic systems
- Fast simulation integrating the developed MOR techniques
- Interactive visualization of the results

5.1 Efficient model setup

MORe is a simulation framework designed for the efficient simulation of mechatronic systems. The software package is developed in Python [3], an object oriented programming language. The software offers a UI designed to facilitate the model setup. The Traits package [4] enables efficient attribute and event handling. MORe also provides a comprehensive API, allowing the scripting of user-specific tasks. The combination of a full-featured UI and scripting capabilities enables the user to handle complex modeling tasks. The first step in developing a virtual prototype is the FE discretization of the model, creating the equations of the system. MORe relies on the commercial FE software ANSYS [2] to deal with the FE discretization. ANSYS is one of the most widely used FE software packages, both in academia and in industry. Figure 5.1 shows schematically the tool chain used.
An ANSYS Mechanical macro (JScript) imports into MORe all the required information from ANSYS. The first task of the script is to import the geometrical information of the part. The second task of the script is to import the FE information by means of an ANSYS Parametric Design Language (APDL) macro. The APDL macro creates the system matrices of the thermal and mechanical systems, including the thermo-mechanical coupling matrices. Then the APDL macro exports the information of the elements and nodes as well as the system matrices in Harwell-Boeing format. Once the required information is exported, the model setup continues in MORe. In order to understand the development of a MORe model, some terminology needs to be introduced. Figure 5.2 shows the schematics of a MORe model. Composition refers to the whole assembly, containing all the information defining the mechatronic system. The composition is an assembly of components, i.e. the different structural parts. The component contains the information about the thermal and the mechanical model. Figure 5.3 displays schematically the information contained in a component. For each of the models, the interfaces and the systems are defined. The interfaces define the inputs and outputs of each of the models. There are three types of interfaces in a thermal model:

- Distributed interface refers to stationary interfaces over a large surface, defining the convection to the environment.
- Bushing interface refers to stationary interfaces. These interfaces can be used to define an input (e.g. heat flux) or an output (e.g. temperature measurement point). They are not meant to be used to define convective boundary conditions.
- Moving interface defines a non-stationary input over a path (e.g. moving heat source or contact region).

The definition of bushing and moving interfaces is similar for the mechanical model. Apart from defining the interfaces, the models also contain the information about the systems. The systems provide the system matrices that define the response of the model. A thermal model has the following system types:

- Original system provides the thermal system matrices of the full model, as imported from the FE software.
- Reduced system provides the system matrices after reduction as well as the reduction basis. The reduced system considers both the steady state and the thermal dynamic response.

A mechanical model has the following system types:

- Original system provides the mechanical system matrices of the full model.
- Reduced system provides the system matrices after reduction as well as the reduction basis. The reduced system considers both the mechanical static and dynamic response.
- Reduced thermo-mechanical system provides the system matrices which couple a reduced thermal system of the same component to the mechanical response. The reduced thermo-mechanical systems only consider the static part of the mechanical response, including the body loads due to temperature.
- Rigid system is a lumped parameter model with the degrees of freedom of a rigid solid.

Following the schematics of a MORe composition of Figure 5.3, the next step consists in defining the connections between the components. That is the function of the links, connecting an interface of a component to an interface of another component. Additionally, a link can connect an interface to the ground. The term ground refers in MORe to the inertial system for mechanical models or to the environmental temperature for thermal models.
The link properties define the behavior of the link. For mechanical links, the link properties refer to the stiffness and damping of the machine element. For thermal links, they refer to the TCC and HTC. In order to define the composition representing the mechatronic system, the kinematic configuration and the control system need to be defined. The kinematic configuration defines the direction of movement of the axes. The controllers define the location of the measurement system as well as the gains that define the control system.

5.2 Thermo-mechanical analyses

The previous section describes the process to set up a MORe model. All the information required to describe the behavior of the mechatronic system is contained in the composition. MORe offers dedicated analyses to study the thermal and mechanical response of the system.

- Steady state analysis provides the response of the mechatronic system after reaching the steady state. Several combinations of thermal loads (e.g. convection or heat flux) or axis positions can be evaluated in the same analysis.
- Thermal frequency analysis provides the FRF, where the inputs are thermal loads. The outputs of the FRF can be structural temperatures or thermo-mechanical displacements.
- Transient analysis determines the transient thermal response under thermal loads. Time-dependent thermal loads can be defined as an input. The initial temperature can be a homogeneous value over the whole structure. The analysis also supports non-homogeneous initial conditions, which can be the result of a steady state simulation.

Once the thermal response of the machine is evaluated, the thermo-mechanical displacements can be evaluated. The static analysis provides the mechanical response of the system under quasi-static loads. Forces or torques (e.g. gravity or preloads) can be applied as quasi-static loads. Additionally, these quasi-static loads can be body forces due to a temperature distribution that differs from the reference temperature. The temperature field resulting from a thermal analysis can be coupled to the mechanical system, providing the thermally induced displacements at any time of the thermal transient analysis or any load case of the steady state analysis. After the analysis is completed, the results can be evaluated in the postprocessor. The postprocessor provides full-featured animation of the simulation results. The Mayavi package [103] enables the visualization of the geometry, structural deformation, and temperature distribution. Mayavi is an open-source, Python-based 3-D visualization package for scientific applications. The postprocessor also offers different ways to interact with the data, such as 2D plots, tables, and an exporter to MATLAB [1]. The simulation results can be stored independently of the composition.

5.3 Numerical implementation

The developed MOR techniques are integrated in MORe, enabling the efficient evaluation of the thermo-mechanical response of mechatronic systems. This section describes some details of the numerical implementation included in MORe. During the reduction process, large linear systems of equations of the type $Ax = b$ need to be solved. Due to their large dimension, solving these systems of equations is a computationally expensive process. Therefore, selecting the right algorithm for solving the system of equations directly affects the performance. There are two types of solvers, namely direct and iterative methods. Direct methods, mainly based on Gaussian elimination, solve the system with a predefined number of operations, which is suitable for small to midsize problems. Their main drawback is that they require large storage, are hard to parallelize, and show a slow performance for large systems. SuperLU [75], integrated in SciPy [68], provides a set of sparse direct solvers. For large systems, iterative methods are more suitable. Saad [104] provides an overview of iterative methods for sparse linear systems. Iterative solvers require less memory, are easier to parallelize, and need only a few matrix-vector multiplications per iteration. The main disadvantage is that there is no guarantee of convergence, and the convergence rate can be slow for ill-conditioned systems. In order to reduce the number of iterations, it is customary to use preconditioning, improving the condition number. For large systems, an incomplete LU decomposition (ILU) combined with the generalized minimal residual (GMRES) iterative solver is efficient.
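Both solver families are available through SciPy's sparse linear algebra module. The following is a minimal sketch contrasting them on a generic sparse test matrix standing in for a large FE system; the matrix, tolerances, and problem size are illustrative only and do not reflect MORe's implementation:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse tridiagonal test system standing in for a large FE matrix
n = 5000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Direct method: SuperLU factorization; fixed cost, reusable factors
lu = spla.splu(A)
x_direct = lu.solve(b)

# Iterative method: GMRES with an incomplete LU (ILU) preconditioner,
# which improves the condition number and reduces the iteration count
ilu = spla.spilu(A, drop_tol=1e-5)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)
x_iter, info = spla.gmres(A, b, M=M, atol=1e-10)

assert info == 0 and np.allclose(x_direct, x_iter, atol=1e-4)
```

For a well-conditioned system like this one, both approaches agree; the direct factorization pays off when the factors can be reused for many right-hand sides, as is the case during reduction.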
Another important aspect of the numerical implementation is the selection of the numerical integration scheme for the system of ODEs. The solver needs to have a variable time step, in order to adapt to variations of the thermal load data. Additionally, the solver needs to handle stiff systems of ODEs. Hindmarsh [56] and the Lawrence Livermore National Laboratory developed ODEPACK, a collection of ODE solvers. The ODEPACK solvers are implemented in Fortran, and SciPy provides a Python wrapper. Among the different solvers provided in this package, this work uses LSODA [99]. LSODA can automatically switch between methods for stiff problems and methods suited for non-stiff problems. The Jacobian can be provided as an input to the solver, speeding up the solution. LSODA is a solver with an adaptive time step, in order to adapt to sudden changes of the transient loads. The transient data is provided at discrete time steps. In order to ensure a smooth behavior between the data points, cubic interpolation is considered.
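A minimal sketch of this integration setup, using SciPy's LSODA wrapper and a cubic spline of the discrete load data on a small stand-in system (the matrices and the load profile are illustrative, not a MORe model):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import CubicSpline

# Stand-in reduced thermal system dT/dt = A @ T + B @ u(t), small and dense
rng = np.random.default_rng(0)
A = -np.eye(10) * rng.uniform(0.5, 2.0, 10)   # stable decay rates
B = rng.uniform(size=(10, 1))

# Thermal load given at discrete time steps; cubic interpolation keeps
# the input smooth between data points
t_data = np.linspace(0.0, 86400.0, 25)        # one day, hourly samples
u_data = 20.0 + 2.0 * np.sin(2 * np.pi * t_data / 86400.0)
u = CubicSpline(t_data, u_data)

def rhs(t, T):
    return A @ T + B[:, 0] * u(t)

# LSODA adapts its step size and switches between stiff and non-stiff
# methods; the constant Jacobian A is passed to speed up the solution
sol = solve_ivp(rhs, (0.0, 86400.0), np.full(10, 20.0),
                method="LSODA", jac=lambda t, T: A, dense_output=True)
```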
Chapter 3 and Chapter 4 introduce the MOR approaches for the efficient simulation of thermo-mechanical models of mechatronic systems. The proposed reduction methods are implemented in the simulation software MORe, especially designed for an efficient simulation workflow. This chapter presents efficient thermo-mechanical models of two study cases, applying the reduction methods and the software framework developed in the context of this work. Section 6.1 presents a model of the thermal errors of a 5-axis machine tool due to environmental temperature fluctuations. Section 6.2 focuses on the effect of internal heat losses in another 5-axis machine tool. Section 6.3 extends the thermo-mechanical model of the 5-axis machine tool to account for the effects of cutting fluid.

6.1 Thermal error model: environmental temperature fluctuations

Section 6.1.1 presents the machine tool under investigation and describes in detail the thermo-mechanical model. Section 6.1.2 describes the validation of the thermo-mechanical model, comparing the simulated and measured thermal response of the machine tool to variations of the environmental temperature. The use of reduced models enables a large number of evaluations of the model. This can be used to calculate the sensitivity of the outputs to the variation of model parameters. Once the model is validated, Section 6.1.3 introduces several analyses in order to understand the thermal behavior of the investigated machine tool.

6.1.1 Description of the thermo-mechanical model

The machine tool under investigation in this section is a 5-axis milling machine with a swiveling axis, a rotary axis, and a horizontal spindle. The kinematic configuration according to ISO 10791-1:2015 [62], adapted for machining centers with vertical spindles, is

\[ V \left[ w C2' B' X' b Y Z (C1) t \right] \]

Figure 6.1 shows the kinematic configuration of the machine tool. In order to investigate the thermal behavior of the machine tool, a thermo-mechanical model is developed in MORe, the software environment presented in Chapter 5. Table 6.1 summarizes the number of nodes and elements of the thermo-mechanical model. Due to the complex geometry of the machine tool bed, a large number of elements is required for the FE discretization.

| Component | $N^o$ of nodes | $N^o$ of elements |
|-----------------|----------------|-------------------|
| Bed | 280373 | 174906 |
| X-axis | 53189 | 21082 |
| Y-axis | 68901 | 21752 |
| Z-axis & Spindle| 21707 | 10927 |
| B-axis | 11306 | 6121 |
| C-axis | 22866 | 14160 |
| Total | 458342 | 248948 |

The machine elements, such as bearings and guideways, provide the connection between the machine tool components. The TCC is the parameter defining the heat flow from one component to the other. The values of the TCC are supplied by the manufacturers of the machine elements. Where the data is not available, the TCC of similar machine elements is assumed. The linear axes of the machine tool under investigation are actuated by direct linear drives. Between the stationary and the moving part of the linear drives there is an air gap (AG), which transfers heat between the stationary and the moving part. The parameter determining the heat transferred through this AG is the HTC of the gap, $h_{AG}$. Jang et al. [31] proposed a simplified thermal resistance model of the AG of a linear motor, as

\[ \dot{Q}_{AG} = h_{AG} \cdot A_{AG} \cdot (T_1 - T_2) = \frac{1}{R_{AG}} \cdot (T_1 - T_2) \] \hspace{1cm} (6.1)

\[ R_{AG} = \frac{1}{h_{AG} \cdot A_{AG}} = \frac{1}{A_{AG}} \left( \frac{1}{h} + \frac{t_L}{\lambda_L} + \frac{1}{h} \right) \] \hspace{1cm} (6.2)

\[ \frac{1}{h_{AG}} = \frac{t_L}{\lambda_L} + \frac{2}{h} \] \hspace{1cm} (6.3)

where \( h \) is the convective HTC between the stationary and the moving part, \( t_L \) is the width of the gap, and \( \lambda_L \) is the thermal conductivity of the stationary air in the AG. Table 6.2 shows the values used for the evaluation of the HTC of the AG. The convective HTC \( h \) is calculated according to empirical formulas [44]. These correlations use a characteristic length, \( L \), defined as the ratio between the area of the surface and its perimeter.

**Table 6.2: Parameters describing the HTC, \( h_{AG} \), of the AG**

| Axis | \( t_L \) [mm] | \( \lambda \) [W/mK] | \( L \) [m] | \( h \) [W/m²K] | \( h_{AG} \) [W/m²K] |
|--------|----------------|-----------------------|--------------|-----------------|----------------------|
| X-Axis | 2.2 | \( 26 \cdot 10^{-3} \) | 0.248 | 2.42 | 1.1 |
| Y-Axis | 2 | \( 26 \cdot 10^{-3} \) | 0.197 | 2.52 | 1.5 |
| Z-Axis | 4.8 | \( 26 \cdot 10^{-3} \) | 0.372 | 2.27 | 0.94 |
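Equation (6.3) reduces to a one-line helper; a minimal sketch, checked against the X-axis row of Table 6.2:

```python
# Helper evaluating Equation (6.3) for the HTC of the air gap (AG)
def h_air_gap(t_l, lam_l, h):
    """t_l: gap width [m]; lam_l: air conductivity [W/mK];
    h: convective HTC on either side of the gap [W/m2K]."""
    return 1.0 / (t_l / lam_l + 2.0 / h)

# X-axis values of Table 6.2: t_l = 2.2 mm, lam = 26e-3 W/mK, h = 2.42 W/m2K
print(round(h_air_gap(2.2e-3, 26e-3, 2.42), 1))  # -> 1.1 W/m2K
```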
In order to describe the thermal behavior of the machine tool, the cooling circuits of the different components need to be included in the model. The temperature of the cooling fluid is controlled in an external tank, with a constant set temperature. The pump pressurizes the fluid and distributes it in parallel to the different circuits. A large amount of fluid is pumped into the cooling system, ensuring a constant temperature over the whole cooling circuit. Therefore, the thermo-mechanical model assumes a fixed temperature on the surfaces of the cooling channels. Natural convection determines the heat exchange between the machine tool structure and the environment. This is explained in further detail in Section 6.1.2. Once the thermal model is defined, the mechanical model is developed. The machine tool bed is fixed to the inertial frame by four supports. One of the supports is completely fixed, with a finite linear and rotational stiffness. The other three supports are only fixed in the vertical direction. The values of the stiffness are provided by the machine tool manufacturer and summarized in Table 6.3. The guideways and bearings connect the different components mechanically. They provide a stiffness in every direction except for the direction allowing the movement (axial direction for the guideways and rotational direction for the bearings). Table 6.3 states the stiffness values provided by the specifications of the manufacturer.

Table 6.3: Stiffness at the mechanical links (linear stiffness in N/m, rotational stiffness in Nm/rad)

| Link | Axial | Transversal | Normal | Roll | Pitch | Yaw |
|--------------------|-------|-------------|--------|------|-------|-----|
| Fixed Support | $1 \cdot 10^9$ | $1 \cdot 10^9$ | $1 \cdot 10^9$ | $1 \cdot 10^9$ | $1 \cdot 10^9$ | $1 \cdot 10^9$ |
| Floating Support | 0 | 0 | $1 \cdot 10^9$ | 0 | 0 | 0 |
| Rail - Carriage | 0 | $1 \cdot 10^9$ | $2.4 \cdot 10^9$ | $3 \cdot 10^5$ | $3 \cdot 10^5$ | $3 \cdot 10^5$ |
| Bearing B & C | $1 \cdot 10^9$ | $1 \cdot 10^9$ | $1 \cdot 10^9$ | 0 | $1 \cdot 10^9$ | $1 \cdot 10^9$ |
| Motor B & C | 0 | 0 | 0 | $8 \cdot 10^7$ | 0 | 0 |

The model includes the control system, which is responsible for keeping the axes at the commanded position. The mechanical system responds quasi-statically to the variations in the temperature field. Therefore, the linear drives correct instantaneously the position of the linear axes according to the measured values of the glass scales. Once the definition of the thermo-mechanical model is completed, each of the components of the composition is reduced according to the MOR methods of Sections 3.2 and 3.4. The parameters for the KMS of the thermal system are

- Expansion point \( s_0 = 10^{-8} \text{ rad/s} \)
- Maximum considered eigenfrequency $\omega_m = 0.001$ rad/s
- Maximum error between reduced and original system $\varepsilon = 0.05$, according to the error estimator of Equation (3.39)

Considering the error estimator of Section 3.3, the KMS reduction leads to a thermal system with 194 DOF from the 408,535 DOF of the original system. In order to characterize the response of the machine tool under investigation, a thermo-mechanical reduced model is created. The thermo-mechanical coupling uses an expansion point $s_0 = 30$ rad/s, in order to capture the static mechanical behavior of the system under the thermal loads. The resulting assembled mechanical model has 838 DOF from the 1,223,129 DOF of the original system. The thermal and thermo-mechanical reduced models shorten significantly the time required to evaluate the model, enabling applications requiring a large number of model evaluations. This is of great importance in facilitating the validation process presented in Section 6.1.2.
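For illustration, the core of such a moment-matching reduction can be written compactly. The following is a generic one-sided, shift-inverted Krylov sketch for a thermal system $C_{th}\dot{T} + KT = Bu$ around an expansion point $s_0$; it only indicates the idea and is not necessarily the exact KMS algorithm of Section 3.2 (all names are placeholders):

```python
import numpy as np
import scipy.sparse.linalg as spla

def krylov_basis(K, C, B, s0=1e-8, n_moments=20):
    """Generic one-sided rational Krylov basis for C dT/dt + K T = B u.
    K: conductivity matrix, C: capacity matrix, B: input vector (sparse)."""
    lu = spla.splu((K + s0 * C).tocsc())       # one factorization, reused
    v = lu.solve(B)                            # matches the 0th moment at s0
    V = [v / np.linalg.norm(v)]
    for _ in range(n_moments - 1):
        v = lu.solve(C @ V[-1])                # next moment direction
        for q in V:                            # Gram-Schmidt orthogonalization
            v -= (q @ v) * q
        V.append(v / np.linalg.norm(v))
    return np.column_stack(V)

# Galerkin projection of the full system onto the basis V:
#   K_r = V.T @ K @ V,  C_r = V.T @ C @ V,  B_r = V.T @ B
```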
6.1.2 Validation of the thermo-mechanical model

In order to validate the model, the predicted TCP displacements are compared to the measured values. The first part of this section presents the measurement setup and the thermal load case under consideration. The second part of this section performs a sensitivity analysis in order to investigate the influence that the model parameters have on the thermal displacements. Once the sensitivity of the model is introduced, the third part of this section searches for the values of the parameters, within a certain range of physical relevance, that minimize the difference between the measured and simulated thermo-mechanical response.

**Thermal error measurement**

In order to validate the thermo-mechanical model, the thermal response of the machine tool is measured. ISO 230-3 [64] proposes a measurement setup to characterize the thermal response of machine tools at one position of the working space. Figure 6.2 illustrates the ISO 230-3 measurement setup, which evaluates the relative deviation between a mandrel attached to the spindle and a fixture bolted to the machine tool table. A set of 5 linear displacement sensors measures the displacements between the parts. The machine tool is powered and the control places the axes at the commanded position. This section focuses on the response of the machine tool to fluctuations of the environmental temperature. Changes in the environmental temperature stand out as one of the main external thermal error sources limiting the precision of machine tools. Great efforts are devoted to mitigating these effects, such as the acclimatization of production facilities. This results in higher energy demands in order to achieve the precision requirements. Therefore, investigating the response of machine tools to environmental temperature fluctuations is key to improving the overall thermal behavior. For the purpose of investigating environmental effects, this section presents the thermal response of the machine tool to a step of the environmental temperature. Figure 6.3 shows the considered load case, the step response over 24 h inside a temperature-controlled room. As illustrated in Figure 6.3, there is a spatial variability of the environmental temperature. The air temperature is measured outside the machine tool housing at 3 different locations at increasing heights, namely ground, bottom, and top. The higher the measurement location, the faster the measured air temperature reacts to changes. Figure 6.3 also depicts the air temperature inside the machine tool housing, namely the machine room (MR) temperature, affecting all the surfaces covered by the enclosure. The cooling fluid is provided to the different cooling circuits by an external tank. The tank has a temperature control unit, which is set to 22 °C over the whole measurement time. This temperature data is used as input for the thermo-mechanical model, describing the air and cooling fluid temperatures for the different convective boundary conditions.

**Sensitivity analysis**

The model depends on several physical parameters describing, for instance, the material properties or the thermal boundary conditions. The values of these parameters are obtained from different sources, such as stiffness values of bearings from the specifications of the manufacturer or empirical correlations for the values of the HTC. However, there is an uncertainty associated with the values of the physical parameters, which are not deterministic values. Therefore, the intrinsic uncertainty of the physical parameters of the model needs to be considered in order to understand the thermo-mechanical model.
The thermo-mechanical model investigates the thermal behavior of the machine tool under fluctuations of the environmental temperature, as illustrated in the load case of Figure 6.3.

*Figure 6.3: Load case for validation: drop of the environmental temperature. MR stands for machine room temperature*

The convective heat exchange determines the thermo-mechanical response of the system. Thus, this section focuses on investigating the parameters describing convection, i.e. the HTC. The convective heat exchange represents the heat transfer between the structure and the surrounding fluid media. The thermo-mechanical model introduces these effects by means of the Robin boundary conditions, stated in Equation (3.3), or distributed interfaces, according to the MORe terminology introduced in Section 4.3. The Robin boundary condition approximates a complex interaction between the two media. This approximation can be considered a linearization of the heat transfer between the fluid media and the structural parts. Empirical correlations [44] as well as meta models based on CFD simulations [97] provide a first approximation to the values of the parameters describing the convective heat transfer. However, the values of the HTC depend on the conditions of the airflow inside and outside the machine tool enclosures. These flow conditions are too complex to be assessed and monitored. Therefore, there is a large variability associated with the values of the HTC. In order to deal with the large variability of the HTC, the sensitivity of the model outputs to changes of the values of the HTC is investigated using a sensitivity analysis. The sensitivity analysis can identify the boundary conditions with the largest impact on the thermally induced displacements. The results of the analysis can be used to devote further efforts to constraining the values of the HTC associated with the most sensitive boundary conditions. Saltelli et al. [107] describe the sensitivity analysis in 5 steps: model setup, definition of the outputs of the model, definition of the uncertainty of the model parameters, Monte Carlo simulation, and evaluation of the sensitivity. The first step is to set up the model, which is described in detail in Section 6.1.1. The convective boundary conditions are defined as distributed interfaces in the MORe terminology introduced in Section 4.3. The discretization of the convective heat exchange into $n_{dist}$ distributed interfaces accounts for the different air flows inside and outside the machine tool housing. Equation (4.24) is the parametric state space representation of the thermal system with the $n_{dist}$ distributed interfaces, where the parameters are the HTC, i.e. $h_i$. The TCP displacements can be considered the mechanical output of the system, $\eta_{mech}(t)$, according to Equation (3.67). The original model is reduced using the MOR approach presented in Section 4.3.2. The reduced system allows the efficient calculation of the thermal response for different values of the HTC. For the reduction, the following numerical parameters are used:

- Expansion point $s_0 = 10^{-8}$ rad/s
- Number of distributed interfaces $n_{dist} = 13$
- Number of moments for the bilinear reduction $m_d = 2$

The second step focuses on the determination of the outputs of interest for the sensitivity analysis. The thermo-mechanical model predicts the position and orientation errors at the TCP relative to the workpiece.
One alternative is to evaluate the sensitivity of the outputs at every time step. Sampling the thermal response at every time step for all the different combinations of the HTC is a computationally expensive process, even with a reduced system. This work opts for evaluating only the difference between the initial and the final state, representing the response of the system to the step of the load case of Figure 6.3. It is assumed that if the difference between the initial and the final state is sensitive to a certain parameter, the transient response of the system is also sensitive. Before proceeding with the sensitivity analysis, the outputs of the system need to be formalized. For a given thermal frequency, the transfer function can be evaluated according to Equation (3.14). For the low frequency range, the thermal outputs can be evaluated as

$$y_{th} = -CA^{-1}Bu \quad (6.4)$$

Considering the linearity of the model, the difference between the initial and the final state can be directly evaluated by considering the input vector $u$ as the difference between the initial and the final input. For the mechanical response, the output matrix $C_{mech}$ defined in Equation (3.67) is included. Thus, the mechanical output of the system can be evaluated as

$$y_{mech} = -C_{mech}A^{-1}Bu \quad (6.5)$$

The next step of the sensitivity analysis is the evaluation of the uncertainty of the model parameters. The correlations available in the literature for natural convection provide values between 1 and 10 $\frac{W}{m^2K}$ for the surfaces of the structure of the machine tool. Therefore, a uniform distribution between these two values is assumed for the sensitivity analysis. The following step is to sample the parameter space according to the distribution of the values of the parameters of the model. Several sampling techniques are available in the literature, such as Monte Carlo sampling or Latin Hypercube sampling. In this work, a Monte Carlo sampling considering the uncertainty of the input parameters is performed. The parameter sample is obtained from UQLab, an uncertainty quantification software implemented in MATLAB by Marelli and Sudret [77]. The dimension of the parameter space to be sampled is 13, which corresponds to the number of HTC defining the convective boundary conditions of the thermo-mechanical model. The model is evaluated $10^4$ times with different combinations of the HTC. The size of the parameter sample is considered large enough to evaluate the response of the model over the whole parameter space. The outputs of the model are calculated according to Equation (6.5), providing the TCP displacements relative to the workpiece in X-, Y-, and Z-direction. Once the Monte Carlo simulation is performed, the response of the model for the different combinations of the values of the HTC can be evaluated. The scatter plot of Figure 6.4 represents the response of the system for the $10^4$ different combinations of the HTC. Scatter plots are a useful tool to understand the variability of the outputs to changes in one parameter. The horizontal axis represents the variation of one of the parameters of the model, namely the HTC of the machine room (MR). The vertical axis represents the simulated outputs, the thermal displacements in Y-direction. The increase of the HTC of the MR results in a positive trend of the thermally induced displacements in Y-direction. This suggests that the convection inside the enclosure has a direct effect on the thermal displacements in Y-direction.

*Figure 6.4: Scatter plot of the thermal displacements of the Monte Carlo simulation. The output of the system is the displacements in Y-direction for different values of all HTC of the thermo-mechanical model. The scatter plot shows the variation of the displacements in Y-direction for values of the HTC of the MR*
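Each Monte Carlo sample therefore costs one linear solve of Equation (6.5). A minimal sketch of the evaluation loop follows; `assemble_A`, `B`, `C_mech`, and `du` are placeholders for the reduced parametric system and the input step, not MORe's actual API:

```python
import numpy as np

def tcp_displacements(h_sample, assemble_A, B, C_mech, du):
    """Steady-state output of Eq. (6.5) for one HTC sample:
    y = -C_mech A(h)^-1 B du, with A(h) the reduced system matrix."""
    A = assemble_A(h_sample)                     # reduced, small, dense
    return -C_mech @ np.linalg.solve(A, B @ du)

rng = np.random.default_rng(1)
H = rng.uniform(1.0, 10.0, size=(10_000, 13))    # uniform HTC samples
# Y = np.array([tcp_displacements(h, assemble_A, B, C_mech, du) for h in H])
```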
In order to facilitate the visualization of the scatter plot, the average value and standard deviation of the outputs of the Monte Carlo simulation can be evaluated. Figure 6.5 illustrates the mean value and the standard deviation of the displacements in X-, Y-, and Z-direction. The mean value and standard deviation are calculated in steps of $0.1 \frac{W}{m^2K}$ for two of the HTC describing the convection with the environment, i.e. the machine room (MR) and the electrical cabinet (EC). On one hand, Figure 6.5 shows that the modification of the value of the HTC of the MR results in a significant change of the displacement values in Y- and Z-direction. On the other hand, Figure 6.5 illustrates that the convective heat exchange inside the EC does not affect the thermally induced displacements in Y- and Z-direction. The results of Figure 6.5 do not show a significant variation of the displacements in X-direction with variations of the HTC of the MR or the EC. This suggests that other convective boundary conditions might have a greater impact on the displacements in this direction. The plots of Figure 6.5 can be extended to illustrate all the combinations of the considered model parameters and model outputs.

*Figure 6.5: Mean value and standard deviation of the results of the Monte Carlo simulation. The outputs of the system are the TCP displacements relative to the workpiece in X-, Y-, and Z-direction. The model parameters are the HTC of the electrical cabinet (EC) and machine room (MR)*

The scatter plots of Figure 6.4 and the averages and standard deviations of Figure 6.5 visualize the changes in the model outputs with variations of the input parameters. However, these representations of the Monte Carlo simulation do not provide a quantitative estimation of the sensitivity of the outputs to the model parameters. Therefore, the next step is to determine the sensitivity indices to rank the model parameters in order of importance. A global sensitivity analysis (GSA) investigates how much the simultaneous modification of the input parameters affects the results of the model. The GSA explores the sensitivity of the model in the whole parameter space. This is in contrast to a local sensitivity analysis, which investigates the variation of the model outputs for modifications of the input parameters in the vicinity of their nominal values. This work opts for the Sobol sensitivity indices [112, 111]. The Sobol indices, based on the evaluation of the variance of the model outputs, are a widely accepted method to evaluate the sensitivity of physical models (see Iooss and Lemaître [61]). Let $Y$ be the output of a model and $X_i$ the $i$-th uncertain input of the model; the first order sensitivity index $S_i$ of the parameter $i$ is

\[ S_i = \frac{Var(E(Y|X_i))}{Var(Y)} \] \hspace{1cm} (6.6)

where \( Var(\cdot) \) stands for the variance and \( E(\cdot) \) for the expected value. \( E(Y|X_i) \) stands for the mean value of the conditional probability distribution of \( Y \) given a value of \( X_i \). The first order sensitivity index \( S_i \) describes the isolated influence of the parameter \( X_i \). The sensitivity indices of higher order describe the interactions between the parameter \( X_i \) and all other parameters.
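The first-order index of Equation (6.6) can be estimated directly from such a Monte Carlo sample, for example with a simple binning estimator of the conditional means. The sketch below is only illustrative; the work itself uses UQLab and a PCE meta model, introduced next:

```python
import numpy as np

def sobol_first_order(x_i, y, n_bins=50):
    """Binning estimator of S_i = Var(E(Y|X_i)) / Var(Y) from MC data."""
    edges = np.quantile(x_i, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x_i, side="right") - 1,
                  0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.array([(idx == b).sum() for b in range(n_bins)])
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

# Check on Y = X1 + 0.1*X2: S_1 should be close to 1/1.01, i.e. ~0.99
rng = np.random.default_rng(2)
X = rng.uniform(size=(100_000, 2))
Y = X[:, 0] + 0.1 * X[:, 1]
print(round(sobol_first_order(X[:, 0], Y), 2))
```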
In order to quantify the effects of all the interaction terms between the different parameters of the model, the total sensitivity index \( S_{T_i} \) can be defined. \( S_{T_i} \) describes the influence of the parameter \( X_i \) considering the interactions with all other parameters, i.e. \( X_{\sim i} \). The total sensitivity index \( S_{T_i} \) is defined as

\[ S_{T_i} = 1 - \frac{Var(E(Y|X_{\sim i}))}{Var(Y)} \] \hspace{1cm} (6.7)

The Sobol indices can be calculated directly from the results of the Monte Carlo simulation. As explained by Sudret [116], the Sobol indices can also be calculated from a polynomial chaos expansion (PCE) meta model. The PCE uses the information of the Monte Carlo simulation to derive a meta model and evaluates analytically the associated sensitivity indices. UQLab [77] provides dedicated functions to create the PCE as well as to evaluate the Sobol sensitivity indices. Figure 6.6 illustrates the total Sobol indices of all the parameters considered in the Monte Carlo simulation for the thermal displacements in X-, Y-, and Z-direction. For the model under investigation, the difference between the total and the first order Sobol indices is negligible. This implies that the interaction and higher order terms of the Sobol decomposition are not as influential as the first order effects. Therefore, for the subsequent sensitivity analysis only the total Sobol indices defined in Equation (6.7) are considered. The results of the sensitivity analysis of Figure 6.6 show that 3 to 4 parameters are the most relevant to describe the thermal response in each direction. As already shown in Figure 6.4, the output in Y- and Z-direction is sensitive to the variation of the HTC of the MR. The sensitivity analysis presented in Figure 6.6 also determines which model parameters do not have an effect on the model outputs. For the model and the load case under consideration, the HTC of the EC, machine bed ground, B-axis, C-axis, sensor holder, mandrel, and emergency brakes do not have a relevant contribution to the outputs. Thus, an accurate estimation of the values of these parameters is not required. Only the HTC with a significant effect on the model outputs are investigated further in the next section.

**Parameter identification**

The previous section introduces the sensitivity analysis, which determines the most relevant parameters to describe the thermal response of the machine tool to fluctuations of the environmental temperature. The next step of the validation process requires identifying the set of HTC that provides the best match of the model to the measured thermal response. The results of the sensitivity analysis state that 6 out of the originally 13 HTC of the model are relevant to describe the thermo-mechanical behavior of the model. Similarly to the sensitivity analysis of the previous section, a Monte Carlo simulation is carried out with the six parameters with the highest sensitivity indices. The model is evaluated \( 10^5 \) times for different combinations of the remaining HTC. The parameters are varied within a range between 1 and \( 10 \frac{W}{m^2K} \). For each sample \( l \) of the Monte Carlo simulation, the model provides the simulated TCP displacements relative to the workpiece in X-, Y-, and Z-direction, \( x_s^l, y_s^l, \) and \( z_s^l \), according to Equation (6.5).

*Figure 6.6: Total Sobol sensitivity index. The outputs of the system are the TCP displacements relative to the workpiece in X-, Y-, and Z-direction. The model parameters are the HTC*

The simulated outputs are compared with the measured outputs, designated as $x_m$, $y_m$, and $z_m$. The root mean square error (RMSE) between the measurement and the simulation for the parameter sample $l$ is

$$E_{RMSE}^l = \sqrt{(x_m - x_s^l)^2 + (y_m - y_s^l)^2 + (z_m - z_s^l)^2}$$ \hspace{1cm} (6.8)

The parameter identification is based on choosing the parameter set $l$ that minimizes $E_{RMSE}^l$.
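Put into code, the identification step of Equation (6.8) reduces to an argmin over the Monte Carlo sample. A minimal sketch; the simulated array `Y_sim` is stand-in data, with only the measured values taken from Table 6.5:

```python
import numpy as np

def identify(Y_sim, y_meas):
    """Pick the sample minimizing Eq. (6.8); Y_sim: (n_samples, 3) array
    of simulated [x, y, z] step responses, y_meas: measured counterpart."""
    rmse = np.sqrt(((Y_sim - y_meas) ** 2).sum(axis=1))
    best = int(np.argmin(rmse))
    return best, rmse[best]

y_meas = np.array([1.43, 4.60, 3.43])        # measured values of Table 6.5
rng = np.random.default_rng(3)
Y_sim = y_meas + rng.normal(scale=0.5, size=(100_000, 3))  # stand-in data
best, err = identify(Y_sim, y_meas)
```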
Table 6.4 shows the identified values of the HTC for the convective boundary conditions that have the largest impact on the thermal displacements according to the sensitivity analysis of Figure 6.6. Table 6.5 provides the simulated and measured difference between the initial and the final state for the parameter set that minimizes the RMSE. It can be observed that the difference between the simulated and measured values remains below 0.06 µm.

**Table 6.4:** Identified values of the HTC for the most sensitive convective boundary conditions

| Convective boundary condition | HTC [$\frac{W}{m^2K}$] |
|-------------------------------|------------------------|
| Bed MR | 8.1 |
| Bed Top | 8.4 |
| Bed Bottom | 5.2 |
| X | 3.4 |
| Y | 2.2 |
| Z | 1.1 |

**Table 6.5:** The simulated ($x_s$) and measured ($x_m$) difference between start and end point of the transient

| Direction | Measured [µm] | Simulated [µm] | Difference [µm] |
|-----------|---------------|----------------|-----------------|
| X | 1.43 | 1.45 | 0.02 |
| Y | 4.60 | 4.65 | 0.05 |
| Z | 3.43 | 3.37 | 0.06 |

The identification of the HTC focuses on the difference between the initial and the final state of the environmental step response. However, the thermal transient behavior is the most relevant in order to validate the thermo-mechanical model. The transient response shows the time constant of the machine tool and determines its stability under variations of the environmental temperature. Figures 6.7 and 6.8 compare the simulated and measured transient response over 24 h for the load case illustrated in Figure 6.3. The thermo-mechanical model with the identified values of the HTC succeeds in representing the measured transient linear and angular displacements. Figures 6.7 and 6.8 show that the model is validated, capturing the interactions between the machine tool and the environment. The main discrepancies between the simulated and measured thermal response are in the transient response in X- and Z-direction. One reason for these differences is an insufficient level of detail in the discretization of the convective boundary conditions. The convective heat exchange of many structural parts is modeled with a homogeneous value of the HTC. The discrepancies in the transient response between the model and the measured data can also be attributed to measurement uncertainty. The measured thermal displacements have an associated uncertainty, especially in B-direction. Moreover, the temperature sensors used to measure the temperature of the environment and the cooling fluid, shown in Figure 6.3, have an intrinsic uncertainty.
These temperature signals are used as inputs for the thermo-mechanical model and thus contribute to the overall uncertainty of the model. As illustrated in Figure 6.7, the linear displacements remain below 5 µm for variations of the environmental temperature of over 4 °C at the current position of the linear axes. Therefore, the cooling systems succeed in damping the effects of the variations in the environment. The machine tool under investigation has a small sensitivity to changes in the surrounding temperature. This makes the validation process more challenging, as the model needs to match small values of thermally induced displacements. Figure 6.7 shows that the deviations in Y- and Z-direction are dominant for the considered load case. A part of the displacements in Z- and Y-direction is compensated by the direct measurement system. Section 6.1.3 investigates in further detail the thermo-mechanical behavior of the machine tool using the results provided by the validated model. The angular displacements in B-direction depicted in Figure 6.8 are another interesting aspect to discuss. The variation of the environmental temperature over 4 °C results in an angular deviation between TCP and workpiece below 11 µm/m. The lack of accessibility of the coolant to certain structural parts leads to an inhomogeneous temperature field in the tool-sided axes. Therefore, the changes in the environmental temperature result in a variation of the squareness of the Z-axis to the Y-axis. These effects lead to the angular errors at the TCP relative to the workpiece in B-direction. The measured angular displacements in A-direction are too small to be captured with the measurement setup. Thus, the angular errors in A-direction are not considered in the validation of the thermo-mechanical model. The displacements in A-direction are also excluded from the evaluation of the performance of the machine tool in Section 6.1.3.

*Figure 6.7: Comparison of the measured (full line) and simulated (dashed) transient response in X-, Y- and Z-direction for the load case of Figure 6.3*

In order to proceed with the validation of the thermo-mechanical model, the measured and simulated thermal responses to another load case are evaluated. Figure 6.9 shows the considered load case, representing a step increase of over 4 °C of the environmental temperature. Similarly to the previous case, the spatial vertical stratification of the temperature of the room can be observed with 3 different sensors, namely ground, bottom, and top. The machine tool enclosure damps the variation of the air temperature, as depicted in Figure 6.9. The cooling control temperature is also set to a constant 22 °C over the 24 h.

*Figure 6.8: Comparison of the measured (full line) and simulated (dashed) transient response in B-direction for the load case of Figure 6.3*

Figures 6.10 and 6.11 compare the measured and simulated linear and angular displacements originated by the load case of Figure 6.9. The thermo-mechanical model uses the same set of identified HTC as for the previous load case. The linear displacements in Y- and Z-direction dominate the transient response also for this load case. The good agreement in trends and absolute displacements between prediction and observation confirms that the developed thermal model captures the main interactions between the machine tool and the surrounding environment. Figure 6.11 depicts the measured and the simulated angular displacements in B-direction.
Similarly to the linear displacements, the model succeeds in matching the trends and absolute values of the measured thermo-mechanical behavior. The angular displacements in B-direction also play an important role in the transient response to the step variation of the environmental temperature.

6.1.3 Evaluation of the thermo-mechanical response to environmental influences

The previous section presents the validation of the thermo-mechanical model, comparing the simulated and the measured thermal response. This section introduces analysis tools that enable a comprehensive investigation of the thermal behavior of machine tools. These analyses facilitate the evaluation of the thermo-mechanical behavior of the 5-axis machine tool using the thermal model validated in Section 6.1.2. In order to understand the thermal design of the machine tool under investigation, it is interesting to analyze the effect of the thermal inputs on the outputs of the model. Equations (3.14) and (3.68) state the FRF of a thermo-mechanical model, which describes in the frequency domain how changes in the thermal inputs affect the system outputs. Figure 6.12 shows the frequency response of the TCP displacements relative to the workpiece originated by changes in the environmental temperatures. The input frequency ranges from $10^{-6}$ to $10^{-3}$ rad/s in this analysis. The FRF can be interpreted as the thermal response of the machine tool to a homogeneous variation of the environmental temperature. The frequency response is a useful analysis tool to determine the time constants of the different outputs with respect to the inputs. The vertical line in Figure 6.12 indicates the frequency associated with a periodicity of 24 h. This frequency is of special interest, as it is the periodicity associated with the day-night cycle in many industrial workshops.

*Figure 6.9: Load case for validation: raise of the environmental temperature*

*Figure 6.10: Comparison of the measured (full line) and simulated (dashed) transient response in X-, Y- and Z-direction for the load case of Figure 6.9*

Figure 6.12 shows that the displacements in X- and Z-direction react quasi-statically to changes of the environment with periodicities of 24 h. However, the gain of the displacements in Y-direction is reduced for inputs with a frequency associated with the 24 h periodicity. The fact that the displacements in Y-direction do not follow the inputs quasi-statically results in a delay between the input signal and the output signal, which needs to be accounted for in thermal error compensation strategies.

*Figure 6.11: Comparison of the measured (full line) and simulated (dashed) transient response in B-direction for the load case of Figure 6.9*

*Figure 6.12: FRF of the 5-axis machine tool. Input: homogeneous environment temperature. Output: TCP displacements relative to the workpiece in X-, Y-, and Z-direction*

Additionally, the frequency response can be used to evaluate the thermo-mechanical behavior of the machine tool under changes of the cooling temperature. Figure 6.13 illustrates the frequency response, where the input of the FRF is the cooling temperature of all the cooling channels and the outputs are the TCP displacements in X-, Y-, and Z-direction. The frequency range considered for this analysis is $10^{-6}$ to $10^{-3}$ rad/s.
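Numerically, evaluating such a thermal FRF amounts to sampling the transfer function on a logarithmic frequency grid. A minimal sketch for a reduced model in standard state-space form $\dot{x} = Ax + Bu$, $y = Cx$ (matrix names are placeholders for the reduced system):

```python
import numpy as np

def thermal_frf(A, B, C, omegas):
    """Sample G(jw) = C (jw I - A)^-1 B over the frequencies in omegas."""
    n = A.shape[0]
    return np.array([C @ np.linalg.solve(1j * w * np.eye(n) - A, B)
                     for w in omegas])

omegas = np.logspace(-6, -3, 200)   # rad/s, the range used in Figure 6.12
# gains = np.abs(thermal_frf(A_red, B_red, C_mech, omegas))
```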
The frequency response shows that the TCP displacements are sensitive to changes in the temperature of the cooling fluid, in particular in Z-direction. Therefore, it is crucial that the temperature control of the cooling fluid ensures a constant temperature over time in order to reduce the magnitude of the thermally induced TCP displacements relative to the workpiece. Figure 6.13 shows that the FRF of the displacements in Y-direction has a maximum value for frequencies close to $1.5 \cdot 10^{-4}$ rad/s, corresponding to periodicities around 12 h. The maximum value of the FRF is also referred to in the literature as the thermal resonance frequency [79]. The reason for this behavior can be further investigated. For the Y-direction, the displacements of the tool-sided axes and the workpiece-sided axes occur in opposite directions. At low frequencies, both massive parts (e.g. the bed) and lighter parts (e.g. the table) follow the changes in the inputs. The displacements of the tool-sided axes and the workpiece-sided axes compensate each other, as they have opposite signs. This results in smaller values of the displacements. However, at higher frequencies the more massive parts react slower to changes in the inputs, which is not the case for lighter parts. Therefore, the magnitude of the thermally induced deformations of the more massive parts is reduced, while the deformation of the lighter parts remains the same. This effect results in an increase of the thermal displacements at higher frequencies, as shown in Figure 6.13 for the Y-direction. The thermal resonance frequency is excited by any input signal with frequency content at $1.5 \cdot 10^{-4}$ rad/s, corresponding to a periodicity close to 12 h. This further justifies the need to ensure a constant temperature of the supplied coolant in order to avoid these effects.

*Figure 6.13: FRF of the 5-axis machine tool. Input: cooling fluid temperature. Output: TCP displacements relative to the workpiece in X-, Y-, and Z-direction*

The thermal FRF enables the visualization of the effect of the inputs on the outputs over the frequency range of interest. The frequency responses illustrated in Figures 6.12 and 6.13 combine the effects of the different boundary conditions into one input. However, the environmental temperatures affecting the different components of the machine tool are not the same in the case of an environmental temperature variation. Therefore, other analyses are required that allow the separation of the effects of the single boundary conditions on the model outputs. This motivates the definition of the thermal compliance matrix (TCM). The thermal transfer function is defined in Equation (3.14), where the output matrix $C_{th}$ represents the average temperature at certain points of the surface of the structure. Each of the columns of the input matrix $B$ represents a different boundary condition to be analyzed. In principle, the TCM can be calculated for any excitation frequency $s$ of interest. However, in this section the static limit ($s = 0$) is chosen in order to represent the steady state behavior. For the selected excitation frequency, the TCM, $M_{th}$, is then defined as

$$M_{th} = -C_{th}A^{-1}B$$ \hspace{1cm} (6.9)

The TCM can be extended in order to account for the mechanical response. Considering that the mechanical output matrix is $C_{mech}$, the thermo-mechanical TCM $M_{mech}$ is then defined as

$$M_{mech} = -C_{mech}A^{-1}B$$ \hspace{1cm} (6.10)
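Both variants of Equations (6.9) and (6.10) are one steady-state solve per column of $B$; a short sketch for a reduced system (matrix names are placeholders):

```python
import numpy as np

def compliance_matrix(A, B, C_out):
    """TCM of Eq. (6.9)/(6.10): M = -C_out A^-1 B; one solve per column."""
    return -C_out @ np.linalg.solve(A, B)

# M_th   = compliance_matrix(A_red, B_red, C_th)     # temperatures per K
# M_mech = compliance_matrix(A_red, B_red, C_mech)   # displacements, um/K
```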
The main advantage of the TCM is that it enables the visualization of the effects of the different single boundary conditions on the outputs of the system. The TCM is different from the sensitivity analysis introduced in Section 6.1.2. The sensitivity analysis investigates how changes in the model parameters affect the outputs. The sensitivity is evaluated for a given, deterministic value of the inputs of the model, i.e. environmental temperature fluctuations or heat fluxes. The TCM determines the weight of the inputs on the value of the outputs. In principle, for every set of model parameters the effect of each input might vary, creating a new, different TCM. Figures 6.14 and 6.15 show the TCM for the boundary conditions of the environment and the cooling circuits respectively. The outputs of the TCM are structural temperatures located on different parts of the structure of the machine tool and the linear displacements of the TCP relative to the workpiece. The convective boundary conditions affecting the bed are top, bottom, MR, ground, and EC. The names of the other inputs refer to the axes or components they are affecting. In Figures 6.14 and 6.15, the colors visually highlight which inputs have the largest effects on the displacements. Figure 6.14 shows that the boundary conditions related to the bed (top, bottom, MR, and ground) and the linear axes have the largest effects on the displacements. The boundary conditions of the rotary axes and the measurement system (mandrel and sensor holder) do not have a great impact on the resulting TCP displacements, even if they do affect the values of the structural temperatures. Figure 6.14 also illustrates an interesting behavior of the machine tool in Z-direction. On one hand, a variation of the environmental temperature at the Z-axis results in a negative Z-deviation, i.e. the tool and the workpiece get closer. On the other hand, a variation of the temperature of the environment surrounding the bed results in a deviation in positive Z-direction. This effect reduces the sensitivity of the machine tool design to changes in the environment.

| Output | Bed Top | Bed Bottom | MR | Ground | EC | X | Y | Z | B | C |
|--------|---------|------------|----|--------|----|---|---|---|---|---|
| X [µm / K] | -0.727 | -0.731 | 0.4 | 0.354 | -0.054 | -0.09 | 1.269 | 0.126 | -0.002 | 0 |
| Y [µm / K] | 0.812 | -0.834 | -2.658 | 0.109 | 0.009 | 0.667 | -0.27 | 0.076 | 0.187 | 0.077 |
| Z [µm / K] | -1.265 | 0.038 | 3.077 | 0.881 | 0.131 | -0.724 | -0.424 | -0.854 | 0.014 | -0.086 |

*Figure 6.14: The thermal compliance matrix (TCM). The inputs are the environmental temperatures of the different convective boundary conditions. The outputs are the thermally induced displacements in X-, Y-, and Z-direction*

Figure 6.15 depicts the TCM for the thermal inputs related to the cooling systems of the structures. The boundary condition Channels refers to the cooling circuit inside the machine tool bed. The other boundary conditions refer to the cooling circuits of the torque, linear, and spindle motors. The bed cooling channels provide a large amount of fluid that reaches most of the machine tool structure. Therefore, the TCP displacements as well as the structural temperatures are highly sensitive to variations of the temperature of the cooling fluid, as discussed also in the context of Figure 6.13.
The TCM also shows that the axial elongation of the spindle is greatly affected by variations of the temperature of the cooling fluid.

| Output | Bed cooling | X Motor | Y Motor | Z Motor | B Motor | C Motor | Spindle |
|--------|-------------|---------|---------|---------|---------|---------|---------|
| X [µm / K] | -2.551 | -1.449 | 1.115 | 0.706 | -0.263 | -0.012 | 0.959 |
| Y [µm / K] | -2.44 | 0.87 | -0.753 | -0.17 | 2.857 | 1.499 | 0.065 |
| Z [µm / K] | 9.934 | 0.14 | 0.63 | -0.387 | -1.571 | -0.364 | -5.851 |

*Figure 6.15: The thermal compliance matrix (TCM). The inputs are the temperatures of the cooling fluid of the different cooling circuits. The outputs are the thermally induced displacements in X-, Y-, and Z-direction*

The accuracy of the machine is defined by the thermal stability of the machine tool in the whole working volume. However, the TCM and the frequency response analyses introduced so far show the relationship between the inputs of the system (environment and cooling temperature) and the outputs of the system restricted to one point of the working space. Therefore, new analyses that evaluate the spatial thermal behavior of the machine tool are required. For the machine tool under investigation, the variation over time of the positioning error $E_{ZZ}$ is investigated. In order to perform this analysis, the load case of Figure 6.3 is considered. In order to evaluate $E_{ZZ}$, the Z-deviation is evaluated every 50 mm along the Z-axis. One position along the stroke of the axis is taken as the reference point to evaluate $E_{ZZ}$, according to ISO 230-2:2014 [65]. In addition, the interest lies in the variation of the positioning error over time. Therefore, the initial positioning error is considered as the reference. Figure 6.16 illustrates the simulated values of the variation of $E_{ZZ}$ over 24 h along the stroke of the Z-axis. The outputs of the model are evaluated every 1 h, from the initial state in red to the final state in blue. Figure 6.16 shows that the response of the Z-axis to the variation of the environmental temperature is a reduction of the length of the axis, i.e. the axis becomes shorter. During the step response, the spindle is cooled to a constant temperature. This results in a smaller effect of the temperature drop on the structural deformation of the spindle. However, the direct measurement system is exposed to the environment, directly affecting the variation of the positioning error of the Z-axis.

*Figure 6.16: Variation of the positioning error $E_{ZZ}$ over the stroke of the Z-axis under the load case of Figure 6.3. The $E_{ZZ}$ is shown every 1 h from the initial time step (red) to the final (blue) over 24 h*

The analysis tools presented in this section allow the designer to understand the thermo-mechanical behavior of the machine tool and improve its performance. The validation process presented in Section 6.1.2 uses a step response to determine the uncertain model parameters. However, the load case of Figure 6.3 does not represent a typical temperature profile of an industrial workshop. Therefore, another interesting analysis is the evaluation of the thermal response of the machine tool under an environmental temperature fluctuation representing a room without temperature control. Figure 6.17a illustrates a temperature profile with a peak-to-peak variation of 3.33 °C over 60 h. The fluctuation of the environmental temperature is applied homogeneously as a convective boundary condition to the whole structure.
Figure 6.17b depicts the associated thermal displacements in X-, Y-, and Z-direction. For the load case under consideration, the thermally induced displacements remain below 6 µm over the 60 h of simulation. It is important to note that there is a time delay between the environmental temperature signal and the displacements at the TCP relative to the workpiece. This is clearly visible in the displacements in Y-direction, as already discussed for the frequency response of Figure 6.12. Determining the delay between the measured temperature and the displacements is important in order to develop robust thermal error compensation strategies.

*Figure 6.17: Thermally induced TCP errors for a homogeneous environmental temperature fluctuation over 60 h*

6.2 Thermal error model: internal heat sources

This section presents a model of the thermal response of a 5-axis machine tool to internal heat sources. Section 6.2.1 introduces the investigated machine tool and describes in detail the thermo-mechanical model. Section 6.2.2 presents the validation process, describing the thermal error measurement setup and the thermo-energetic flows in the machine tool. Section 6.2.3 evaluates the thermal design of the machine using the validated thermo-mechanical model.

6.2.1 Description of the thermo-mechanical model

The machine tool under investigation in this section is a 5-axis milling machine with a swiveling axis, a rotary axis, and a horizontal spindle. The kinematic configuration according to ISO 10791-1:2015 [62], adapted for machining centers with vertical spindles, is

\[ V \left[ w C2' B' b [Y1 Y2] X [Z1 Z2] (C1) t \right] \]

The working volume is 730x510x510 mm, with a table of 500 mm diameter. Figure 6.18 shows the geometry and the FE mesh of the machine tool. The linear axes are arranged in a box-in-box design, ensuring the thermal symmetry of the machine tool.

*Figure 6.18: Model of the Mori Seiki NMV 5000 DCG in MORe*

The machine tool enables turning operations by means of the rotation of the C-axis at up to 1200 rpm. The rotation of the C-axis gives rise to thermally induced displacements, which are investigated in this section with a physical model. The physical model of the machine tool is created in MORe, the software platform presented in Chapter 5. According to the workflow of Figure 5.1, the first step for creating the thermo-mechanical model is the FE discretization of the geometry in a commercial FE software. Table 6.6 summarizes the number of nodes and elements required for the thermo-mechanical model of Figure 6.18. The geometrical complexity of the structural parts of the bed, X-, and Y-axis leads to an FE mesh with a large number of nodes. The second step of the model setup is the definition of the material parameters of each of the components. Table 6.7 details the material of each of the structural parts. Table 6.8 summarizes the material properties required to define the thermo-mechanical model, namely the density \( \rho \), thermal conductivity \( \lambda \), heat capacity \( c_p \), thermal expansion coefficient \( \alpha \), Young's modulus \( E \), and Poisson ratio \( \nu \).
The next step required to set up the model is the definition of the thermal and mechanical connections between the different machine tool components.

Table 6.6: FE mesh

| Component | $N^o$ of nodes | $N^o$ of elements |
|--------------------|----------------|-------------------|
| Bed | 107217 | 55453 |
| X-axis | 115954 | 64480 |
| X-screw | 11434 | 1803 |
| Y-axis | 100486 | 57203 |
| Y-screws | 29110 | 4582 |
| Z-axis & Spindle | 53597 | 24822 |
| Z-screws | 21707 | 10927 |
| B-axis | 62144 | 34707 |
| C-axis | 15656 | 8353 |
| Total | 508462 | 253275 |

Table 6.7: Material assignment

| Material | Components |
|----------------|-------------------------------------------------|
| Steel | Z-axis, Spindle, Ballscrews, B-axis, and C-axis |
| Cast iron | Bed, X-, and Y-Axis |

Table 6.8: Material properties

| Material | $\rho \left[ \frac{\text{kg}}{\text{m}^3} \right]$ | $\lambda \left[ \frac{\text{W}}{\text{m K}} \right]$ | $c_p \left[ \frac{\text{J}}{\text{kg K}} \right]$ | $\alpha \left[ \frac{1}{\text{K}} \right]$ | $E \left[ \text{GPa} \right]$ | $\nu \left[ - \right]$ |
|-----------------|-----------------------------------------------------|-------------------------------------------------------|----------------------------------------------------|---------------------------------------------|--------------------------------|------------------------|
| Cast iron | 7200 | 52 | 447 | $11 \cdot 10^{-6}$ | 110 | 0.28 |
| Structural steel| 7850 | 60.5 | 434 | $12 \cdot 10^{-6}$ | 200 | 0.3 |

The TCC defines the heat flowing between two parts in thermal contact, e.g. the inner and outer race of a bearing. For the heat load under investigation in this section, the bearing of the C-axis is a significant heat source determining the thermal response of the system. Therefore, it is important to estimate the parameter defining the heat transfer through the bearing of the C-axis. Weidermann [125] proposed an empirical correlation for the TCC of bearings based on the geometry and the rotational speed. The TCC $\lambda_{TCC}$ in $\frac{\text{W}}{\text{K}}$ can be expressed as

$$\lambda_{TCC} = Z \frac{d_b^2}{2400} \sqrt{1.4 + 2 \ln(v_p) - 2 \ln(d_b)}$$ \hspace{1cm} (6.11)

where $Z$ is the number of rolling elements, $d_b$ is the diameter of the rolling elements in mm, and $v_p$ is defined as

\[ v_p = \frac{d + d_p}{19099} n_r \] \hspace{1cm} (6.12)

where \( d \) is the mean diameter of the bearing in mm, and \( n_r \) is the rotational speed of the bearing in rpm. According to Weidermann [125], a value of \( v_p = 0.1 \) is considered for stationary bearings. The remaining thermal contacts are estimated with values from the literature. The convective boundary conditions are relevant in order to describe the thermo-mechanical behavior of the system. On one hand, the structural parts outside the working space and the environment exchange heat by means of natural convection. The parameters of the HTC for natural convection in open spaces can be calculated with empirical correlations [44]. On the other hand, the rotation of the C-axis modifies the air flow inside the machine tool enclosure, enhancing the circulation of the air inside the working space. The mechanical enhancement of the airflow results in a more efficient heat exchange between the air inside the enclosure and the structure, leading to forced convection. Cardone et al. [29] proposed a correlation for the Nusselt number (\( Nu \)) of a rotating disk in turbulent flow, valid for Reynolds (\( Re \)) numbers above 320,000:

\[ Nu = 0.0163 \cdot Re^{0.8} \] \hspace{1cm} (6.13)

The Reynolds number is defined as

\[ Re = \frac{r^2 \cdot \omega}{\nu_0} \] \hspace{1cm} (6.14)

where \( \omega \) is the rotational speed in rad/s, \( r \) is the radius of the disk, and \( \nu_0 \) is the kinematic viscosity of the air. The Nusselt number is defined as

\[ Nu = \frac{h \cdot r}{\lambda} \] \hspace{1cm} (6.15)

where \( h \) is the HTC and \( \lambda \) is the conductivity of the air.
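Equations (6.13)-(6.15) combine into a small helper for the forced-convection HTC on the rotating table. A minimal sketch, assuming standard air properties at room temperature (the viscosity and conductivity defaults are assumptions, not values from the source):

```python
import math

def rotating_disk_htc(r, omega, nu=1.5e-5, lam=0.026):
    """HTC of a rotating disk per Eqs. (6.13)-(6.15).
    r: disk radius [m]; omega: rotational speed [rad/s];
    nu: kinematic viscosity of air [m2/s]; lam: air conductivity [W/mK]."""
    re = r**2 * omega / nu                     # Eq. (6.14)
    if re <= 320_000:
        raise ValueError("correlation only valid for turbulent flow")
    return 0.0163 * re**0.8 * lam / r          # Eqs. (6.13) and (6.15)

# 500 mm diameter table at the maximum speed of 1200 rpm: Re ~ 5.2e5
h = rotating_disk_htc(0.25, 1200 * 2 * math.pi / 60)
```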
The convective boundary conditions are relevant in order to describe the thermo-mechanical behavior of the system. On the one hand, the structural parts outside the working space and the environment exchange heat by means of natural convection. The parameters of the HTC for natural convection in open spaces can be calculated with empirical correlations [44]. On the other hand, the rotation of the C-axis modifies the air flow inside the machine tool enclosure, enhancing the circulation of the air inside the working space. The mechanical enhancement of the airflow results in a more efficient heat exchange between the air inside the enclosure and the structure, leading to forced convection. Cardone et al. [29] proposed a correlation for the Nusselt number (\( Nu \)) of a rotating disk in turbulent flow, valid for Reynolds (\( Re \)) numbers above 320,000:

\[ Nu = 0.0163 \cdot Re^{0.8} \] (6.13)

The Reynolds number is defined as

\[ Re = \frac{r^2 \cdot \omega}{\nu_0} \] (6.14)

where \( \omega \) is the rotational speed in rad/s, \( r \) is the radius of the disk, and \( \nu_0 \) is the kinematic viscosity of the air. The Nusselt number is defined as

\[ Nu = \frac{h \cdot r}{\lambda} \] (6.15)

where \( h \) is the HTC and \( \lambda \) is the conductivity of the air.
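Equations (6.13) to (6.15) combine into a direct evaluation of the HTC on the rotating table; a minimal sketch, with the air properties (kinematic viscosity and conductivity at roughly 20 °C) being assumed values:

```python
import math

def rotating_disk_htc(n_rpm, r_disk, nu_air=1.5e-5, lambda_air=0.026):
    """HTC on a rotating disk after Cardone et al., Equations (6.13)-(6.15).

    r_disk in m; nu_air and lambda_air are assumed air properties.
    """
    omega = n_rpm * 2.0 * math.pi / 60.0   # rotational speed in rad/s
    re = r_disk**2 * omega / nu_air        # Reynolds number, Eq. (6.14)
    if re < 320000:
        raise ValueError("correlation valid for turbulent flow, Re > 320,000")
    nusselt = 0.0163 * re**0.8             # Nusselt number, Eq. (6.13)
    return nusselt * lambda_air / r_disk   # HTC in W/(m^2 K), from Eq. (6.15)

# Example: the 500 mm table (r = 0.25 m) rotating at 1200 rpm.
print(f"h = {rotating_disk_htc(1200, 0.25):.1f} W/(m^2 K)")
```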
For the definition of the mechanical model, the stiffness values of the machine elements are required. These parameters are obtained from the data sheets of the suppliers. If the data are not available, the stiffness values are estimated from similar machine elements. Appendix C shows the stiffness values of the machine elements. The machine tool has magnetic linear scales, keeping the linear axes in the nominal position. For the developed thermo-mechanical model, the measurement system of the linear axes is modeled with a high stiffness in the direction of motion in order to keep the axis in the desired position. If more information about the measurement system and the control is available, the model can be extended to include the whole mechatronic system. After defining the thermo-mechanical parameters describing the contacts between axes, a reduced system is created for each of the components. The distributed interfaces, as explained in detail in Section 4.3.1, represent the areas where the convective boundary conditions are applied. In order to enable the variation of the HTC parameters after reduction, the bilinearization method in combination with the KMS reduction is introduced in Section 4.3.2. The numerical parameters for the reduction of the thermal systems are:

- Expansion point $s_0 = 10^{-8}$ rad/s
- Maximum considered eigenfrequency $\omega_m = 0.01$ rad/s
- Maximum error between reduced and original system $\epsilon = 0.05$
- Number of distributed interfaces with bilinear reduction $n_{dist} = 4$
- Number of moments for the bilinear reduction $m_d = 2$

The combination of the KMS and the bilinearization approach transforms the original thermal system of 508,462 DOF into a reduced system of 392 DOF. In order to evaluate the mechanical response, a reduced thermo-mechanically coupled system also needs to be created according to the methods presented in Section 3.4. For the thermo-mechanical coupled system, an expansion point at $s_0 = 30$ rad/s is chosen in order to capture the static mechanical response. The original mechanical model of 1,525,386 DOF is reduced to 1,164 DOF. The reduced model requires 113 s for the transient thermal simulation of the rotation of the C-axis over 3 h. Thus, reduced models facilitate the validation process.

### 6.2.2 Validation of the thermo-mechanical model

#### Thermal error measurement

In order to validate the thermo-mechanical model, the thermally induced displacements originating during the rotation of the C-axis are measured. The R-Test is an indirect volumetric measurement technique developed by Weikert [126] for the geometric calibration of 5-axis machine tools. This measurement technique can also be applied to thermal error measurements of rotary axes of 5-axis machine tools. Ibaraki and Hong [60] presented a measurement cycle for the evaluation of the error motions and the position and orientation errors of rotary axes. Gebhardt et al. [43] developed the concept of the discrete R-Test. The discrete R-Test evaluates the linear displacements between a sensor nest located on the spindle and a precision sphere located on the table at four different indexing positions of the rotary axis. With the information of the relative displacements between the TCP and the sphere at four different positions of the C-axis, the variation over time of the position and orientation errors of a rotary table can be evaluated, i.e. $E_{X0C}$, $E_{Y0C}$, $E_{Z0C}$, $E_{A0C}$, and $E_{B0C}$. Additionally, the radial growth $E_{R0T}$ of the table (T) as a functional surface can be evaluated. Blaser et al. [22] adapted the discrete R-Test measurement to an on-machine measurement system. Instead of a sensor nest with displacement sensors, the measurement setup uses a 2.5D touch trigger probe. The touch probe triggers when contacting a precision sphere located on the table, and the linear glass scales determine the relative position between the TCP and the sphere. Figure 6.19a shows the touch probe and the sphere used in this measurement setup. The reference sphere, which is made out of steel, is mounted on an aluminum support plate. The position of the center of the sphere is evaluated by measuring four points at the equator of the sphere, providing the displacements in X- and Y-direction. The displacements in Z-direction are evaluated by measuring a single point on the mounting plate. This measurement cycle is repeated four times at different indexing positions of the C-axis, as shown in Figure 6.19b. The information captured at the four different indexing positions of the rotary table enables the evaluation of the position and orientation errors of the C-axis. This work uses the measurement setup proposed by Blaser et al. [22]. The main advantage of this measurement setup is that it is an on-machine measurement system. In comparison with the original R-Test setup, it does not require the calibration of the linear displacement sensors. However, the measurement resolution with the touch trigger probe depends on the resolution of the glass scales of the linear axes. For the investigated machine tool, the resolution of the measurement system of the linear axes is 1 µm. The time required for a measurement cycle is 95 s, and the cycle is carried out every 5 min over the whole experiment time.

**Thermo-energetic measurement**

For the validation of the thermo-mechanical model, a quantification of the heat losses at the different machine tool elements is required. Mohammadi et al. [84] developed a thermo-energetic model of the investigated machine tool. The authors presented a model to predict the different energy flows, i.e. mechanical, electrical, hydraulic, and thermal, for different operational points. The model was developed in EMod, a simulation framework developed by Züst [133] for thermo-energetic models. The thermo-energetic model of Mohammadi et al. investigated the energy flows during the manufacturing of a test piece, where the linear axes and the spindle were involved. The energy consumption due to the rotation of the rotary table was not considered in that study. Figure 6.20 illustrates schematically the energy flow during the rotation of the C-axis. The axis unit receives electrical power ($P_{ax}$), which is supplied to the amplifiers. The amplifiers rectify the supplied AC signal and provide a pulse width modulation (PWM) signal to the torque motor ($P_m$). Part of the input power $P_{ax}$ is lost in the amplifiers in the form of thermal energy ($\dot{Q}_{amp}$).
The amplifiers are structurally disconnected from the structural parts, and the heat is removed from the EC by the ventilation system. Thus, the heat dissipated by the amplifiers $\dot{Q}_{amp}$ is not considered in the thermo-mechanical model of the C-axis. The efficiency coefficient $\eta_{amp}$ defines the power supplied to the motor as

$$P_m = \eta_{amp} \cdot P_{ax} \quad (6.16)$$

Considering the energy flow illustrated in Figure 6.20, the electrical power supplied to the motor $P_m$ can be decomposed as

$$P_m = P_{mech} + \dot{Q}_{ag} + \dot{Q}_{rt} + \dot{Q}_{st} + \dot{Q}_b \quad (6.17)$$

where $\dot{Q}_{st}$ are the losses at the stator, $\dot{Q}_{rt}$ the losses at the rotor, $\dot{Q}_{ag}$ the losses at the air gap (AG), $\dot{Q}_b$ the losses at the bearing, and $P_{mech}$ the mechanical power required to rotate the axis. This energy balance is valid for rotary axes with direct transmission, as explained by Züst [133]. The mechanical power $P_{mech}$ can be expressed in terms of the required torque $M_{mech}$ and the rotational speed $\omega$ as $P_{mech} = \omega \cdot M_{mech}$. The mechanical torque is

$$M_{mech} = \dot{\omega} \cdot \Theta_{ax} \quad (6.18)$$

where $\Theta_{ax}$ is the inertia of the moving parts. For a constant rotational speed, the required mechanical power is zero, i.e. $P_{mech} = 0$. The losses at the AG can be expressed according to Saari [105] as a function of the shear stress $\tau_{ag}$ at the AG as

$$\dot{Q}_{ag} = \pi \cdot \tau_{ag} \cdot \omega \cdot D_{ro} \cdot l_{ro} \quad (6.19)$$

Züst [133] developed a meta model to determine the shear stress $\tau_{ag}$ as a function of the rotational speed and the Reynolds number. For the dimensions of the motor of the axis under investigation and a maximal rotational speed of 1200 rpm, the losses at the AG are 1.3 W. Therefore, the heat losses at the AG are considered negligible in comparison with the other heat inputs to the system, i.e. $\dot{Q}_{ag} \approx 0$. The next term of Equation (6.17) is the heat loss of the rotor $\dot{Q}_{rt}$. The axis depicted in Figure 6.20 uses a torque motor, which is a type of motor with windings exclusively in the stator. Thus, heat losses do not occur in the rotor, i.e. $\dot{Q}_{rt} = 0$, and only the losses at the stator $\dot{Q}_{st}$ need to be considered. These considerations simplify Equation (6.17) so that only two terms need to be determined, namely $\dot{Q}_{st}$ and $\dot{Q}_b$. The losses at the stator $\dot{Q}_{st}$ can be calculated in terms of the rotational speed $\omega$ and the motor current $I$, provided several characteristics of the motor are known. Provided the type of bearing, lubrication, preload, and rotational speed are known, the heat losses at the bearings $\dot{Q}_b$ can be estimated, as explained by Harris and Kotzalas [52]. However, neither the characteristics of the torque motor nor the required bearing information are available. Therefore, the bearing losses $\dot{Q}_b$ are estimated considering the efficiency of the torque motor $\eta_{mot}$ as

$$\dot{Q}_b = \eta_{mot} \cdot P_m$$ \hspace{1cm} (6.20)

The heat losses at the stator can then be calculated as

$$\dot{Q}_{st} = (1 - \eta_{mot}) \cdot P_m$$ \hspace{1cm} (6.21)
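Since $P_{mech} = 0$ at constant speed and $\dot{Q}_{ag}$ and $\dot{Q}_{rt}$ vanish, the supplied motor power ends up entirely as stator and bearing heat. A minimal sketch of this split, following Equations (6.16), (6.20), and (6.21), with purely illustrative power and efficiency values:

```python
def heat_losses(p_ax, eta_amp, eta_mot):
    """Split the measured axis power P_ax into the heat loads of the model.

    Eq. (6.16): P_m = eta_amp * P_ax
    Eq. (6.20): Q_b  = eta_mot * P_m      (bearing losses)
    Eq. (6.21): Q_st = (1 - eta_mot) * P_m (stator losses)
    """
    p_m = eta_amp * p_ax
    q_b = eta_mot * p_m
    q_st = (1.0 - eta_mot) * p_m
    return q_st, q_b

# Illustrative values only; the identified efficiencies are discussed below.
q_st, q_b = heat_losses(p_ax=2000.0, eta_amp=0.65, eta_mot=0.3)
print(f"stator: {q_st:.0f} W, bearing: {q_b:.0f} W")
```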
As illustrated in Figure 6.20, an internal cooling system removes part of the heat introduced by the machine elements. A pump located outside the machine tool working space supplies pressurized cooling fluid from a tank. An external unit controls the temperature of the fluid to a reference temperature provided by a sensor located in the machine tool bed. The heat removed by the structural cooling $\dot{Q}_{cool}$ is

$$\dot{Q}_{cool} = c_p \cdot \rho \cdot \dot{V} \cdot (T_{out} - T_{in})$$ \hspace{1cm} (6.22)

where $c_p$ is the heat capacity, $\rho$ is the fluid density, $\dot{V}$ is the volumetric flow, $T_{out}$ is the outlet temperature, and $T_{in}$ is the inlet temperature. Mohammadi et al. [84] measured the volumetric flow, which is 24 $\frac{\text{l}}{\text{min}}$. The heat capacity of the cooling fluid is 1600 $\frac{\text{J}}{\text{kgK}}$ and the density is 850 $\frac{\text{kg}}{\text{m}^3}$. Due to the limited information available in the CAD data, the areas where the cooling is applied can only be estimated. The inlet and outlet coolant temperatures are measured in order to quantify the heat removed by the cooling system of the C-axis.
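The cooling power of Equation (6.22) follows directly from the measured temperature signals; a minimal sketch, with the flow rate interpreted in litres per minute and the temperature difference in the example being an illustrative value:

```python
def cooling_power(t_out, t_in, v_dot_l_min=24.0, cp=1600.0, rho=850.0):
    """Heat removed by the structural cooling, Equation (6.22).

    t_out, t_in: measured outlet/inlet temperatures in degrees Celsius,
    v_dot_l_min: volumetric flow in l/min, cp in J/(kg K), rho in kg/m^3.
    """
    v_dot = v_dot_l_min / 1000.0 / 60.0        # volumetric flow in m^3/s
    return cp * rho * v_dot * (t_out - t_in)   # removed heat in W

# Illustrative 0.5 K temperature rise across the C-axis cooling circuit.
print(f"Q_cool = {cooling_power(t_out=20.5, t_in=20.0):.0f} W")
```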
According to Equations (6.16), (6.20), and (6.21), the quantification of the heat losses at the machine elements requires the evaluation of the power $P_{ax}$ supplied to the axis. The energy demand of the axis can be measured according to ISO 14955-2:2018 [63]. Gontarz et al. [46] presented a multichannel measurement system to evaluate the energy consumption of machine tools. The power measurement device allows the visualization of the measured power signal for the different components. Figure 6.21 shows the energy demand of the machine tool for different rotational speeds of the C-axis. The electrical power is measured before the amplifiers, separately for the C-axis and for the unit supplying power to the B-axis, linear axes, and spindle. The power is measured after a constant rotational speed is achieved. Therefore, the power peak associated with the acceleration of the rotary table is not shown in Figure 6.21. The electrical power demand of the B-axis, linear axes, and spindle is around 750 W and stays constant for the different values of the rotational speed. The power consumption of the C-axis, i.e. $P_{ax}$ in Equation (6.16), ranges from 504 W at standstill to 1977 W at 1200 rpm, which corresponds to the maximal rotational speed.

Figure 6.21: Energy demand of the axes of the machine tool for different values of the rotational speed of the C-axis

**Parameter identification**

Once the thermo-energetic flows of the machine are described, the thermo-mechanical model can be validated by comparing the simulated and the measured thermal response. The investigated thermal load case is the rotation of the C-axis at 1200 rpm over 3 h. Figure 6.22 shows the inlet and outlet temperatures, which were measured during the experiment. According to Equation (6.22), the difference between these two temperatures provides the cooling power. This temperature difference is used as an input for the model. In order to define the boundary conditions, the fluctuations of the air temperature need to be considered. On the one hand, the workshop environmental temperature is an input for the thermo-mechanical model. During the validation measurement, the environmental temperature remains constant. On the other hand, the temperature of the air inside the machine tool enclosure varies over time, as illustrated in Figure 6.22. The heat losses occurring during the rotation of the C-axis are transferred to the air inside the machine tool housing. The temperature of the air inside the working space rises by 3.3 °C over the course of the experiment. The temperature increase stabilizes after 150 min of rotation of the C-axis. The temperature data of Figure 6.22 are filtered in order to avoid numerical oscillations during the transient simulation associated with the limited resolution of the digital temperature sensors. Once the convective boundary conditions are defined, the heat losses of each of the elements need to be quantified. The efficiency of the torque motor $\eta_{mot}$ is assumed to be 0.3, which corresponds to the typical efficiency of torque motors of this type. The efficiency of the amplifier $\eta_{amp}$ is not fully known and therefore needs to be estimated. In order to estimate the value of the efficiency of the amplifier, the thermal response of the machine tool is evaluated for values of $\eta_{amp}$ ranging from 0.6 to 0.9. The simulated response for the different values is compared to the measured thermal response. The best agreement is found for a value of $\eta_{amp} = 0.63$ for the investigated rotational speed of 1200 rpm. Figure 6.23 shows the comparison of the measured and simulated displacements in X-, Y-, and Z-direction during the rotation of the C-axis over 3 h. The directions of the displacements are depicted according to the TCP coordinate system of Figure 6.18. The simulated thermal displacements are sampled every 5 min, matching the sampling interval of the measured thermal displacements. The thermal displacements in X-direction remain constant during the rotation of the C-axis due to the symmetry of the machine tool design. The model shows a good quantitative agreement between the measured and simulated thermal displacements in Y- and Z-direction. The main discrepancies between the thermo-mechanical model and the measured data occur in the first 100 min in Z-direction. The measured response shows a slower time constant than the model. However, the difference between the simulation and the measurement does not exceed 9 µm over the 3 h. Thus, the thermo-mechanical model succeeds in capturing the response of the machine tool during the rotation at 1200 rpm. The thermo-mechanical model validated in Figure 6.23 shows the thermo-mechanical response of the machine tool for a single value of the rotational speed, i.e. 1200 rpm. The validation of the model is extended to two further rotational speeds, namely 900 and 600 rpm. The validation time is prolonged to 6 h in order to ensure that the steady state is reached. The thermo-mechanical model uses the values of the power consumption for the different rotational speeds shown in Figure 6.21. Furthermore, the model uses as input the measured MR temperature and the difference between the inlet and outlet temperature, which are shown in Appendix C. Figure 6.24 illustrates that the developed model captures the transient behavior and the steady state values of the displacements associated with the rotation of the C-axis for two different rotational speeds.

Figure 6.24: Comparison of the measured (full line) and simulated (dashed) transient response in X-, Y- and Z-direction for the rotation of the C-axis at two rotational speeds over 6 h

The comparison between the model results and the measured data of Figures 6.23 and 6.24 is performed at a constant rotational speed, allowing the required time to reach the steady state.
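The identification of $\eta_{amp}$ described above amounts to a one-dimensional parameter sweep; a minimal sketch, where `simulate_tcp_displacement` is a hypothetical wrapper that runs the reduced thermo-mechanical model for a given amplifier efficiency and returns the displacement signal at the measurement instants:

```python
import numpy as np

def identify_eta_amp(measured, simulate_tcp_displacement,
                     candidates=np.arange(0.60, 0.91, 0.01)):
    """Grid search over the amplifier efficiency: run the reduced model for
    each candidate value and keep the one with the smallest RMS deviation
    from the measured TCP displacement signal."""
    rms = [np.sqrt(np.mean((simulate_tcp_displacement(eta) - measured) ** 2))
           for eta in candidates]
    return candidates[int(np.argmin(rms))]
```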
However, the operation of the C-axis during turning operations might combine several rotational speeds over a prolonged time. Therefore, the validation of the model is extended to a random speed profile over 6 h, shown in Figure 6.25. The rotational speeds range from 300 rpm to 1200 rpm, with steps every 1 h. The inputs of the model consider the measured energy consumption of the C-axis shown in Figure 6.21 as well as the measured MR temperature and cooling power, which are shown in Appendix C. Figure 6.26 depicts the comparison between the measured and simulated thermal displacements for the load case of Figure 6.25. Figures 6.23, 6.24, and 6.26 show that the developed thermo-mechanical model is validated and can be used to understand the thermal behavior of the investigated machine tool during the rotation of the C-axis.

Figure 6.25: Speed profile of the C-axis over 6 h

Figure 6.26: Comparison of the measured (full line) and simulated (dashed) transient response in X-, Y- and Z-direction for the rotation of the C-axis over 6 h with the rotational speeds shown in Figure 6.25

6.2.3 Evaluation of the thermo-mechanical response to internal heat sources

The validated model presented in Section 6.2.2 is used to analyze the thermo-mechanical behavior of the machine tool. The FRF describes in frequency domain the effect of the variation of the model inputs on the outputs of the system. Equations (3.14) and (3.68) state the transfer function of a thermo-mechanical model. Figure 6.27 shows the frequency response of the machine tool to internal heat losses. The input of the FRF is the power provided to the motor $P_{motor}$. As explained in Equations (6.20) and (6.21), the input power is divided into two different heat sources, i.e. $\dot{Q}_{st}$ and $\dot{Q}_b$. The outputs of the FRF of Figure 6.27 are the displacements between the TCP and the workpiece in X-, Y-, and Z-direction. The thermal transfer function illustrates that internal heat losses predominantly affect the displacements in Y-direction, as observed also in the transient response of Figure 6.23. The introduction of heat results in a thermal expansion of the workpiece-sided axes. These effects are illustrated in Figure 6.29a, depicting the structural deformation associated with the FRF of Figure 6.27 at low frequencies. The time constant of the system is another piece of information that can be extracted from the FRF. The model predicts that the time constant associated with the displacements in Y-direction is larger than the time constant of the response in Z-direction. The consideration of the different time constants of the thermal response in Y- and Z-direction is of interest when designing a thermal error compensation strategy.

![Figure 6.27: FRF of the machine tool of Figure 6.18. Input: heat losses at the machine elements. Output: TCP displacements relative to the workpiece in X-, Y-, and Z-direction.](image)

Figure 6.28 depicts the thermal FRF of the investigated machine tool for variations of the temperature inside the MR. The outputs of the FRF are the displacements between the TCP and the workpiece in X-, Y-, and Z-direction. The FRF shows that the modification of the temperature inside the working space results in a thermally induced deviation predominantly in Y-direction. Figure 6.29b illustrates the structural deformation of the machine tool associated with the FRF of Figure 6.28 at low frequencies. Figure 6.29b shows that the main effect of the change of the air temperature is a linear thermal expansion of the B- and C-axis, resulting in displacements in Y-direction.
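Such FRFs follow from the reduced system matrices by direct evaluation of the transfer function $G(j\omega) = \mathbf{C}(j\omega \mathbf{E} - \mathbf{A})^{-1}\mathbf{B}$; a minimal numpy sketch for a generic reduced model in descriptor form, where the matrix names are placeholders and not the MORe API, demonstrated on a random stable surrogate system:

```python
import numpy as np

def thermal_frf(A_r, E_r, B_r, C_r, omegas):
    """Frequency response G(jw) = C (jw E - A)^-1 B of a reduced model.

    A_r, E_r: (n x n) reduced system matrices, B_r: (n x m) input matrix,
    C_r: (p x n) output matrix, omegas: frequencies in rad/s.
    """
    G = np.empty((len(omegas), C_r.shape[0], B_r.shape[1]), dtype=complex)
    for k, w in enumerate(omegas):
        G[k] = C_r @ np.linalg.solve(1j * w * E_r - A_r, B_r)
    return G

# Surrogate 4-state system with slow thermal eigenvalues, one heat input
# and three displacement outputs, evaluated over a low-frequency band.
rng = np.random.default_rng(0)
A = -np.diag(rng.uniform(1e-5, 1e-3, 4))
E = np.eye(4)
B = rng.standard_normal((4, 1))
C = rng.standard_normal((3, 4))
magnitude = np.abs(thermal_frf(A, E, B, C, np.logspace(-6, -2, 50)))
```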
The time constant of the deviation in Y-direction associated with the variation of the MR temperature is comparatively smaller than the time constant due to internal heat sources. The difference between the time constants associated with the different inputs also needs to be accounted for in the design of thermal error compensation strategies. The rotation of the C-axis results in a transient structural deformation of the workpiece-sided axes, as illustrated in Figure 6.29. However, the rotation of the C-axis also leads to a thermal response of the tool-sided axes. The variation of the air temperature inside the enclosure affects the part of the Z-axis inside the working space, as depicted in Figure 6.29b. Therefore, the tool-sided axes are accountable for part of the thermally induced displacements. If the TCP displacements are measured relative to the inertial system, i.e. not considering the workpiece as a reference, the thermal displacements in Z-direction are -2.4 µm for a rotational speed of 1200 rpm. This corresponds to 12% of the relative deviation between TCP and workpiece. For the other directions, the contribution of the tool-sided axes to the total thermal displacements is negligible. The fact that part of the thermal displacements originate in the tool-sided axes is of great significance for the design of thermal error compensation strategies. The workpiece-sided displacements in Z-direction at $B = 0^\circ$ result in displacements in both Z- and X-direction for positions of the B-axis other than $0^\circ$. However, the tool-sided displacements in Z-direction are unaffected by the position of the B-axis. Therefore, the possibility to quantify and separate the tool- and workpiece-sided displacements directly benefits the quality of the thermal error compensation.

6.3 Thermal error model: cutting fluid

This section extends the thermo-mechanical model of Section 6.2 in order to account for the influence of the cutting fluid. Section 6.3.1 describes the thermo-mechanical model and Section 6.3.2 presents the validation of the thermo-mechanical model.

6.3.1 Description of the thermo-mechanical model

This section develops a thermo-mechanical model of the 5-axis machine tool of Figure 6.18 considering the influence of the cutting fluid. The thermal and mechanical connections between the parts as well as the convective boundary conditions outside the working space are not modified with respect to Section 6.2.1. The introduction of cutting fluid alters the convective boundary conditions of the structural parts inside the working space. The investigated machine tool supplies cutting fluid to the working space by means of several nozzles located around the spindle. These nozzles can be reoriented freely in order to provide fluid to the desired part of the working space. In principle, the cutting fluid can affect any surface of the B- and C-axis. However, this work is limited to one orientation of the nozzles, namely all the fluid directed onto the C-axis. Figure 6.30 shows the structural parts of the C- and B-axis of the machine tool, depicting in yellow the areas affected by cutting fluid. The remaining external surfaces of the B-axis are exposed to the environment inside the MR.

![Figure 6.30: B- and C-axis of the machine tool of Figure 6.18. Structural parts affected by the cutting fluid are marked in yellow](image)
The introduction of pressurized fluid media into the working space results in a transition from natural convection to forced convection. Empirical correlations define the HTC for forced convection over a flat plate in laminar flow:

\[ Nu = 0.664 \cdot \sqrt{Re} \cdot \sqrt[3]{Pr} \] (6.23)

where \(Re\) is the Reynolds number and \(Pr\) is the Prandtl number, defined as

\[ Pr = \frac{\nu \cdot \rho \cdot c_p}{\lambda} \] (6.24)

where \(c_p\) is the heat capacity, \(\rho\) is the fluid density, \(\nu\) is the kinematic viscosity, and \(\lambda\) is the thermal conductivity. The heat capacity of the cutting fluid is \(1670 \frac{J}{kgK}\), the density is \(860 \frac{kg}{m^3}\), the kinematic viscosity is \(6.8 \cdot 10^{-5} \frac{m^2}{s}\), and the thermal conductivity is \(0.140 \frac{W}{mK}\).
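With these properties, Equations (6.23) and (6.24) yield the HTC applied on the wetted surfaces; a minimal sketch, assuming the usual flat-plate form of the Reynolds number \(Re = u \cdot L / \nu\) (not stated explicitly above) with illustrative velocity and plate length:

```python
import math

def flat_plate_htc(u, L, nu=6.8e-5, rho=860.0, cp=1670.0, lam=0.140):
    """HTC for forced convection on a flat plate, Equations (6.23)-(6.24),
    using the cutting fluid properties given above.

    u: fluid velocity in m/s, L: plate length in m (both illustrative).
    """
    pr = nu * rho * cp / lam                             # Eq. (6.24)
    re = u * L / nu                                      # assumed Re form
    nusselt = 0.664 * math.sqrt(re) * pr ** (1.0 / 3.0)  # Eq. (6.23)
    return nusselt * lam / L                             # HTC in W/(m^2 K)

print(f"h = {flat_plate_htc(u=2.0, L=0.3):.0f} W/(m^2 K)")
```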
### 6.3.2 Validation of the thermo-mechanical model

#### Thermal error measurement

In order to validate the thermo-mechanical model, the thermal displacements are measured with the measurement setup described in Section 6.2.2. The cutting fluid is supplied continuously into the working space over 12 h. This experiment focuses on the isolated effect of the cutting fluid, i.e. it does not include any other heat sources such as the spindle or the rotation of the C-axis. Figure 6.31 depicts the measured temperatures over the whole measurement time. The cutting fluid temperature rises up to 28 °C over the 12 h. An external tank with no temperature control stores the cutting fluid, which is supplied into the working space by a pump. The heat dissipated by the pump warms up the fluid, which reaches a steady state value after 8 h. Figure 6.31 shows that the introduction of cutting fluid also alters the temperature inside the MR. The constant supply of fluid media results in a rise of the MR environmental temperature up to 20 °C. The cooling of the C-axis is active during the experiment, removing part of the heat introduced by the cutting fluid. The inlet and outlet temperatures of the cooling fluid are recorded, as shown in Figure 6.31. The difference between these temperature signals provides the heat removed by the cooling system, according to Equation (6.22).

![Figure 6.31: Measured temperature of the environment inside the MR, inlet of the cooling, outlet of the cooling, and cutting fluid](image)

**Parameter identification**

The thermo-mechanical model considers that the cutting fluid affects the surfaces shown in Figure 6.30. However, the cutting fluid does not flood these surfaces completely during the operation of the machine tool. Therefore, these surfaces are exposed to a combination of environmental temperature and cutting fluid. This effect is not deterministic, as it depends on the arrangement and orientation of the cutting fluid nozzles. Thus, a parameter identification based on the experimental data is required in order to account for the combination of environmental and cutting fluid temperature. The input vector \( \mathbf{u} \) defines the time-varying inputs of the thermo-mechanical model as

\[ \mathbf{u}(t) = \begin{bmatrix} T_{MR}(t), \dot{Q}_{cool}(t), T_{CF_C}(t), T_{CF_B}(t) \end{bmatrix}^T \] (6.25)

where \( T_{MR} \) is the MR temperature, \( \dot{Q}_{cool} \) is the cooling power, \( T_{CF_C} \) is the cutting fluid temperature affecting the C-axis, and \( T_{CF_B} \) is the cutting fluid temperature affecting the B-axis. Two empirical coefficients, \( k_C \) and \( k_B \), are defined in order to account for the combination of environmental and cutting fluid temperature, which affects the structural parts of the B- and C-axis. These empirical coefficients scale the temperature increase of the cutting fluid affecting the C- and B-axis, respectively. The simulated and the measured displacements between the initial and final state over the 12 h are compared in order to perform the parameter identification. Section 6.1.2 introduces Equation (6.5), which defines the thermal displacements between the initial and the final state. The input vector $u$ of Equation (6.5) is the difference between the initial and the final input signals, including the empirical coefficients, as

$$u = [\Delta T_{MR}, \Delta \dot{Q}_{cool}, k_C \Delta T_{CF_C}, k_B \Delta T_{CF_B}]^T$$ \hspace{1cm} (6.26)

The values of $k_C$ and $k_B$ are selected so that the difference between the simulated and measured displacements between the initial and final state is minimized. The parameter identification results in values of the empirical parameters of 2.88 and 1.97 for $k_C$ and $k_B$, respectively. In order to validate the thermo-mechanical model, a transient thermo-mechanical simulation is performed. The simulation uses as input the temperature signals of Figure 6.31 and the identified empirical parameters. The displacements are evaluated every 10 min over the 12 h simulation time. Figure 6.32 illustrates the comparison between the measured and simulated thermal displacements over 12 h. The thermo-mechanical model succeeds in capturing the trends of the thermal displacements as well as the absolute values.

![Figure 6.32: Comparison of the measured (full line) and simulated (dashed) transient response in X-, Y- and Z-direction with cutting fluid continuously supplied over 12 h](image)

The thermo-mechanical model shows that the introduction of cutting fluid results in thermal displacements that are comparable in magnitude to those of other internal heat sources such as the rotation of the C-axis. The main difference with respect to the thermal load case studied in Section 6.2 lies in the time constants. The step response to a rotation of the C-axis requires around 3 h to reach a steady state value, while the introduction of cutting fluid needs up to 8 h for the stabilization of the thermal displacements. This information is relevant for the design of the machine tool as well as for the development of thermal error compensation strategies. Furthermore, the cutting fluid influences almost exclusively the workpiece-sided displacements, i.e. the deformation of the tool-sided structural parts is negligible. Therefore, thermal error compensation strategies need to account for this effect when the machine tool is operating at different positions of the B-axis.

Conclusions and outlook

The focus of this work is the development of methods that enable the efficient simulation of thermo-mechanical models of machine tools. In this dissertation, a full-featured simulation framework, MORe, is developed to accurately and efficiently simulate the thermo-mechanical behavior of machine tools. The review of the literature identifies that model order reduction (MOR) techniques are required in order to simulate efficiently the thermo-mechanical behavior of machine tools. In the reviewed literature, several reduction methods are available for the efficient simulation of general linear time invariant (LTI) systems.
However, these reduction approaches are not specifically suited to the characteristic thermo-mechanical behavior of thermal models of mechatronic systems. Therefore, this work develops a new MOR method to reproduce efficiently the thermal behavior of machine tools, the Krylov Modal Subspace (KMS) method. A decaying amplitude of the response with increasing frequency of excitation characterizes the thermal response of the systems under investigation. The KMS method uses this property to create the reduction basis. The reduced system reproduces the steady state response by including the information of the Krylov subspace basis with an expansion point at a low frequency. Additionally, the reduction basis represents the transient thermal response by including the thermal modes of the system up to a certain frequency of interest. The KMS method computes a reduction basis projecting the original system into a subspace of lower dimension. The reduced system captures the most relevant part of the response in a certain frequency spectrum of interest. However, there is always an error between the original and the reduced system associated with the reduction process. Thus, quantifying this error is required in order to assess the validity of the reduced model. One method to determine the error is the direct comparison of the responses of the reduced and original models. However, both the reduction process and the evaluation of the original system are computationally expensive. Therefore, an estimate of the reduction error is needed in order to select the correct parameters for the reduction. This thesis presents an a priori estimator of the reduction error for the KMS method. It is shown that the error estimator is an upper bound of the actual reduction error for the frequency spectrum of interest. The proposed estimator can be used to select the maximum eigenfrequency to be included in the KMS basis so that the error of the reduction remains below a certain value for the frequency spectrum of interest. The outputs of interest of the thermal error models of machine tools are the thermally induced displacements at different positions in the working space. Therefore, the model needs to evaluate structural deformations associated with inhomogeneous, time-varying temperature distributions. An approach to couple the thermal and the mechanical response of the system is presented. The developed coupling method creates a dedicated reduced mechanical system. On one hand, the mechanical system describes the response to any mechanical input, e.g. preloads or gravity, for any combination of axis positions. On the other hand, the mechanical system provides the response to any temperature distribution computed by the reduced thermal system. The combination of the mechanical and the thermo-mechanical behavior in a single reduced system is one of the main advantages of the developed method. The second main advantage of this method is that it directly couples the reduced thermal states as inputs to the reduced-order mechanical system. It avoids the computationally expensive process of projecting the reduced thermal states back onto the original system. Several physical parameters describe the thermal response of thermo-mechanical models of machine tools. The most relevant parameters are associated with the thermal boundary conditions, such as the thermal contacts or the convective boundary conditions. The parameters describing these boundary conditions might change over time.
However, conventional MOR approaches create reduced systems that are only valid for specific values of the physical parameters. Therefore, this work presents MOR methods that enable the traceability of some physical parameters after the reduction. The concept of thermal interfaces is introduced, which represent surfaces where the thermal boundary conditions are applied. This work distinguishes between two types of thermal interfaces, namely bushing and distributed interfaces. The bushing interfaces can be used to approximate the thermal contact between two different parts. Based on the approximation of bushing interfaces, this thesis presents a method that enables the modification of the position of a thermal contact area after reduction. This method proposes a trigonometric approximation of the thermal contact area by a finite number of harmonic functions. The trigonometric approximation describes the contact area continuously with a finite number of inputs, enabling the computation of the thermal response of the system at different axis positions after the reduction. The parameters describing the convective boundary conditions are also subject to change over time. The distributed interfaces represent the convective heat exchange between the structure and the surrounding fluid media. This thesis introduces a method enabling the modification of the heat transfer coefficient (HTC) associated with the distributed interfaces. The developed method uses the concept of bilinearization, adapting it to the KMS reduction introduced for thermal systems. The main benefit of this MOR approach is that it enables the evaluation of the thermal response for any value of the HTC. The reduced system approximates the original system in the frequency spectrum of interest for any value of the HTC, at the cost of a higher-order reduced model. Another reduction approach is presented for varying convective boundary conditions. This method creates several reduced systems that are each valid for a single HTC and enables the direct interpolation between the systems. This reduction method can be applied to cases where the HTC of the model transitions between a finite number of discrete values, such as a sudden switch of the convective boundary conditions. The numerical methods developed in this work are implemented in the software package MORe. The software platform incorporates the methods presented in this work and provides the required functionalities to enable an efficient development of physical models of machine tools, including static, dynamic, and thermo-mechanical effects. The design of MORe ensures a straightforward workflow during the model setup. Dedicated analyses facilitate the comprehensive investigation of the thermo-mechanical behavior of mechatronic systems, supported by a full-featured postprocessor with cutting-edge visualization tools. The developed MOR methods in combination with an efficient simulation platform are the key contribution of this work, providing the necessary tools for the development of thermo-mechanical models of machine tools. This thesis illustrates the usability of the developed methods and software platform by investigating the thermal behavior of two different case studies. A thermo-mechanical model of a 5-axis machine tool exposed to fluctuations of the environmental temperature is introduced. The parameters describing the convective heat exchange with the environment are relevant in order to represent the thermal behavior of the investigated machine tool.
However, the values of the HTC are subject to uncertainties. Therefore, the bilinearization MOR approach is applied in order to enable the evaluation of the behavior of the system for different values of the HTC. The reduced models enable a large number of model evaluations because of their reduced computational expense. Therefore, the sensitivity of the model outputs, i.e. the tool center point (TCP) displacements relative to the workpiece, to the variation of the HTC can be calculated. This provides information about the values of the boundary conditions that need to be constrained so that the simulated response matches the measured response. The thermo-mechanical model is validated for two different profiles of the environmental temperature. This work also introduces several analysis tools, such as the frequency response or the thermal compliance matrix (TCM), in order to understand the thermo-mechanical behavior of the machine tool, identify uncertain model parameters, and enable design optimization. This work presents another case study focused on the investigation of the effect of internal heat sources. The machine tool under investigation is a 5-axis machine tool and the considered heat input is the rotation of the C-axis. In order to compute the thermally induced deformations, the different energy flows between the machine elements are estimated. The simulated thermal response of the machine tool is validated with thermal measurements of the TCP displacements relative to the workpiece for several rotational speeds of the C-axis. The thermo-mechanical model of the 5-axis machine tool is extended in order to account for the effect of introducing metal working fluid into the working space. The validated model serves as a virtual prototype to investigate the thermal design of the machine tool and to assess the validity of thermal error compensation strategies. Thermo-mechanical models are a great asset to understand and improve the thermal design of machine tools. However, translating the results of thermo-mechanical models into design principles can be complicated. This leads to the fact that thermo-mechanical models are used mainly in an academic context. Therefore, future research needs to concentrate on creating methods that assist the design of thermally stable machine tools using thermo-mechanical models. A good example is the dimensioning of the cooling units for different structural components. By combining the information of thermo-energetic models, which quantify the heat dissipated by the machine elements, with thermo-mechanical models, new design methodologies will facilitate the dimensioning of the cooling unit, limiting the thermal errors in the whole working space. In order to contribute to the design process, these methods need to be integrated in a software platform that provides dedicated macro models and analysis tools. The validity of thermo-mechanical models is tested by comparing the simulation results with the measured response of the machine tool. However, the validation of thermo-mechanical models remains a time-consuming task that requires expert knowledge. Efficient thermo-mechanical models are a great asset during the validation process. They enable testing a large number of model parameters in a computationally efficient manner. However, a systematic methodology for model validation is still missing. Future research in this field will investigate the optimal thermal loads in order to excite the thermal response in the frequency range of interest.
Instead of using random speed profiles or the environmental temperature of a workshop, designed inputs need to be used. Using system identification strategies in combination with the efficient modeling approaches presented in this work will facilitate the validation of thermo-mechanical models. The methods and software platform developed in this work open a large number of possibilities for future research on the thermal behavior of machine tools. The thermo-mechanical models presented in this work are limited to the investigation of the thermal response of the machine tool. However, a comprehensive model needs to consider the geometric errors of the manufactured part. Therefore, future research needs to include the thermal behavior of the workpiece and the clamping system. This requires further research on quantifying the heat due to the manufacturing process, as well as on investigating the response of the system with metal working fluid. Another aspect related to the manufacturing process is the reduction of the volume of the part. Future research needs to assess the effect of the volume reduction on the thermal response of the workpiece. Due to the volume loss of the workpiece during the manufacturing process, a large amount of chips is introduced into the working space. These chips convey part of the heat generated in the cutting zone. The evaluation of the effect of the chips on the thermal response of the machine tool is another topic requiring further investigation. The reduction of the thermal errors with external cooling units is a common practice in current machine tool design. However, removing all the heat dissipated in the machine elements is an energy-demanding process. Therefore, an alternative to increasing the amount of cooling fluid is thermal error compensation. Thermal error compensation is based on predicting the thermal errors and offsetting the machine tool axes considering that information. In order to achieve a fully compensated machine tool, firstly a repeatable thermal behavior is required. Secondly, the design needs to ensure that the machine tool axes can compensate the resulting thermal errors, e.g. angular errors. Therefore, future research should provide design principles that enable the fully compensated machine tool. Furthermore, the reduced-order thermo-mechanical models are real-time capable. They can run in the NC of the machine tool and be used directly for thermal error compensation.

Bibliography

[1] (2018). MATLAB, Release 2018b. The MathWorks.
[2] (2019). ANSYS Mechanical, Release 19. ANSYS, Inc.
[3] (2020). Python Language Reference, Version 2.7. Python Software Foundation.
[4] (2020). TraitsUI, Version 4. Enthought, Inc.
[5] Amsallem D, Farhat C (2008) Interpolation Method for Adapting Reduced-Order Models and Application to Aeroelasticity. *AIAA Journal* 46:1803–1813.
[6] Amsallem D, Zahr M, Choi Y, Farhat C (2015) Design optimization using hyper-reduced-order models. *Structural and Multidisciplinary Optimization* 51(4):919–940.
[7] Antoulas A C (2005) *Approximation of large-scale dynamical systems*. Society for Industrial and Applied Mathematics.
[8] Antsaklis P, Michel A (1997) *Linear systems*. McGraw-Hill, New York.
[9] Avilés R (2002) *Métodos de análisis para diseño mecánico*, Volume 2. Publicaciones - Escuela Superior de Ingenieros.
[10] Bai Z, Skoogh D (2006) A projection method for model reduction of bilinear dynamical systems. *Special Issue on Order Reduction of Large-Scale Systems. Linear Algebra and its Applications* 415(2):406–425.
[11] Bathe K (2006) *Finite Element Procedures*. Prentice Hall.
[12] Baur U, Beattie C, Benner P, Gugercin S (2011) Interpolatory Projection Methods for Parameterized Model Reduction. *SIAM J. Sci. Comput.* 33(5):2489–2518.
[13] Baur U, Benner P, Greiner A, Korvink J, Lienemann J, Moosmann C (2011) Parameter preserving model order reduction for MEMS applications. *Mathematical and Computer Modelling of Dynamical Systems* 17(4):297–317.
[14] Bechtold T, Rudnyi E, Korvink J (2004) Error estimation for Arnoldi-based model order reduction of MEMS. *System* 10:15.
[15] Belytschko T, Liu W K, Moran B (2000) *Nonlinear finite elements for continua and structures*. Wiley.
[16] Benner P, Breiten T (2011) On H2-model reduction of linear parameter-varying systems. *PAMM* 11(1):805–806.
[17] Benner P, Breiten T (2012) Interpolation-Based H2-Model Reduction of Bilinear Control Systems. *SIAM Journal on Matrix Analysis and Applications* 33(3):859–885.
[18] Benner P, Gugercin S, Willcox K (2015) A survey of projection-based model reduction methods for parametric dynamical systems. *SIAM Review* 57(4):483–531.
[19] Benner P, Herzog R, Lang N, Riedel I, Saak J (2019) Comparison of model order reduction methods for optimal sensor placement for thermo-elastic models. *Engineering Optimization* 51(3):465–483.
[20] Bhatia R (1997) *Matrix Analysis*, Volume 169. Springer.
[21] Biegler L, Ghattas O, Heinkenschloss M, Keyes D, van Bloemen Waanders B (2007) *Real-Time PDE-Constrained Optimization*. Society for Industrial and Applied Mathematics.
[22] Blaser P, Pavliček F, Mori K, Mayr J, Weikert S, Wegener K (2017) Adaptive learning control for thermal error compensation of 5-axis machine tools. *Journal of Manufacturing Systems* 44:302–309. Special Issue on Latest advancements in manufacturing systems at NAMRC 45.
[23] Breiten T, Damm T (2010) Krylov subspace methods for model order reduction of bilinear control systems. *Systems & Control Letters* 59(8):443–450.
[24] Bringmann B (2007) Improving geometric calibration methods for multi-axis machining centers by examining error interdependencies effects. Dissertation ETH Zürich, Nr. 17266.
[25] Bruns A, Benner P (2015) Parametric model order reduction of thermal models using the bilinear interpolatory rational Krylov algorithm. *Mathematical and Computer Modelling of Dynamical Systems* 21(2):103–129.
[26] Bruns T (2007) Topology optimization of convection-dominated, steady-state heat transfer problems. *International Journal of Heat and Mass Transfer* 50(15):2859–2873.
[27] Bryan J (1990) International Status of Thermal Error Research (1990). *CIRP Annals - Manufacturing Technology* 39(2):645–656.
[28] Bui-Thanh T, Willcox K, Ghattas O (2008) Model Reduction for Large-Scale Systems with High-Dimensional Parametric Input Space. *SIAM Journal on Scientific Computing* 30:3270–3288.
[29] Cardone G, Astarita T, Carlomagno G (1997) Heat transfer measurements on a rotating disk. *International Journal of Rotating Machinery* 3(1):1–9.
[30] Carslaw H, Jaeger J (1986) *Conduction of Heat in Solids*. Oxford science publications. Clarendon Press.
[31] Jang C, Kim J Y, Kim Y J, Kim J O (2003) Heat transfer analysis and simplified thermal resistance modeling of linear motor driven stages for SMT applications. *IEEE Transactions on Components and Packaging Technologies* 26(3):532–540.
[32] Chen J, Yuan J, Ni J, Wu S (1993) Real-time compensation for time-variant volumetric errors on a machining center. *Journal of Engineering for Industry* 115(4):472–479.
[33] Chen P, Quarteroni A (2015) A new algorithm for high-dimensional uncertainty quantification based on dimension-adaptive sparse grid approximation and reduced basis methods. *Journal of Computational Physics* 298(Supplement C):176–193.
[34] Chen P, Quarteroni A, Rozza G (2017) Reduced Basis Methods for Uncertainty Quantification. *SIAM/ASA Journal on Uncertainty Quantification* 5(1):813–869.
[35] Chen P, Schwab C (2016) *Model Order Reduction Methods in Computational Uncertainty Quantification*, 1–53. Springer International Publishing.
[36] Denkena B, Scharschmidt K (2009) Sensitivitätsanalyse für ein Simulationsmodell. *wt Werkstattstechnik online* 99(5):294–299.
[37] Donmez M, Blomquist D, Hocken R, Liu C, Barash M (1986) A general methodology for machine tool accuracy enhancement by error compensation. *Precision Engineering* 8(4):187–196.
[38] Druskin V, Lieberman C, Zaslavsky M (2010) On Adaptive Choice of Shifts in Rational Krylov Subspace Reduction of Evolutionary Problems. *SIAM Journal on Scientific Computing* 32(5):2485–2496.
[39] Ess M (2012) Simulation and compensation of thermal errors of machine tools. Dissertation ETH Zürich, Nr. 20300.
[40] Galant A, Beitelschmidt M, Großmann K (2016) Fast High-Resolution FE-based Simulation of Thermo-Elastic Behaviour of Machine Tool Structures. *Procedia CIRP* 46:627–630.
[41] Gallivan K, Vandendorpe A, Dooren P V (2005) Model Reduction of MIMO Systems via Tangential Interpolation. *SIAM J. Matrix Anal. Appl.* 26(2):328–349.
[42] Gebhardt M (2014) Thermal behaviour and compensation of rotary axes in 5-axis machine tools. Dissertation ETH Zürich, Nr. 21733.
[43] Gebhardt M, Cube P v, Knapp W, Wegener K (2012) Measurement set-ups and -cycles for thermal characterization of axes of rotation of 5-axis machine tools. In *Proceedings of the 12th euspen International Conference, Stockholm, June 2012*.
[44] VDI-Gesellschaft (2005) *VDI-Wärmeatlas*. Number v. 1 in VDI-Buch. Springer Berlin Heidelberg.
[45] Golub G H, Van Loan C F (1996) *Matrix Computations*. The Johns Hopkins University Press, third Edition.
[46] Gontarz A, Weiss L, Wegener K (2010) Energy Consumption Measurement with a Multichannel Measurement System on a machine tool. In *Proceedings of International Conference on Innovative Technologies: IN-TECH 2010*, 499–502.
[47] Grepl M (2005) Reduced-Basis Approximation and A Posteriori Error Estimation for Parabolic Partial Differential Equations. Dissertation, MIT, Cambridge, MA.
[48] Grepl M A, Patera A T (2005) A posteriori error bounds for reduced-basis approximations of parametrized parabolic partial differential equations. *ESAIM: M2AN* 39(1):157–181.
[49] Grimme E J (1997) Krylov projection methods for model reduction. Dissertation, University of Illinois at Urbana-Champaign, IL.
[50] Großmann K (2015) *Thermo-energetic Design of Machine Tools*. Springer.
[51] Gugercin S, Antoulas A C, Beattie C (2008) $\mathcal{H}_2$ model reduction for large-scale linear dynamical systems. *SIAM Journal on Matrix Analysis and Applications* 30(2):609–638.
[52] Harris T, Kotzalas M (2006) *Essential Concepts of Bearing Technology*. Rolling Bearing Analysis, Fifth Edition. CRC Press.
[53] Heisel U, Popov G, Stehle T, Dragov A (2003) Wärmeübergangsbedingungen an Werkzeugmaschinenwänden. *dima - die maschine* 57:24–27.
[54] Herzog R, Riedel I (2015) Sequentially optimal sensor placement in thermoelastic models for real time applications. *Optimization and Engineering* 16(4):737–766.
[55] Herzog R, Riedel I, Uciński D (2018) Optimal sensor placement for joint parameter and state estimation problems in large-scale dynamical systems with applications to thermo-mechanics. *Optimization and Engineering* 19(3):591–627.
[56] Hindmarsh A (1982) *ODEPACK, a Systematized Collection of ODE Solvers*. Lawrence Livermore National Laboratory.
[57] Hinton E, Rock T, Zienkiewicz O C (1976) A note on mass lumping and related processes in the finite element method. *Earthquake Engineering & Structural Dynamics* 4(3):245–249.
[58] Horejš O, Mareš M, Novotný L (2012) Advanced Modelling of Thermally Induced Displacements and Its Implementation into Standard CNC Controller of Horizontal Milling Center. *Procedia CIRP* 4:67–72.
[59] Huynh D, Knezevic D, Patera A (2012) Certified reduced basis model validation: A frequentistic uncertainty framework. *Comput. Methods Appl. Mech. Engrg.* 201:13–24.
[60] Ibaraki S, Hong C F (2012) Thermal Test for Error Maps of Rotary Axes by R-Test. In *Emerging Technology in Precision Engineering XIV*, Volume 523 of *Key Engineering Materials*, 809–814. Trans Tech Publications Ltd.
[61] Iooss B, Lemaître P (2015) A Review on Global Sensitivity Analysis Methods. *Uncertainty Management in Simulation-Optimization of Complex Systems: Algorithms and Applications* 101–122.
[62] ISO 10791 (2015) Test conditions for machining centres – Part 1: Geometric tests for machines with horizontal spindle (horizontal Z-axis). Technical Report, International Organization for Standardization.
[63] ISO 14955 (2018) Machine tools – Environmental evaluation of machine tools – Part 2: Methods for measuring energy supplied to machine tools and machine tool components. Technical Report, International Organization for Standardization, Geneva, Switzerland.
[64] ISO 230 (2007) Test code for machine tools – Part 3: Determination of thermal effects. Technical Report, International Organization for Standardization, Geneva, Switzerland.
[65] ISO 230 (2014) Test code for machine tools – Part 2: Determination of accuracy and repeatability of positioning of numerically controlled axes. Technical Report, International Organization for Standardization, Geneva, Switzerland.
[66] Jackson C P (1981) Singular capacity matrices produced by low-order Gaussian integration in the finite element method. *International Journal for Numerical Methods in Engineering* 17(6):871–877.
[67] Jedrzejewski J, Kaczmarek J, Reifur B (1988) Description of the Forced Convection along the Walls of Machine-Tool Structures. *CIRP Annals - Manufacturing Technology* 37(1):397–400.
[68] Jones E, Oliphant T, Peterson P (2001) SciPy: Open Source Scientific Tools for Python.
[69] Kohút P, Horejš O, Mareš M (2012) The influence of a heat transfer coefficient probe on fluid flow near wall. *EPJ Web of Conferences* 25:01042.
[70] Kunisch K, Volkwein S (2001) Galerkin proper orthogonal decomposition methods for parabolic problems. *Numerische Mathematik* 90(1):117–148.
[71] Kürschner P (2016) Efficient Low-Rank Solution of Large-Scale Matrix Equations. Dissertation, Otto-von-Guericke-Universität Magdeburg.
[72] Lang N, Saak J, Benner P (2014) Model order reduction for systems with moving loads. *at-Automatisierungstechnik* 62(7):512–522.
[73] Lassila T, Rozza G (2010) Parametric free-form shape design with PDE models and reduced basis method. *Computer Methods in Applied Mechanics and Engineering* 199(23):1583–1592.
[74] LeGresley P, Alonso J (2000) Airfoil design optimization using reduced order models based on proper orthogonal decomposition. In *Fluids 2000 Conference and Exhibit*, Fluid Dynamics and Co-located Conferences. American Institute of Aeronautics and Astronautics.
[75] Li X S (2005) An Overview of SuperLU: Algorithms, Implementation, and User Interface. *ACM Trans. Math. Softw.* 31(3):302–325.
[76] Manzoni A, Quarteroni A, Rozza G (2012) Shape optimization for viscous flows by reduced basis methods and free-form deformation. *International Journal for Numerical Methods in Fluids* 70(5):646–670.
[77] Marelli S, Sudret B (2014) UQLab: A Framework for Uncertainty Quantification in Matlab. In *Vulnerability, Uncertainty, and Risk: Quantification, Mitigation, and Management*.
[78] Mayr J (2009) Beurteilung und Kompensation des Temperaturganges von Werkzeugmaschinen. Dissertation ETH Zürich, Nr. 18677.
[79] Mayr J, Ess M, Pavliček F, Weikert S, Spescha D, Knapp W (2015) Simulation and measurement of environmental influences on machines in frequency domain. *CIRP Annals - Manufacturing Technology* 64(1):479–482.
[80] Mayr J, Gebhardt M, Massow B B, Weikert S, Wegener K (2014) Cutting Fluid Influence on Thermal Behavior of 5-axis Machine Tools. *Procedia CIRP* 14:395–400.
[81] Mayr J, Jedrzejewski J, Uhlmann E, Donmez M A, Knapp W, Härtig F, Wendt K, Moriwaki T, Shore P, Schmitt R, Brecher C, Würz T, Wegener K (2012) Thermal issues in machine tools. *CIRP Annals - Manufacturing Technology* 61(2):771–791.
[82] Mayr J, Weikert S, Wegener K (2007) Comparing the thermo-mechanical behavior of machine tool frame designs using a FDM-FEA simulation approach. *Proceedings of the 22nd Annual ASPE Meeting, ASPE 2007* 17–20.
[83] Mian N S, Fletcher S, Longstaff A, Myers A (2013) Efficient estimation by FEA of machine tool distortion due to environmental temperature perturbations. *Precision Engineering* 37(2):372–379.
[84] Mohammadi A, Züst S, Mayr J, Blaser P, Sonne M R, Hattel J H, Wegener K (2017) A methodology for online visualization of the energy flow in a machine tool. *CIRP Journal of Manufacturing Science and Technology*.
[85] Mohler R R (1991) *Nonlinear Systems: Applications to bilinear control*. Nonlinear Systems. Prentice Hall.
[86] Mori M, Mizuguchi H, Fujishima M, Ido Y, Mingkai N, Konishi K (2009) Design optimization and development of CNC lathe headstock to minimize thermal deformation. *CIRP Annals* 58(1):331–334.
[87] Mou J (1997) A method of using neural networks and inverse kinematics for machine tools error estimation and correction. *Journal of Manufacturing Science and Engineering* 119:247–254.
[88] Mou J, Donmez M, Cetinkunt S (1995) An adaptive error correction method using feature-based analysis techniques for machine performance improvement, Part 1: Theory derivation. *Journal of Engineering for Industry* 117(4):584–590.
[89] Nadal E, Chinesta F, Diez P, Fuenmayor F, Denia F (2015) Real time parameter identification and solution reconstruction from experimental data using the Proper Generalized Decomposition. *Computer Methods in Applied Mechanics and Engineering* 296(Supplement C):113–128.
[90] Naumann A, Lang N, Partzsch M, Beitelschmidt M, Benner P, Voigt A, Wensch J (2016) Computation of thermo-elastic deformations on machine tools – a study of numerical methods. *Production Engineering* 10(3):253–263.
[91] Naumann C, Riedel I, Ihlenfeldt S, Priber U (2016) Characteristic Diagram Based Correction Algorithms for the Thermo-elastic Deformation of Machine Tools. *Procedia CIRP* 41:801–805.
[92] Nguyen N C, Rozza G, Huynh D B P, Patera A T (2010) *Reduced Basis Approximation and a Posteriori Error Estimation for Parametrized Parabolic PDEs: Application to Real-Time Bayesian Parameter Estimation*, 151–177. John Wiley & Sons, Ltd.
[93] Pagani S, Manzoni A, Quarteroni A (2017) Efficient State/Parameter Estimation in Nonlinear Unsteady PDEs by a Reduced Basis Ensemble Kalman Filter. *SIAM/ASA Journal on Uncertainty Quantification* 5(1):890–921.
[94] Panzer H, Mohring J, Eid R, Lohmann B (2010) Parametric Model Order Reduction by Matrix Interpolation. *at-Automatisierungstechnik* 58:475–484.
[95] Partzsch M, Beitelsschmidt M, Khonsari M M (2018) A method for correcting a moving heat source in analyses with coarse temporal discretization. *Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science* 232(15):2736–2750.
[96] Pavliček F (2019) Parametrierbare Metamodelle zur Berechnung des Wärmeübergangs in Hohlräumen. Dissertation, Technische Universität Chemnitz, Chemnitz.
[97] Pavliček F, Dietz F, Blaser P, Züst S, Mayr J, Weikert S, Wegener K (2016) An approach for developing meta models out of fluid simulations in enclosures of precision machine tools. In *31st ASPE Annual Meeting*, 456–461.
[98] Pavliček F, Hernandez P, Mayr J, Weikert S, Züst S, Wegener K (2016) Influence of machine housing on the thermal TCP displacement. In *euspen – Special Interest Group Meeting: Thermal Issues*.
[99] Petzold L (1983) Automatic Selection of Methods for Solving Stiff and Nonstiff Systems of Ordinary Differential Equations. *SIAM Journal on Scientific and Statistical Computing* 4(1):136–148.
[100] Phillips J R (2000) Projection frameworks for model reduction of weakly nonlinear systems. In *Proceedings 37th Design Automation Conference*, 184–189.
[101] Phillips J R (2003) Projection-based approaches for model reduction of weakly nonlinear, time-varying systems. *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems* 22(2):171–187.
[102] Putz M, Richter C, Regel J, Bräunig M (2018) Industrial consideration of thermal issues in machine tools. *Production Engineering* 12(6):723–736.
[103] Ramachandran P, Varoquaux G (2011) Mayavi: 3D Visualization of Scientific Data. *Computing in Science & Engineering* 13(2):40–50.
[104] Saad Y (2003) *Iterative Methods for Sparse Linear Systems*. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2nd Edition.
[105] Saari J (1998) Thermal analysis of high-speed induction machines. Doctoral dissertation (monograph), Helsinki University of Technology.
[106] Salimbahrami B, Lohmann B (2002) Krylov Subspace Methods in Linear Model Order Reduction: Introduction and Invariance Properties. In *Methods and Applications in Automation*.
[107] Saltelli A, Tarantola S, Campolongo F, Ratto M (2004) *Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models*. Halsted Press, New York, NY, USA.
[108] Schwenke H, Knapp W, Haitjema H, Weckenmann A, Schmitt R, Delbressine F (2008) Geometric error measurement and compensation of machines – An update. *CIRP Annals* 57(2):660–675.
[109] Shi H, Ma C, Yang J, Zhao L, Mei X, Gong G (2015) Investigation into effect of thermal expansion on thermally induced error of ball screw feed drive system of precision machine tools. *International Journal of Machine Tools and Manufacture* 97:60–71.
[110] Shi X, Zhu K, Wang W, Fan L, Gao J (2018) A thermal characteristic analytic model considering cutting fluid thermal effect for gear grinding machine under load. *The International Journal of Advanced Manufacturing Technology* 99(5):1755–1769.
[111] Sobol I (2001) Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. *Mathematics and Computers in Simulation* 55(1):271–280. The Second IMACS Seminar on Monte Carlo Methods.
[112] Sobol I M (1993) Sensitivity Estimates for Nonlinear Mathematical Models. *Mathematical Modeling and Computational Experiment* 1:407–414.
[113] Spescha D (2018) Framework for efficient and accurate simulation of the dynamics of machine tools. Dissertation, TU Clausthal.
[114] Spescha D, Weikert S, Retka S, Wegener K (2018) Krylov and Modal Subspace based Model Order Reduction with A-Priori Error Estimation.
[115] Spescha D, Weikert S, Wegener K (2018) Modelling of Moving Interfaces for Reduced-Order Finite Element Models using Trigonometric Interpolation.
[116] Sudret B (2008) Global sensitivity analysis using polynomial chaos expansions. *Reliability Engineering & System Safety* 93(7):964–979. Bayesian Networks in Dependability.
[117] Sun L, Ren M, Hong H, Yin Y (2017) Thermal error reduction based on thermodynamics structure optimization method for an ultra-precision machine tool. *The International Journal of Advanced Manufacturing Technology* 88(5):1267–1277.
[118] Thiem X, Großmann K, Mühl A, Kauschinger B (2015) Challenges in the Development of a Generalized Approach for the Structure Model Based Correction. In *Progress in Production Engineering*, Volume 794 of *Applied Mechanics and Materials*, 387–394.
[119] Treuille A, Lewis A, Popović Z (2006) Model Reduction for Real-time Fluids. *ACM Trans. Graph.* 25(3):826–834.
[120] Urata E (2007) Influence of unequal air-gap thickness in servo valve torque motors. *Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science* 221(11):1287–1297.
[121] Veroy K, Patera A T (2005) Certified real-time solution of the parametrized steady incompressible Navier–Stokes equations: rigorous reduced-basis a posteriori error bounds. *International Journal for Numerical Methods in Fluids* 47(8–9):773–788.
[122] Veroy K, Prud'homme C, Rovas D, Patera A (2003) A Posteriori Error Bounds for Reduced-Basis Approximation of Parametrized Noncoercive and Nonlinear Elliptic Partial Differential Equations. In *16th AIAA Computational Fluid Dynamics Conference*, Fluid Dynamics and Co-located Conferences, 3847. American Institute of Aeronautics and Astronautics.
[123] Wegener K, Gittler T, Weiss L (2018) Dawn of new machining concepts: Compensated, intelligent, bioinspired. *Procedia CIRP* 77:1–17. 8th CIRP Conference on High Performance Cutting (HPC 2018).
[124] Wegener K, Weikert S, Mayr J (2016) Age of Compensation – Challenge and Chance for Machine Tool Industry. *International Journal of Automation Technology* 10(4):609–623.
[125] Weidermann F (2001) Praxisnahe thermische Simulation von Lagern und Führungen in Werkzeugmaschinen. In *19th CAD-FEM Users Meeting*.
[126] Weikert S (2004) R-Test, a New Device for Accuracy Measurements on Five Axis Machine Tools. *CIRP Annals – Manufacturing Technology* 53(1):429–432.
[127] Weng L, Gao W, Lv Z, Zhang D, Liu T, Wang Y, Qi X, Tian Y (2018) Influence of external heat sources on volumetric thermal errors of precision machine tools. *The International Journal of Advanced Manufacturing Technology* 99(1):475–495.
[128] Wolf T, Panzer H, Lohmann B (2011) Gramian-based error bound in model reduction by Krylov subspace methods. *IFAC Proceedings Volumes* 44(1):3587–3592. 18th IFAC World Congress.
[129] Yang H, Ni J (2003) Dynamic modeling for machine tool thermal error compensation. *Journal of Manufacturing Science and Engineering* 125(2):245–254.
[130] Yang H, Ni J (2005) Adaptive model estimation of machine-tool thermal errors based on recursive dynamic modeling strategy. *International Journal of Machine Tools and Manufacture* 45(1):1–11.
[131] Zhu J, Ni J, Shih A J (2008) Robust machine tool thermal error modeling through thermal mode concept. *Journal of Manufacturing Science and Engineering* 130(6):061006.
[132] Züst S, Gontarz A, Pavliček F, Mayr J, Wegener K (2015) Model Based Prediction Approach for Internal Machine Tool Heat Sources on the Level of Subsystems. *Procedia CIRP* 28:28–33.
[133] Züst S D (2017) Model Based Optimization of Internal Heat Sources in Machine Tools. Dissertation ETH Zürich, Nr. 24482.
[134] Zwingenberger C (2014) Beitrag zur Verbesserung der Simulationsgenauigkeit bei der Bestimmung des thermischen Verhaltens von Werkzeugmaschinen. Dissertation, Fraunhofer-Institut für Werkzeugmaschinen und Umformtechnik IWU, Berichte aus dem IWU.

Implementation of the KMS reduction

Section 3.2 introduces the KMS reduction for thermal systems and its algorithmic implementation in Algorithm 1. This algorithm relies on numerical methods that are summarized in this Appendix for completeness. The numerical methods presented here are based on the implementation in MORe [113].

Algorithm 5 Block Arnoldi
1: procedure BlockArnoldi(A, E, B, s_e, m_e)
2:   A, E, B ▷ System matrices
3:   s_e, m_e ▷ Expansion point and number of moments
4:   for i = 0 : m_e − 1 do ▷ Loop over all the moments
5:     if i = 0 then
6:       V_i = (A − s_e E)⁻¹ B
7:       V_i = ModGS(V_i) ▷ Orthonormalization of the basis
8:       V = V_i
9:     else
10:      V_i = (A − s_e E)⁻¹ E V_i
11:      V_i = RedRange(V_i, V) ▷ Reduce the range of the basis V_i against V
12:      V = Orth(V, V_i) ▷ Extend the range of V with the basis V_i
13:   return V

Algorithm 6 Modal Basis
1: procedure Modal(A, E, ω_m, n_guess, n_max, s_e)
2:   A, E ▷ System matrices
3:   ω_m ▷ Maximum considered eigenfrequency
4:   n_max, n_guess ▷ Maximum number of modes and guessed number of modes below ω_m
5:   s_e ▷ Expansion point
6:   ω_i = 0 ▷ Initialize variables
7:   i = 0
8:   n_step = (n_max − n_guess)/4 ▷ Increment for the number of evaluated modes
9:   OP = LU(A − s_e E) ▷ LU decomposition of (A − s_e E)
10:  while ω_i ≤ ω_m and i < 5 do ▷ ω_m is the highest frequency
11:    n_modes = (i + 1) n_step ▷ Number of modes to be evaluated
12:    Φ, ω = eigsh(OP, n_modes) ▷ Provides the first n_modes eigenvalues ω and eigenvectors Φ
13:    ω_i = max ω ▷ Save the highest eigenvalue
14:    i = i + 1
15:  return Φ, ω

Algorithm 7 Orthogonalization
1: procedure Orth(V_1, V_2)
2:   V_1 = ModGS(V_1) ▷ Orthonormalize the bases
3:   V_2 = ModGS(V_2)
4:   n_col ▷ Number of columns of V_2
5:   V = V_1
6:   for i = 0 : n_col − 1 do ▷ Loop over all columns of V_2
7:     v_i = V_2(:, i) ▷ Get the ith column of V_2
8:     v_i = v_i/‖v_i‖_2 ▷ Normalize v_i
9:     t = Vᵀ v_i
10:    if ‖t‖_2 > ε then ▷ ε is the deflation tolerance
11:      v_i = v_i − V t
12:    V = [V, v_i] ▷ Concatenate the column v_i
13:  return V

Algorithm 8 Reduce range
1: procedure RedRange(V_1, V_2)
2:   V_1 = ModGS(V_1) ▷ Orthonormalize the bases
3:   V_2 = ModGS(V_2)
4:   V = [ ] ▷ Create an empty matrix V
5:   n_col ▷ Number of columns of V_1
6:   for i = 0 : n_col − 1 do ▷ Loop over all columns of V_1
7:     v_i = V_1(:, i) ▷ Get the ith column of V_1
8:     v_i = v_i/‖v_i‖_2 ▷ Normalize v_i
9:     t = V_2ᵀ v_i
10:    if ‖t‖_2 > ε then ▷ ε is the deflation tolerance
11:      v_i = v_i − V_2 t
12:    V = [V, v_i] ▷ Concatenate the column v_i
13:  V = ModGS(V) ▷ Orthonormalize the resulting basis V
14:  return V

Algorithm 9 Modified Gram Schmidt
1: procedure ModGS(V)
2:   V_orth = V(:, 0)/‖V(:, 0)‖_2 ▷ Take the first column of V and normalize it
3:   n_col ▷ Number of columns of V
4:   for i = 1 : n_col − 1 do ▷ Loop over all columns of V except the first one
5:     v_i = V(:, i) ▷ Get the ith column of V
6:     v_i = v_i/‖v_i‖_2 ▷ Normalize v_i
7:     t = V_orthᵀ v_i
8:     if ‖t‖_2 > ε then ▷ ε is the deflation tolerance
9:       v_i = v_i − V_orth t
10:    V_orth = [V_orth, v_i/‖v_i‖_2] ▷ Concatenate the re-normalized column v_i
11:  return V_orth
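For readers who prefer code, the listing below is a minimal NumPy/SciPy sketch of the moment-matching loop of Algorithm 5. It is not the MORe implementation: a single modified Gram–Schmidt routine with deflation stands in for the ModGS/Orth/RedRange bookkeeping of Algorithms 7–9, and the test system, tolerance, and sizes are invented for the example.

```python
# Minimal sketch (not the MORe implementation) of the moment-matching
# loop of Algorithm 5 with deflating modified Gram-Schmidt.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mod_gs(V, eps=1e-10):
    """Modified Gram-Schmidt with deflation: columns that are numerically
    dependent on the columns kept so far are dropped."""
    kept = []
    for j in range(V.shape[1]):
        v = V[:, j].copy()
        for q in kept:
            v -= q * (q @ v)          # orthogonalize against kept columns
        norm = np.linalg.norm(v)
        if norm > eps:                # deflation tolerance
            kept.append(v / norm)
    return np.column_stack(kept)

def block_arnoldi(A, E, B, s_e=0.0, m_e=2, eps=1e-10):
    """Basis V spanning the first m_e moments of (A - s_e*E)^{-1} B."""
    lu = lu_factor(A - s_e * E)       # factorize once, reuse for every moment
    Vi = mod_gs(lu_solve(lu, B), eps)
    V = Vi
    for _ in range(1, m_e):
        Vi = mod_gs(lu_solve(lu, E @ Vi), eps)
        V = mod_gs(np.hstack([V, Vi]), eps)   # extend the range of V with Vi
    return V

# Invented thermal-like test system: E (capacity) spd, A (conductivity) symmetric.
rng = np.random.default_rng(0)
n = 50
E = np.diag(rng.uniform(1.0, 2.0, n))
A = -2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
B = rng.standard_normal((n, 2))
V = block_arnoldi(A, E, B, s_e=0.0, m_e=3)
print(V.shape, np.linalg.norm(V.T @ V - np.eye(V.shape[1])))  # V is orthonormal
```

Factorizing (A − s_e E) once and reusing the LU factors for every moment is the reason the expansion point s_e appears explicitly in Algorithms 5 and 6.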
Implementation of the Finite Element Method

This appendix provides further details about the FEM and its implementation in the MORe software.

B.1 Numerical integration of surface elements

The FEM discretization requires the integration of the shape functions over the domain of the element, as explained by Avilés in [9]. The integration is, for instance, required for the evaluation of the Neumann boundary conditions, as stated in Equations (3.10) and (4.1). The shape functions of the elements are defined in natural coordinates \((\xi, \eta)\). The location of a point \(z\) inside the element \(\Omega_e\) can be described in terms of the natural coordinates as

\[ z(\xi, \eta) = n_e(\xi, \eta) z^e \] (B.1)

where \(z^e\) are the coordinates of the nodes of the element \(e\). The integral of Equation (4.1) can be expressed in natural coordinates as

\[ w^e = \int_{\Gamma_e} n_e(\xi, \eta)\, w(z(\xi, \eta))\, J_e(\xi, \eta)\, d\xi\, d\eta \] (B.2)

where \(J_e(\xi, \eta)\) is the Jacobian of the transformation between the coordinates. For a triangular surface element with midnodes, the shape function in natural coordinates is

\[ n_e(\xi, \eta) = \begin{bmatrix} \nu(2\nu - 1) \\ \xi(2\xi - 1) \\ \eta(2\eta - 1) \\ 4\xi\nu \\ 4\xi\eta \\ 4\eta\nu \end{bmatrix} \] (B.3)

where \(\nu = 1 - \xi - \eta\).
The Jacobian can be expressed in natural coordinates as

$$J_e(\xi, \eta) = \begin{bmatrix} -3 + 4\xi + 4\eta & -3 + 4\xi + 4\eta \\ 4\xi - 1 & 0 \\ 0 & 4\eta - 1 \\ 4 - 8\xi - 4\eta & -4\xi \\ 4\eta & 4\xi \\ -4\eta & 4 - 8\eta - 4\xi \end{bmatrix}$$ (B.4)

For a quad element with midnodes the shape function is

$$n_e(\xi, \eta) = \begin{bmatrix} \frac{1}{4}((1 - \eta)(1 - \xi)(-\eta - \xi - 1)) \\ \frac{1}{4}((1 - \eta)(1 + \xi)(-\eta + \xi - 1)) \\ \frac{1}{4}((1 + \eta)(1 + \xi)(\eta + \xi - 1)) \\ \frac{1}{4}((1 + \eta)(1 - \xi)(\eta - \xi - 1)) \\ \frac{1}{2}((1 - \eta)(1 - \xi^2)) \\ \frac{1}{2}((1 + \xi)(1 - \eta^2)) \\ \frac{1}{2}((1 + \eta)(1 - \xi^2)) \\ \frac{1}{2}((1 - \xi)(1 - \eta^2)) \end{bmatrix}$$ (B.5)

The Jacobian for the quad element is

\[ J_e(\xi, \eta) = \begin{bmatrix} \frac{1}{4}(1 - \eta)(2\xi + \eta) & \frac{1}{4}(1 - \xi)(2\eta + \xi) \\ \frac{1}{4}(1 - \eta)(2\xi - \eta) & \frac{1}{4}(1 + \xi)(2\eta - \xi) \\ \frac{1}{4}(1 + \eta)(2\xi + \eta) & \frac{1}{4}(1 + \xi)(2\eta + \xi) \\ \frac{1}{4}(1 + \eta)(2\xi - \eta) & \frac{1}{4}(1 - \xi)(2\eta - \xi) \\ \frac{1}{2}(1 - \eta)(-2\xi) & \frac{1}{2}(-1 + \xi^2) \\ \frac{1}{2}(1 - \eta^2) & \frac{1}{2}(1 + \xi)(-2\eta) \\ \frac{1}{2}(1 + \eta)(-2\xi) & \frac{1}{2}(1 - \xi^2) \\ \frac{1}{2}(-1 + \eta^2) & \frac{1}{2}(1 - \xi)(-2\eta) \end{bmatrix} \] (B.6)

The integral of Equation (B.2) is computed numerically as a sum of a finite number of terms by means of Gaussian quadrature:

\[ w^e = \sum_i^{n_g} g_i\, n_e(\xi^i, \eta^i)\, w(z(\xi^i, \eta^i))\, J_e(\xi^i, \eta^i) \] (B.7)

where \(\xi^i\) and \(\eta^i\) are the Gauss integration points and \(g_i\) are the integration weights. Tables B.1 and B.2 provide the Gauss points and weights for the numerical integration according to Equation (B.7) of a triangular and a quad element.

B.2 Thermal solid elements

The objective of thermal error models of machine tools is to describe the thermally induced deviations between the TCP and the workpiece. This requires an efficient coupling of the thermal and the mechanical response of the system, as explained in Section 3.4. In order to compute the body forces associated with any temperature distribution, the coupling matrix of Equation (3.65) is required. As explained in Figure 5.1, the FE discretization of the components is performed in Ansys [2]. This commercial software platform provides the possibility to extract the coupling matrix of Equation (3.65) from several multiphysics elements, such as SOLID226 and SOLID227. These are solid elements with midnodes carrying both thermal and mechanical dof. The discretization of the elasticity equations requires elements with midnodes in order to represent the structural deformation. For the thermal system, however, elements with midnodes have more nodes than necessary to represent the smooth solution of the conductivity equations. In addition, there are numerical issues with the capacity matrix $C_{th}$ defined in Equation (3.11). According to Jackson [66], the capacity matrix becomes singular after the integration of the element capacity matrix $C'_{th}$ of Equation (3.7) with 14 Gauss points. In order to avoid numerical issues with singular matrices, a diagonalized heat capacity matrix is used, following Hinton et al. [57]. The definition of the convection matrix $K'_{conv}$ of Equation (3.9) leads to further numerical issues. The numerical integration of Equation (3.9) evaluates the integral at the Gauss points. Therefore, several off-diagonal terms appear in $K'_{conv}$; the result is called the consistent convection matrix. Bruns [26] pointed out that the consistent convection matrix might lead to numerical oscillations of the dof close to the convective boundary condition. Therefore, a diagonalization of the convection matrices is used in this work to avoid these numerical issues.
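To make the diagonalization step concrete, the sketch below applies a simple row-sum lumping to a small consistent matrix. Row-sum lumping is one common diagonalization variant; it is not necessarily the exact scheme of Hinton et al. [57] used in the implementation, and the example matrix is invented.

```python
# Sketch of diagonalizing (lumping) a consistent FE matrix by row sums.
import numpy as np

def lump(M):
    """Replace a consistent FE matrix by a row-sum diagonal matrix.
    The sum of all entries (the total capacity) is preserved."""
    return np.diag(M.sum(axis=1))

# Consistent capacity matrix of a 2-node 1D element (scaled so rho*c*A*L = 6).
C_consistent = np.array([[2.0, 1.0],
                         [1.0, 2.0]])
C_lumped = lump(C_consistent)
print(C_lumped)                            # diag(3, 3): coupling terms removed
print(C_consistent.sum(), C_lumped.sum())  # both 6.0: total capacity preserved
```

The same operation applied to the consistent convection matrix removes the off-diagonal terms that cause the oscillations reported by Bruns [26], at the price of a slightly less accurate spatial distribution of the boundary term.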
Table B.1: Gauss points for the numerical integration of a triangular element, grouped by the number of points \(n_g\) of the rule

| \(n_g\) | Gauss point \((\xi^i, \eta^i)\) | Weight \(g_i\) |
|---------|----------------------------------|----------------|
| 1 | \(\frac{1}{3}, \frac{1}{3}\) | 1 |
| 3 | \(\frac{1}{6}, \frac{1}{6}\) | \(\frac{1}{3}\) |
|   | \(\frac{2}{3}, \frac{1}{6}\) | \(\frac{1}{3}\) |
|   | \(\frac{1}{6}, \frac{2}{3}\) | \(\frac{1}{3}\) |
| 4 | \(\frac{1}{3}, \frac{1}{3}\) | \(-\frac{27}{48}\) |
|   | \(\frac{1}{5}, \frac{3}{5}\) | \(\frac{25}{48}\) |
|   | \(\frac{1}{5}, \frac{1}{5}\) | \(\frac{25}{48}\) |
|   | \(\frac{3}{5}, \frac{1}{5}\) | \(\frac{25}{48}\) |
| 6 | 0.44594849091597, 0.44594849091597 | 0.22338158967801 |
|   | 0.44594849091597, 0.10810301816807 | 0.22338158967801 |
|   | 0.10810301816807, 0.44594849091597 | 0.22338158967801 |
|   | 0.09157621350977, 0.09157621350977 | 0.10995174365532 |
|   | 0.09157621350977, 0.81684757298046 | 0.10995174365532 |
|   | 0.81684757298046, 0.09157621350977 | 0.10995174365532 |
| 7 | 0.33333333333333, 0.33333333333333 | 0.22500000000000 |
|   | 0.47014206410511, 0.47014206410511 | 0.13239415278851 |
|   | 0.47014206410511, 0.05971587178977 | 0.13239415278851 |
|   | 0.05971587178977, 0.47014206410511 | 0.13239415278851 |
|   | 0.10128650732346, 0.10128650732346 | 0.12593918054483 |
|   | 0.10128650732346, 0.79742698535309 | 0.12593918054483 |
|   | 0.79742698535309, 0.10128650732346 | 0.12593918054483 |

Table B.2: Gauss points for the numerical integration of a quad element, grouped by the number of points \(n_g\); each ± entry stands for all sign combinations of the listed coordinates

| \(n_g\) | Gauss point \((\xi^i, \eta^i)\) | Weight \(g_i\) |
|---------|----------------------------------|----------------|
| 1 | 0, 0 | 4 |
| 4 | ±0.577350269189626, ±0.577350269189626 | 1 |
| 9 | ±0.774596669241483, ±0.774596669241483 | 0.308641975308642 |
|   | 0, ±0.774596669241483 | 0.493827160493827 |
|   | ±0.774596669241483, 0 | 0.493827160493827 |
|   | 0, 0 | 0.790123456790123 |
| 16 | ±0.861136311594053, ±0.861136311594053 | 0.121002993285602 |
|    | ±0.861136311594053, ±0.339981043584856 | 0.226851851851852 |
|    | ±0.339981043584856, ±0.861136311594053 | 0.226851851851852 |
|    | ±0.339981043584856, ±0.339981043584856 | 0.425293303010694 |
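As a usage illustration of the tabulated rules, the sketch below evaluates the surface integral of Equation (B.7) for a six-node triangle with the three-point rule of Table B.1. The element geometry and the flux field are invented, and the convention that the tabulated weights (which sum to one) are multiplied by the reference-triangle area 1/2 is an assumption of the example.

```python
# Sketch: Equation (B.7) for a six-node triangle, three-point rule of Table B.1.
import numpy as np

def n_t6(xi, eta):
    """Quadratic shape functions of the 6-node triangle, Equation (B.3)."""
    nu = 1.0 - xi - eta
    return np.array([nu*(2*nu - 1), xi*(2*xi - 1), eta*(2*eta - 1),
                     4*xi*nu, 4*xi*eta, 4*eta*nu])

gp = [(1/6, 1/6), (2/3, 1/6), (1/6, 2/3)]   # three-point rule of Table B.1
gw = [1/3, 1/3, 1/3]

z_e = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],      # corner nodes
                [0.5, 0.0], [0.5, 0.5], [0.0, 0.5]])     # midside nodes
w = lambda z: 1.0 + z[0]                                 # made-up flux field w(z)

w_e = np.zeros(6)
for (xi, eta), g in zip(gp, gw):
    n = n_t6(xi, eta)
    z = n @ z_e              # map the Gauss point to physical coordinates, (B.1)
    detJ = 1.0               # identity mapping for this straight-sided element
    w_e += 0.5 * g * n * w(z) * detJ   # accumulate the terms of (B.7)
print(w_e, w_e.sum())        # the sum approximates the total flux, here 2/3
```

For this unit element the quadrature reproduces the exact value \(\int_T (1 + x)\, dA = 2/3\), since the three-point rule integrates quadratics exactly.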
Additional information of the thermo-mechanical models

This appendix provides additional information about the thermo-mechanical model of the Mori Seiki NMV 5000 DCG developed in Chapter 6. Table C.1 provides the mechanical properties of the machine elements. The data are obtained from the data sheets provided by the manufacturers of the machine elements. If no data are available, the mechanical properties are taken from data of similar machine elements. Figures C.1, C.2, and C.3 show the measured temperature of the MR environment and of the cooling inlet and outlet during the rotation of the C-axis over 6 h at several rotational speeds. The temperature data are an input for the thermo-mechanical models of Figures 6.24 and 6.26. A Butterworth low-pass filter with a cutoff frequency of 0.001 rad/s removes the faster oscillations of the input data. This preprocessing of the data is required in order to ensure good performance of the adaptive time-step solver during the transient simulation.
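A minimal sketch of this preprocessing with SciPy follows, assuming a second-order filter and a 10 s sampling interval for the temperature data; only the 0.001 rad/s cutoff is fixed by the text, the rest is invented for the example.

```python
# Sketch of the input-data preprocessing: zero-phase Butterworth low-pass.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 0.1                            # assumed sampling rate: one sample every 10 s
fc = 0.001 / (2 * np.pi)            # the 0.001 rad/s cutoff expressed in Hz
b, a = butter(N=2, Wn=fc, btype="low", fs=fs)   # filter order N=2 is an assumption

t = np.arange(0.0, 6 * 3600.0, 1 / fs)          # 6 h of synthetic temperature data
rng = np.random.default_rng(0)
temp = 20.0 + 2.0 * (1.0 - np.exp(-t / 5000.0)) + 0.1 * rng.standard_normal(t.size)
temp_smooth = filtfilt(b, a, temp)  # zero-phase filtering adds no time shift
```

Zero-phase filtering (filtfilt) is a natural choice here because a phase lag in the boundary-condition inputs would shift the simulated thermal response in time.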
Table C.1: Stiffness at the mechanical links (axial/transversal/normal terms in N/m; roll/pitch/yaw terms in Nm/rad)

| Link | Axial | Transversal | Normal | Roll | Pitch | Yaw |
|-------------------------------------------|--------|-------------|--------|--------|-------|-------|
| Support | $6 \cdot 10^7$ | $6 \cdot 10^7$ | $4 \cdot 10^7$ | 0 | 0 | 0 |
| Motor linear axes | 0 | 0 | 0 | $1 \cdot 10^5$ | 0 | 0 |
| Bearing back ballscrew Y and Z | $1.62 \cdot 10^9$ | $1.62 \cdot 10^9$ | $1.62 \cdot 10^9$ | 0 | 0 | 0 |
| Bearing back ballscrew X | $1.47 \cdot 10^9$ | $1.47 \cdot 10^9$ | $1.47 \cdot 10^9$ | 0 | 0 | 0 |
| Bearing front ballscrew X, Y, and Z | $1.62 \cdot 10^9$ | $1.62 \cdot 10^9$ | $1.62 \cdot 10^9$ | 0 | 0 | 0 |
| Bearing B | $1 \cdot 10^9$ | $1 \cdot 10^9$ | $1 \cdot 10^9$ | 0 | $1 \cdot 10^8$ | $1 \cdot 10^8$ |
| Bearing C | $1 \cdot 10^9$ | $1 \cdot 10^9$ | $1 \cdot 10^9$ | 0 | $1 \cdot 10^8$ | $1 \cdot 10^8$ |
| Linear guide X and Y | 0 | $1 \cdot 10^9$ | $1 \cdot 10^9$ | $3 \cdot 10^5$ | $3 \cdot 10^5$ | $3 \cdot 10^5$ |
| Linear guide Z | 0 | $2.8 \cdot 10^8$ | $2.8 \cdot 10^8$ | $2.8 \cdot 10^8$ | $2.8 \cdot 10^8$ | $2.8 \cdot 10^8$ |
| Ballscrew X (pitch 0.02 m) | $5.4 \cdot 10^8$ | 0 | 0 | 0 | 0 | 0 |
| Ballscrew Y (pitch 0.015 m) | $4.5 \cdot 10^8$ | 0 | 0 | 0 | 0 | 0 |
| Ballscrew Z (pitch 0.015 m) | $5.4 \cdot 10^8$ | 0 | 0 | 0 | 0 | 0 |

Figure C.1: Measured temperature of the environment inside the MR, inlet of the cooling, and outlet of the cooling during the rotation of the C-axis at 600 rpm over 6 h

Figure C.2: Measured temperature of the environment inside the MR, inlet of the cooling, and outlet of the cooling during the rotation of the C-axis at 900 rpm over 6 h

Figure C.3: Measured temperature of the environment inside the MR, inlet of the cooling, and outlet of the cooling during the rotation of the C-axis with the speed profile of Figure 6.25 over 6 h

List of publications

Peer-reviewed publications in international scientific journals

P. Hernández-Becerro, J. Purtschert, J. Konvicka, C. Buesser, D. Schranz, J. Mayr, K. Wegener, Efficient thermo-mechanical model of the environmental variation error of a 5-axis machine tool, Journal of Manufacturing Science and Engineering, 2020. https://doi.org/10.1115/1.4047739

P. Hernández-Becerro, D. Spescha, K. Wegener, Model order reduction of thermo-mechanical models with parametric convective boundary conditions: focus on machine tools, Computational Mechanics, 2020. https://doi.org/10.1007/s00466-020-01926-x

J. Mayr, P. Blaser, A. Ryser, P. Hernández-Becerro, An adaptive self-learning compensation approach for thermal errors on 5-axis machine tools handling an arbitrary set of sample rates, CIRP Annals, 2018. https://doi.org/10.1016/j.cirp.2018.04.001

Peer-reviewed conference proceedings

P. Hernández-Becerro, J. Mayr, K. Wegener, Reduced thermo-mechanical model of a rotary table of a 5-axis precision machine tool, ASPE Spring Topical Meeting – Design and Control of Precision Mechatronic System, 2020, Cambridge, Massachusetts, USA

P. Hernández-Becerro, J. Mayr, K. Wegener, Efficient thermo-mechanical model of a precision 5-axis machine tool, Conference on Thermal Issues in Machine Tools, 2020, Aachen, Germany

P. Hernández-Becerro, P. Blaser, J. Mayr, K. Wegener, Design improvement of the cutting fluid supply of a large 5-axis machine tool, 33rd ASPE Annual Meeting, 2018, Las Vegas, Nevada, USA, Volume 70, Pages 53–56

P. Hernández-Becerro, J. Mayr, P. Blaser, F. Pavliček, K.
Wegener, Model Order Reduction of Thermal Models of Machine Tools with Varying Boundary Conditions, Conference on Thermal Issues in Machine Tools, 2018, Dresden, Germany

P. Hernández-Becerro, P. Blaser, J. Mayr, S. Weikert, K. Wegener, Measurement of the effect of the cutting fluid on the thermal response of a five-axis machine tool, Laser Metrology and Machine Performance XII, 2017, Renishaw Innovation Center, Wotton-under-Edge, United Kingdom

P. Blaser, C. Hauschel, R. Rüttimann, P. Hernández-Becerro, J. Mayr, K. Wegener, Thermal characterization and modelling of a gantry-type machine tool linear axis, Proceedings of the 19th International Conference of the European Society for Precision Engineering and Nanotechnology, 2019, Bilbao, Spain

J. Mayr, F. Pavliček, S. Züst, P. Blaser, P. Hernández-Becerro, S. Weikert, K. Wegener, Thermal error research, an overview, Laser Metrology and Machine Performance XII, 2017, Wotton-under-Edge, United Kingdom, ISBN 978-0-9566790-9-3, Volume 12, Pages 10–31

P. Blaser, J. Mayr, F. Pavliček, P. Hernández-Becerro, K. Wegener, Adaptive learning control for thermal error compensation on 5-axis machine tools with sudden boundary condition changes, Conference on Thermal Issues in Machine Tools, 2018, Dresden, Germany

F. Pavliček, D. Pamies, J. Mayr, S. Züst, P. Blaser, P. Hernández-Becerro, K. Wegener, Using meta models for enclosures in machine tools, Conference on Thermal Issues in Machine Tools, 2018, Dresden, Germany, ISBN 978-3-95735-085-5

P. Blaser, P. Hernández-Becerro, J. Mayr, M. Wiessner, K. Wegener, Thermal errors of a large 5-axis machine tool due to cutting fluid influences – evaluation with thermal test piece, American Society for Precision Engineering, 2017, Charlotte, USA, ISBN 978-1-887706-74-2, Volume 32, Pages 531–536

Other relevant publications

P. Hernández-Becerro, A new framework for thermo-mechanical models of machine tools, CIRP Winter Meeting, STC-P, Paris, 2019

P. Hernández-Becerro, N. Zimmermann, F. Pavliček, P. Blaser, W. Knapp, J. Mayr, K. Wegener, Learning Efficient Modeling and Compensation for Thermal Behavior of Machine Tools, MTTRF 2018 Annual Meeting, 2018, USA

L. Meier, P. Hernández-Becerro, K. Wegener, Constant Power In-process Tribometry & Simulative Investigation of Thermal Machine Tool Errors, MTTRF 2019 Annual Meeting, 2019, USA

N. Zimmermann, P. Hernández-Becerro, P. Blaser, J. Mayr, Laboratory practice energy efficient production, ETH Learning and Teaching Journal, 2018, ETH Zurich, Educational Development and Technology (LET)

List of supervised theses

The following (unpublished) theses were supervised by the author:

- Joel Purtschert, Thermo-Mechanical Model of a Precision 5-Axis Machine Tool, Master thesis, Spring semester 2019
- Philip Satz, Development of Volumetric Thermal Error Measurement Procedure for Validation of Thermo-Mechanical FEM-Models of Machine Tools, Master thesis, Spring semester 2018
- Raphael Wyssling, Uncertainty Assessment of Thermal Models of Machine Tools (in German), Bachelor thesis, Fall semester 2017
CIVIL APPELLATE

Before the Hon'ble Mr. Justice A. R. Dave

PAM PHARMACEUTICALS v. RICHARDSON VICKS INC. & ORS.*

(A) Trade and Merchandise Marks Act, 1958 (XLIII of 1958) — Sec. 62 — Civil Procedure Code, 1908 (V of 1908) — Sec. 20 — Suit praying for injunction against marketing of product bearing mark similar to trade mark of plaintiff — Question relating to territorial jurisdiction of Court — Plaintiff selling its product VICKS in Ahmedabad — Court Commissioner found defendant sold its product VICAS at Ahmedabad — Order of trial Court holding *prima facie* that City Civil Court had jurisdiction upheld.

The question with regard to jurisdiction is a mixed question of law and facts. There is an averment in the plaint that cough drops 'VICAS' manufactured by defendant No. 1 are being sold in Ahmedabad. It has also been averred in the plaint that 'VICKS' manufactured by the plaintiffs is also being sold everywhere in the country. Upon reading the said averments, *prima facie* it is clear that defendant No. 1 is selling 'VICAS' in Ahmedabad, and therefore, the City Civil Court, Ahmedabad, has jurisdiction to entertain the suit. It is pertinent to note that the Court Commissioner who was appointed by the City Civil Court had in fact found cough drops 'VICAS' being sold in Ahmedabad on 20-2-1999. This fact clearly denotes that the averment which has been made in the plaint with regard to sale of cough drops 'VICAS' in Ahmedabad is found to be correct. (Para 28)

Section 20 of the C.P.C. also provides that when cause of action has arisen at several places, the suit can be entertained at any of the places where the cause of action had arisen. In the instant case, it is not in dispute that the Court Commissioner has found that the defendants were selling their cough drops under trade name 'VICAS' in Ahmedabad. (Para 31)

In the instant case, at the interlocutory stage, the defendants have been restrained from manufacturing and selling the product in question as *prima facie* it has been found that the defendants were violating statutory rights of the plaintiffs. Still evidence has not been adduced and *prima facie* it has been found that the products of the plaintiffs and defendants are sold in Ahmedabad. In the circumstances, *prima facie* it appears that the impugned order of the trial Court has not resulted into failure of justice. (Para 32)

(B) Trade & Merchandise Marks Act, 1958 (XLIII of 1958) — Sec. 29 — Specific Relief Act, 1963 (XLVII of 1963) — Secs. 37 & 38 — Injunction against manufacturing or selling product bearing mark similar to trade mark of plaintiff — Question whether the two marks are deceptively similar has to be examined from the perspective of person of average intelligence and imperfect recollection — Defendant selling cough drops under the name VICAS in packets deceptively similar to those of plaintiff, holders of registered trade mark VICKS — Injunction granted by trial Court, confirmed.

*Decided on 24-3-2000. Appeal From Order No. 236 of 1999 against order dated 6-4-1999 passed below Notice of Motion in Civil Suit No. 854 of 1999 by City Civil Court, Ahmedabad.*

It is a well established principle that for the purpose of ascertaining whether the goods are deceptively similar, one has not to look at the goods like a meticulous or a methodical person, who is having excellent or photographic memory and who makes a comparison every time when he purchases goods. In such a case one has to look at an average person with average memory and imperfect recollection.
In the instant case, the two products with which the Court is concerned are ‘VICKS’ and ‘VICAS’. (Para 40) The Court is concerned with a person of average intelligence and imperfect recollection who is likely to err while making a decision with regard to purchase of cough drops. In the instant case, one has to see whether the product of the defendants is likely to cause confusion. The defendants might not be having any intention to deceive a customer but the product of the defendants should not be such which would even cause confusion in the mind of a buyer of average intelligence and imperfect recollection. (Para 40) It is pertinent to note that both the products were produced before the trial Court and before this Court. Upon perusal of both the products, the first impression which the learned Judge of the trial Court had was that the product manufactured by defendant No. 1 is deceptively similar to the one which is manufactured by the plaintiffs. The mode in which the words ‘VICKS’ and ‘VICAS’ have been written and the get-up in which they are being sold are quite similar. So, as to ascertain whether the products are deceptively similar, one has not to have careful examination of both the products or to compare them by keeping them side by side, but one has to be guided by common sense and general observation. If, at the first sight, both the products appear to be similar, one can very well say that there is an element of deceptive similarity in the products. (Para 44) It is very clear that a *prima facie* case has been established by the plaintiffs because the product which the defendants are selling is deceptively similar to the one which is manufactured and sold by the plaintiffs. Upon perusal of the record available to the Court, *prima facie*, it appears that the plaintiffs have acquired a very good reputation and its products under the name ‘VICKS’ are being sold not only in the country but also elsewhere. Looking to the said fact, in the Court’s opinion, balance of convenience would surely tilt in favour of the plaintiffs. In the case, where medicinal products are being sold, which are deceptively similar, harm would not only be caused to the plaintiffs but it would also be caused to innocent consumers who, as a result of confusion, might purchase a medicinal product prepared by another person which they in fact never wanted to buy. In the instant case, it has also been submitted by the learned Advocates that ingredients of both the products *i.e.* the product manufactured by the plaintiffs and defendant No. 1 are different. If a consumer having an intention to have ingredients of product A, buys product B having different ingredients, it would not be in the interest of the consumer as he would be consuming medicines which are absolutely different than the one which he wanted to consume. In the Court’s opinion, in case of medicinal products, the Court has to be more cautious and has to grant injunction where the Court feels that an innocent and unwary consumer is likely to have some confusion while identifying the product which he would like to purchase. In the matter of grant of injunction, the Court has also to see whether non-interference of the Court would result into irreparable injury to the party seeking the injunction. 
If the Court comes to a conclusion that there is no other remedy available to the party except the one with regard to injunction, the plaintiff seeking injunction should be suitably protected so that the apprehended injury may not be caused to the plaintiff. (Para 48; See Para 50)

[Ed.: For a judgment on similar lines involving VICKS and VIKAS See Raj Remedies v. Richardson Vicks, 2000 (3) GLR 2323. For a recent judgment of Supreme Court on phonetic similarity involving PIKNIK and PICNIC where injunction was refused as dissimilarities appeared clear and more striking, see S. M. Dyechem Ltd. v. Cadbury (India) Ltd., 2000 (3) GLR 2548 (SC).]

(C) VADE MECUM — Procedural aspects involving technicalities — Court has to give importance to substance and not to procedural matters. (See: Paras 37 & 38)

P. M. Diesels Ltd. v. Patel Field Marshal Industries (1), M/s. Richardson Vicks Inc. v. Vicas Pharmaceuticals (2), Koopilan's Uncen's daughter Pathumma v. Koopilan Kutty (3), M/s. Jay Industries v. M/s. Nakson Industries (4), Sangram Singh v. Election Tribunal, Kotah (5), Corn Products Refining Co. v. Shangrila Food Products Ltd. (6), Parker-Knoll Ltd. v. Knoll International Ltd. (7), Ranbaxy Laboratories v. Dua Pharmaceuticals Ltd. (8), Wander Limited v. Antox India P. Ltd. (9), N. R. Dongre v. Whirlpool Corporation (10), relied on.

R. R. Shah, for the Petitioner. K. S. Nanavati, for R. M. Chhaya, for Respondent No. 1. Rule Served for Respondent No. 3.

A. R. DAVE, J. Being aggrieved by an order dated 6-4-1999 passed below the Notice of Motion in Civil Suit No. 854 of 1999, the appellant-original defendant No. 1 has approached this Court by way of this appeal from order. For the sake of convenience, the parties to the litigation have been referred to as they have been arrayed before the trial Court. The appellant, defendant No. 1, has been aggrieved by the impugned order because, by virtue of the impugned order, during pendency of the suit, defendant No. 1 has been restrained from using mark 'VICAS' or any other mark, which is likely to infringe trade mark "VICKS" which is being used by the plaintiffs. Moreover, defendant No. 1 has also been restrained from manufacturing, selling or offering for sale, medicinal preparation and allied products using trade mark 'VICAS' or any other trade mark which might be deceptively similar to trade mark 'VICKS' of the plaintiffs.

2. The facts giving rise to the litigation, as stated by the plaintiffs in their plaint, in a nutshell, are as under:—

Plaintiff No. 1 is a corporation incorporated under the laws of the United States of America and the said plaintiff and its subsidiary companies are engaged in the business of manufacturing and marketing various medicinal products which are manufactured and sold under the trade mark 'VICKS' and plaintiff No. 1 is a proprietor of the said trade mark in India.

(1) 1998 PTC 18 (2) 1990 PTC 16 (3) AIR 1981 SC 1683 (4) 1992 PTC 94 (5) AIR 1955 SC 425 (6) AIR 1960 SC 142 (7) 1962 RPC 265 (8) AIR 1989 Delhi 44 (9) 1990 Supp. SCC 727 (10) 1996 (5) SCC 714

So far as plaintiff No. 2 is concerned, it is a subsidiary company of plaintiff No. 1, which has been incorporated under the provisions of the Companies Act, 1956 in India and it is also engaged in the business of manufacturing and marketing of medicinal products under the trade mark 'VICKS'. It is their case that plaintiff No.
2 is the originator and owner of copyright of artistic work and get up contained in label having dark and light green colour wherein the mark 'VICKS' has been written in a novel manner and cough drops manufactured by the plaintiffs are being sold under the name of 'VICKS'. Plaintiff No. 1 is using the mark 'VICKS' for the last about 100 years in respect of the medicinal preparations prepared by it and plaintiff No. 2, which is the subsidiary company of plaintiff No. 1, is manufacturing the medicinal preparations including cough drops under the trade mark 'VICKS' in India since 1971. Trade mark 'VICKS' has been registered under the provisions of the Trade and Merchandise Marks Act, 1958 (hereinafter referred to as the 'Trade Mark Act'). The said mark has been registered at Regn. No. 328355 in Class V in respect of pharmaceutical, sanitary substances, infant foods, etc. It is the case of the plaintiffs that by use of the colour scheme adopted by them for the purpose of selling cough drops manufactured by them under the trade mark 'VICKS', they have tried to distinguish their product from the products which are being manufactured by other manufacturers. The label used by the plaintiffs for the purpose of sale of cough drops under trade mark 'VICKS' has been annexed to the plaint as Exh. 2/6. It has been submitted by the plaintiffs that their product 'VICKS' has got a very good reputation in the Indian market because of the superior quality of medicinal ingredients used by them in the cough drops manufactured by them. It has been also submitted by them that for the purpose of popularising their product in the market, they had been spending enormous amounts on advertisements. It is their case that the product is being advertised throughout India including Gujarat, through the media like Doordarshan, Zee TV, Zee Cinema, Star Plus, Star Movies and other local media which are being used for advertising different products. The plaintiffs have also submitted that they had spent approximately Rs. 23 crores during 1993-98 for advertising their products 'VICKS' and as a result thereof, sale of their 'VICKS' products had increased from Rs. 45 crores per annum to Rs. 66 crores per annum from 1993-94 to 1997-98. Thus, they have mainly submitted that they are the owners of trade mark 'VICKS' which is very popular in the entire country on account of its high quality of medicinal ingredients and advertisement campaigns carried out by the plaintiffs or their agents from time to time and the word 'VICKS' has been treated as one of the synonyms for cough drops.

3. The plaintiffs had approached the City Civil Court, Ahmedabad, by filing Regular Civil Suit No. 854 of 1999 because, somewhere in January 1999, they had learnt that defendant No. 2 was offering for sale cough drops under trade mark 'VICAS' written in a manner similar to the manner in which their trade mark 'VICKS' was written. It has been submitted by them that defendant No. 1 is manufacturing cough drops under mark 'VICAS' and the said mark is deceptively similar to the plaintiffs' trade mark 'VICKS'. It has been alleged in the plaint that because of the similarity in the mark and get up used by defendant No. 1, the defendants are trying to see that the product 'VICAS' is passed off to unwary customers as 'VICKS' and thereby the defendants are selling goods inferior in quality to the customers with a dishonest intention which would amount to infringement of the trade mark of the plaintiffs.
Thus, it has been alleged by the plaintiffs that the defendants are trying to pass off the inferior quality of goods for the superior type of goods manufactured by the plaintiffs under the trade mark 'VICKS' and they are also violating the statutory rights of the plaintiffs under the Trade Mark Act as well as the Copyright Act, 1957. In the circumstances stated hereinabove, the suit has been filed by the plaintiffs with a prayer for a declaration that the defendants are not entitled to use the trade mark 'VICAS' and/or any other mark similar to the plaintiffs' trade mark 'VICKS' and any other artistic work similar to the artistic work of the plaintiffs and the defendants and their agents, servants etc. be permanently restrained from using the mark 'VICAS' or any other similar mark to the plaintiffs' trade mark 'VICKS'. It is the plaintiffs' case that defendant No. 1 is manufacturing cough drops under the mark 'VICAS' and defendant No. 2 is selling the same in the city of Ahmedabad. An averment has been made in the plaint to the effect that the product in question which is being manufactured by defendant No. 1 at Wadhwan is being sold in Ahmedabad by defendant No. 2. So as to substantiate the submissions and averments made in the plaint, the plaintiffs had prayed for appointment of a Court Commissioner so that the Court Commissioner can ascertain whether the averments and allegations made in the plaint by the plaintiffs were correct. Ultimately, the Court Commissioner appointed by the trial Court had visited the shop of defendant No. 2 on 20-2-1999 around 12-30 noon and had found that defendant No. 2 was selling cough drops named 'VICAS' manufactured by defendant No. 1. He reported to the trial Court that he had found 11 jars, each jar containing 300 sachets of cough drops under mark 'VICAS' at the shop of defendant No. 2.

In reply to the Notice of Motion filed by the plaintiffs, defendant No. 1 has filed its reply denying all the allegations and stating that defendant No. 1 is manufacturing cough drops under mark 'VICAS' since October 1998 and it had sold cough drops worth more than Rs. 17 lacs and it is having a very effective sales network of pharmaceutical preparation named 'VICAS' under distinctive label, colour scheme and get up and it has been also submitted in the reply that the suit label 'VICAS' is not the property of plaintiff No. 1 and plaintiff No. 2 is not the proprietor of the label or trade mark 'VICKS' and plaintiff No. 2 is also not using the trade mark 'VICKS'. It has been submitted that no action for infringement would lie against defendant No. 1. It has been specifically submitted that the registration of the plaintiffs' trade mark under No. 328355 dated 30-8-1977 was in Class V in the name of Richardson-Merrell, Inc. (a corporation organised and existing under the law of the State of Delaware, U.S.A.) and the plaintiffs had suppressed certain material facts with regard to the ownership of the said label and the licence agreement and validity of the agreement which was executed between plaintiff No. 1 and plaintiff No. 2. Moreover, it has been submitted in the reply that plaintiff No. 1 is not using the label 'VICKS' whereas plaintiff No. 2 is not entitled to use the label 'VICKS' and no sort of relationship between plaintiff No. 1 and plaintiff No. 2 was shown by the plaintiffs as required under the law. It is also the case of defendant No.
1 is having necessary licence to manufacture medicinal product in question under the mark ‘VICAS’ in a packing of a particular colour scheme and get up since 1-10-1998 and it is selling the cough drops since 1-10-1998 and by the time the suit was filed, the sale had exceeded Rs. 17 lacs. Moreover, defendant No. 1 is not selling the product in question in the city of Ahmedabad and it never sold the product to defendant No. 2, and therefore, the City Civil Court, Ahmedabad had no jurisdiction to entertain the suit. Moreover, it is also contended that the application for injunction filed by the plaintiffs was not legal and was contrary to the provisions of the Trade Mark Act and Copyright Act, and therefore, the said application deserved to be dismissed. Moreover, even on the ground of misjoinder of causes, the suit should have been dismissed. Several other contentions have been raised in the written statement but mainly the contention of defendant No. 1 is with regard to the jurisdiction of the City Civil Court, Ahmedabad. It has been mainly submitted that the City Civil Court had no jurisdiction as no cause of action had arisen in the city of Ahmedabad before the suit was filed. For the first time and that too after filing the suit, the cough drops under trade name ‘VICAS’ were sold on 20-2-1999 by defendant No. 2 whereas the suit was filed on 18-2-1999. Thus, prior to 20-2-1999, the product in question was not sold by any of the defendants in the city of Ahmedabad, and therefore, the City Civil Court, Ahmedabad, had no jurisdiction to try the suit filed by the plaintiffs. It is also the case of defendant No. 1 that material document to show how the copyright was obtained by the plaintiffs was not shown to the Court, and therefore, action under the provisions of the Copyright Act was not maintainable. According to defendant No. 1, both the marks are not similar and because of the distinctive features they have, it is not possible to pass off product of defendant No. 1 as the product of the plaintiffs. The averments made by the plaintiffs with regard to superior quality of their product are also not admitted by defendant No. 1. It is also the case of defendant No. 1 that the colour scheme and get up of the sachet of the cough drops in which the cough drops of the plaintiffs are being sold have become common to the trade and number of persons manufacturing cough drops are using either same or similar get up and colour scheme on the sachet used by them for the purpose of selling their cough drops. Moreover, the plaintiffs had made several changes in the get up of the sachet. In the circumstances, the plaintiffs have no exclusive right to use the colour scheme and get up for their trade and business. Defendant No. 1 has also raised an objection with regard to capacity of the signatories to the plaint and the injunction application. The contention of defendant No. 1 is that the signatories to the plaint and the injunction application were not authorised by the plaintiffs to file the suit or the injunction application, and therefore, also the suit is not maintainable. 4. After hearing the concerned parties, the trial Court has granted injunction in favour of the plaintiffs whereby defendant No. 1 has been restrained from manufacturing or selling its product “VICAS” by an order dated 6-4-1999 and being aggrieved by the said order, defendant No. 1 has approached this Court by way of the present Appeal from Order. The trial Court has *prima facie* come to the conclusion that plaintiff No. 
1 is the owner of the trade mark ‘VICKS’ and plaintiff No. 2, being a subsidiary company of plaintiff No. 1, engaged in the business of manufacturing and marketing the medicinal product in question under the trade mark ‘VICKS’, the plaintiffs have a right to use the mark ‘VICKS’ on the cough drops manufactured by plaintiff No. 2. With regard to the jurisdiction of the City Civil Court at Ahmedabad, the trial Court has *prima facie* come to the conclusion that the City Civil Court, Ahmedabad has jurisdiction to try the suit under the provisions of Sec. 105 of the Trade Mark Act because on the basis of the report filed by the Court Commissioner being Mark A/6, it was found that the cough drops manufactured by defendant No. 1 under the name ‘VICAS’ were being sold in the city of Ahmedabad. The trial Court has also *prima facie* come to the conclusion while passing the interlocutory order that even under the provisions of Sec. 62 of the Copyright Act, the Court has jurisdiction to try the suit because the said Section gives a discretion to the plaintiffs with regard to the place where the defendants can be sued and if the plaintiffs select Ahmedabad, the place where one of the plaintiffs is selling their product, namely, ‘VICKS’, it cannot be said that the City Civil Court has no jurisdiction. The trial Court has observed that looking to the special provisions incorporated in Sec. 62 of the Copyright Act, it is not obligatory on the part of the plaintiffs to file a suit where the defendant resides. As per the provisions of Sec. 62 of the Copyright Act, a suit can be filed even where the plaintiff is doing his business. In view of the fact that as the product of the plaintiffs is also being sold in Ahmedabad, the trial Court has come to the conclusion that the plaintiffs have a right to file a suit in the City Civil Court at Ahmedabad. The trial Court has *prima facie* found that the get up of the sachet used by defendant No. 1 and the sachet used by the plaintiffs are quite similar. Looking to the facts of the case, the trial Court has *prima facie* found that both the marks, *i.e.*, ‘VICKS’ and ‘VICAS’ are written in such a manner that in normal circumstances an unwary customer would not be in a position to appreciate the difference between the two sachets, and therefore, on account of the phonetic and visual resemblance, the mark ‘VICAS’ used by defendant No. 1 is deceptively similar to the mark ‘VICKS’ used by the plaintiffs. The trial Court has also *prima facie* found that the plaintiffs were in prior use of the copyright of the artistic work used on the sachet of the cough drops and looking to the facts stated hereinabove and the reasons stated in the impugned interlocutory order, the trial Court has *prima facie* come to the conclusion at the interlocutory stage that defendant No. 1 has violated rights of the plaintiffs emanating from the provisions of the Trade Mark Act and Copyright Act and as the defendants are trying to pass off their goods as goods of the plaintiffs, the trial Court, by an interlocutory order dated 6-4-1999, has restrained the defendants from manufacturing or selling or otherwise dealing with the product under mark ‘VICAS’. 5. I have heard learned Advocate Shri R. R. Shah appearing for the appellant-original defendant No. 1 and Sr. Advocate Shri K. S. Nanavati appearing for the respondent Nos. 1 and 2 - original plaintiffs. Though served, nobody has appeared for respondent No. 3-original defendant No. 2. 6. Learned Advocate Shri R. R. 
Shah appearing for the appellant has vehemently submitted that the impugned order passed by the trial Court is not only unjust and improper but is also illegal for the reason that the trial Court had no jurisdiction to entertain the suit. The sum and substance of the lengthy arguments advanced by the learned Advocate is that the trial Court has not looked into the fact that no cause of action had arisen at the time when the suit was filed. According to him, the suit was filed on 18-2-1999 and prior thereto the cough drops manufactured by defendant No. 1 under name ‘VICAS’ had not been sold in the city of Ahmedabad. According to him, for the first time, the Court Commissioner found on 20-2-1999 that defendant No. 2 was in possession of the cough drops under mark ‘VICAS’ manufactured by defendant No. 1. As per his submission, the cause of action must precede the filing of the suit, and therefore, it ought to have been established that before 18-2-1999 the defendants were manufacturing or selling cough drops under mark ‘VICAS’ in a particular get up in the city of Ahmedabad. 7. Moreover, it has been submitted by him that the suit is not maintainable on the ground of joinder of several causes of action. It has been submitted by him that one suit for different causes of action arising under different Acts is not maintainable, and therefore, the suit ought not to have been entertained. According to him, different suits ought to have been filed for ventilating grievances under the provisions of the Trade Mark Act, Copyright Act and for an action for passing off. 8. The learned Advocate has also advanced several technical objections pertaining to the procedural aspects. It has been submitted by the learned Advocate that though there are two plaintiffs, the plaint was signed only by one person, namely, Shri Deepak Acharya. As the plaint was signed by only one person *i.e.* for only one of the two plaintiffs, an application dated 30-3-1999 was submitted by the plaintiffs praying for a permission to the effect that Shri Deepak Acharya who had signed the plaint should be permitted to make necessary amendment in the plaint by making an endorsement that he was signing the plaint on behalf of both the plaintiffs. The said application was granted and in pursuance of the said order, necessary endorsement in the plaint was made. Though the trial Court had granted permission only for making an amendment in the plaint, Shri Acharya had also made such an endorsement on the injunction application. Similarly, Shri Deepak Acharya had also made such an endorsement on the *vakalatnama* at a later point of time. According to the learned Advocate, the endorsements made by Shri Acharya on the injunction application and his signing the *vakalatnama* without obtaining any permission from the trial court was improper, and therefore, the injunction application should have been considered as defective, and therefore, no order could have been passed on the said injunction application and as the trial Court had passed an order below the injunction application, the order passed by the trial Court on the injunction application dated 6-4-1999 is bad in law, and therefore, it should be quashed and set aside. 9. The learned Advocate has also submitted that there is no infringement of registered trade mark ‘VICKS’ for the reason that ‘VICKS’ is a very common name in the U.S.A. and other European countries, and as it is not an invented word, no proprietary right can be claimed in respect of the said term by the plaintiffs. 
Moreover, according to him, the words ‘VICKS’ and ‘VICAS’ are absolutely different, having different meanings and in normal circumstances, no person will be confused or deceived on account of dissimilarity between the said two words, and therefore, there cannot be any question with regard to infringement of the right of plaintiff No. 1. 10. It has also been submitted by him that on account of difference in name, get-up and colour scheme between both the products namely, product manufactured by the plaintiffs and defendant No. 1, there cannot be any question with regard to passing off. No proof with regard to deception or confusion had been produced by the plaintiffs before the trial Court to show that there was any case of deception or confusion, and therefore, also it cannot be said that defendant No. 1 was trying to pass off his goods as if they were the goods of the plaintiffs. 11. It has been further submitted by the learned Advocate that there was no breach of any provision of the Copyright Act especially in view of the fact that the agreement between plaintiff No. 1 and plaintiff No. 2 with regard to permitting plaintiff No. 2 to use the colour scheme and get-up of the sachet was neither registered nor placed on record and in the circumstances, the Court cannot take cognizance of the fact that plaintiff No. 1 had permitted plaintiff No. 2 to use the mark ‘VICKS’ with a particular colour scheme and get up. Moreover, for the purpose of establishing copyright, the plaintiffs ought to have placed on record the original work in respect of which the copyright was obtained by plaintiff No. 1. In the instant case, according to the learned Counsel, a mechanically printed sachet was placed on record and such a sachet cannot be used to show that plaintiff No. 1 had any copyright in respect of the get up and colour scheme of the sachet which is used by the plaintiffs. 12. According to the learned Advocate, there was no *prima facie* case for granting injunction in favour of the plaintiffs because the plaintiffs had not established breach of any of the provisions of the Copyright Act or the Trade Mark Act. According to him, there was no passing off. In the circumstances, there was no *prima facie* case in favour of the plaintiffs so as to interfere in the matter at an interlocutory stage by restraining defendant No. 1 from manufacturing or selling its product ‘VICAS’. 13. It has been also submitted by him that no irreparable loss would be caused to the plaintiffs if defendant No. 1 is not restrained from manufacturing or selling its product under the mark ‘VICAS’ for the reason that in the event of the plaintiffs succeeding in the suit, they can be adequately compensated in terms of money because it was possible for the trial court to direct the litigants to produce the details about their sales, profits etc. in respect of the product in question. Such facts and figures could have rendered sufficient help to the trial Court for determining the amount of compensation payable to the concerned party at the end of the trial. Defendant No. 1 had also shown its willingness to render accounts to the trial Court so as to facilitate the trial court in awarding the amount of compensation which could have been awarded to the plaintiffs in the event of their succeeding in the suit. For the reasons stated hereinabove, it has also been submitted by the learned Advocate that the balance of convenience was not in favour of the plaintiffs, and therefore, defendant No. 
1 could not have been prevented from manufacturing or selling their product under the mark ‘VICAS’. 14. On the other hand, Sr. Advocate Shri Nanavati appearing for the plaintiffs-respondent Nos. 2 and 3, has supported the impugned order passed by the trial Court whereby the defendants have been restrained from manufacturing and selling the said product under the mark ‘VICAS’. He too has relied upon several judgments delivered by different High Courts and the Hon’ble Supreme Court. 15. The sum and substance of the arguments advanced by Sr. Advocate Shri Nanavati in support of the case of the plaintiffs is that defendant No. 1 has violated the rights given to the plaintiffs not only by the statutes but also by the common law, because by manufacturing and selling their product under the mark ‘VICAS’ with a colour scheme and get-up which were in prior use by the plaintiffs, defendant No. 1 was trying to pass off its product as the product of the plaintiffs. It has been submitted by him that the trial Court was justified in granting the injunction in view of the fact that the get-up of the sachets used by both manufacturers and the words ‘VICAS’ and ‘VICKS’ are so similar that ‘VICAS’ cough drops can easily be passed off as ‘VICKS’. 16. So far as the aspect of jurisdiction is concerned, it has been submitted by him that, on the basis of the averments made in the plaint, the trial Court has to consider whether the Court has jurisdiction to entertain the plaint. At the time of considering the question regarding jurisdiction at the interlocutory stage, the Court need not look at the written statement or documents which are not referred to in the plaint. Only on the basis of the plaint is the question with regard to jurisdiction to be decided at the initial stage and, as submitted by him, the plaintiffs had made out a case in the plaint to the effect that the City Civil Court, Ahmedabad had jurisdiction because ‘VICAS’ was being sold in Ahmedabad, and the said fact was later on established by the report of the Court Commissioner. 17. It has been further submitted by him that, so far as the jurisdiction is concerned, the law is to the effect that even if the Court exercises jurisdiction not vested in it, the order passed by the Court would not be illegal if there is no failure of justice. He has thus submitted that, in the instant case, assuming without admitting that the City Civil Court, Ahmedabad has no jurisdiction, the defendants have not established that there was a failure of justice, and therefore, the exercise of jurisdiction by the trial Court was absolutely justified. 18. It has also been submitted by him that as per the provisions of Sec. 20 of the C.P.C., as part of the cause of action has arisen in the city of Ahmedabad, the City Civil Court at Ahmedabad has jurisdiction. He has submitted that there is an averment in the plaint that the defendants are selling cough drops under the trade name ‘VICAS’ with a get-up similar to that of ‘VICKS’ manufactured by the plaintiffs in Ahmedabad, and the said fact was ultimately found to be correct when the Court Commissioner found that defendant No. 2 was selling cough drops under the trade name ‘VICAS’ in the city of Ahmedabad. Thus, it has been submitted by him that as per the provisions of Sec. 20 of the C.P.C., part of the cause of action has arisen in Ahmedabad by the sale of cough drops in Ahmedabad, and therefore, the Court has jurisdiction to entertain the suit. 19. With regard to jurisdiction, it has been further submitted by Sr.
Advocate Shri Nanavati that the question of jurisdiction is a mixed question of fact and law. The fact of cough drops ‘VICAS’ being sold in the city of Ahmedabad has already been established by virtue of the report submitted by the Court Commissioner, and therefore, at this stage it cannot be said that the trial Court has no jurisdiction to entertain the suit. 20. With regard to the technical objections regarding the signing of the *vakalatnama* and the application praying for an injunction, it has been submitted by Sr. Advocate Shri K. S. Nanavati that the said objections should be ignored at this stage because such procedural defects, if any, should not come in the way of the Court in the process of doing justice. Regarding the maintainability of the suit, it has been submitted by him that it was not necessary for the plaintiffs to file separate suits under different statutes as the subject-matter of the suit and the parties to the suit are the same. According to him, the suit is maintainable in the form in which it has been filed. 21. It has been submitted by Sr. Advocate Shri Nanavati that the stage of evidence has not yet come, and therefore, it cannot be said that there would not be any evidence with regard to the sale of cough drops under the trade name ‘VICAS’ prior to the filing of the suit. Evidence is yet to be led. Moreover, he has also submitted that a *quia timet* action is also maintainable in case of infringement of a registered trade mark. It has been submitted by him that there is a statement in the plaint to the effect that the plaintiffs are doing business in the entire country, and therefore, it has been impliedly stated that the cough drops under the trade name ‘VICKS’ are also being sold in Ahmedabad by the plaintiffs. Thus, Sr. Advocate Shri Nanavati has submitted that there is an averment to the effect that cough drops ‘VICKS’ are being sold in Ahmedabad and, till some evidence is led to the contrary, it cannot be concluded that cough drops ‘VICKS’ are not being sold in Ahmedabad, and it cannot be said that the trial Court has no jurisdiction. 22. With regard to the maintainability of the suit by plaintiff No. 2, it has been submitted by Sr. Advocate Shri Nanavati that plaintiff No. 2 is a licensee and plaintiff No. 1 is the registered owner or proprietor of the mark, and therefore, it cannot be said that the suit has not been filed by the registered proprietor of the mark ‘VICKS’. This is in reply to the submission made by learned Advocate Shri R. R. Shah that the suit was not maintainable in view of the fact that plaintiff No. 2 is only a licensee. 23. Sr. Advocate Shri K. S. Nanavati has relied upon the observations made by the trial Court with regard to the similarity in the get-up of both the products, *i.e.* ‘VICKS’ and ‘VICAS’. It has been submitted by him that the defendants are trying to pass off their inferior quality goods as goods of the plaintiffs. He has shown the sachets in question to substantiate his submission that the get-up of both the sachets is quite similar. He has also tried to show that phonetically both names, ‘VICKS’ and ‘VICAS’, are also similar, and therefore, the product manufactured by defendant No. 1 is deceptively similar to the product of the plaintiffs. 24. I have heard learned Advocate Shri R. R. Shah and Sr. Advocate Shri K. S. Nanavati and have also gone through a catena of judgments cited by them.
Looking to the fact that several judgments have been cited, I do not think it necessary to refer to each and every judgment, but I shall be referring only to those judgments which are of vital importance for the purpose of arriving at the final conclusion in this appeal. 25. Before dealing with the submissions made by the learned Advocates, I must note that this Court is conscious of the fact that the present proceedings have been initiated at an interlocutory stage. The evidence has still not been adduced. In this set of circumstances, the question is as to what extent this Court should interfere with the interlocutory order passed by the trial Court. Moreover, making observations which might not be really warranted in this appeal might adversely affect the parties to the litigation at the time when the suit is finally decided, and therefore, I would like to restrain myself from making any such observation which might cause some prejudice to any of the litigants in the suit. This Court is also conscious of the fact that much delay should not be caused in the final disposal of the suit where one of the parties has been restrained from carrying on its business activities, but this Court cannot be oblivious of the fact that if a person has a right to do the business of manufacturing or selling, the said person cannot trade in a manner which would earn him profits from the goodwill, labour and hard work put in by another person. 26. It is also pertinent to note that the questions which are arising in the suit are with regard to violation of the trade mark and copyright of the plaintiffs. The matter also pertains to passing off. Here this Court is concerned with a medicinal product. One should not forget the fact that the rights given under the Trade Mark Act and the Copyright Act are not only for the protection of the rights of the registered owner of a copyright or a trade mark, but these rights are also in the interest of the general public so that they may not be misguided. A person desirous of purchasing a particular product manufactured by a particular person should not be misguided or cheated by another manufacturer manufacturing a similar product. So, in addition to the protection of the rights of the dealers or manufacturers, the legislature would also like to protect an unwary and normal buyer. 27. The main objection which learned Advocate Shri R. R. Shah has raised is with regard to the jurisdiction of the City Civil Court, Ahmedabad. He has submitted that the City Civil Court, Ahmedabad, has no jurisdiction to entertain the suit, and therefore, the interlocutory order passed on the Notice of Motion is bad in law. He has cited several authorities to substantiate his submission. On the other hand, Sr. Advocate Shri K. S. Nanavati has submitted that the City Civil Court, Ahmedabad has jurisdiction to entertain the suit and the suit has been rightly entertained at this stage. It is his submission that the question of jurisdiction is a mixed question of law and facts. At the initial point of time, the Court has only to look at the averments made in the plaint and the documents annexed to the plaint to determine whether the Court entertaining the suit has jurisdiction. 28. The question with regard to jurisdiction is a mixed question of law and facts. There is an averment in the plaint that cough drops ‘VICAS’ manufactured by defendant No. 1 are being sold in Ahmedabad. It has also been averred in the plaint that ‘VICKS’ manufactured by the plaintiffs is also being sold everywhere in the country.
Upon reading the said averments, *prima facie* it is clear that defendant No. 1 is selling ‘VICAS’ in Ahmedabad, and therefore, the City Civil Court, Ahmedabad, has jurisdiction to entertain the suit. It is pertinent to note that the Court Commissioner who was appointed by the City Civil Court had in fact found cough drops ‘VICAS’ being sold in Ahmedabad on 20-2-1999. This fact clearly denotes that the averment which has been made in the plaint with regard to the sale of cough drops ‘VICAS’ in Ahmedabad is found to be correct. 29. In the instant case, the question with regard to jurisdiction cannot be gone into at this stage because the said question can be decided only after evidence is recorded. At this stage, the Court has only to see whether there is an averment with regard to jurisdiction in the plaint and in the related documents. Such a view has been taken by several High Courts and, in the circumstances, I do not desire to reproduce all the citations. One such view has also been taken in the case of *P. M. Diesels Ltd. v. Patel Field Marshal Industries*, 198 PTC 18, which is reproduced hereinbelow:- “The jurisdiction of a Court does not depend upon the defence taken by a defendant and it is the allegations made in the plaint which decide the forum. The Court, while considering an application for grant of temporary injunction can, however, go into the question whether *prima facie*, it has jurisdiction or not and for the said purpose not only the pleadings but the affidavits, documents and other material on record can be examined. Therefore, for the purposes of forming a *prima facie* opinion the Court can travel beyond what is averred in the plaint.” 30. A similar question had arisen in the case of *M/s. Richardson Vicks Inc. & Anr. v. Vicas Pharmaceuticals*, 1990 PTC 16, wherein it was held that as there was an averment in the plaint that the defendant’s goods were being sold in Delhi and as there was infringement of copyright, *prima facie* it could not be said that the Court entertaining the suit in Delhi had no jurisdiction. 31. It is also pertinent to note that Sec. 20 of the C.P.C. also provides that when the cause of action has arisen at several places, the suit can be entertained at any of the places where the cause of action had arisen. In the instant case, it is not in dispute that the Court Commissioner has found that the defendants were selling their cough drops under the trade name ‘VICAS’ in Ahmedabad. 32. It is also pertinent to note that, so far as the question of jurisdiction is concerned, the Hon’ble Supreme Court has held in the case of *Koopilan Uncen’s daughter Pathumma & Ors. v. Koopilan Kutty*, AIR 1981 SC 1683, that, subject to the fulfilment of certain other conditions, even if a Court has exercised jurisdiction not vested in it, the order passed by the Court should not be disturbed unless it is shown that the order had resulted in a failure of justice. In the instant case, at the interlocutory stage, the defendants have been restrained from manufacturing and selling the product in question as *prima facie* it has been found that the defendants were violating the statutory rights of the plaintiffs. Evidence has still not been adduced, and *prima facie* it has been found that the products of the plaintiffs and the defendants are sold in Ahmedabad. In the circumstances, *prima facie* it appears that the impugned order of the trial Court has not resulted in a failure of justice. 33.
Thus, for the reasons stated hereinabove, I come to the conclusion that at this stage it would be too early to say that the trial Court has no jurisdiction. Needless to say that after weighing the evidence and after hearing the concerned parties, the trial Court can come to a different conclusion at a later point of time, but at this stage, in my opinion, it cannot be said that the trial Court has no jurisdiction. 34. Another submission of learned Advocate Shri R. R. Shah is with regard to the non-maintainability of the suit because of joinder of several causes of action. It has been submitted by him that the suit is based on several different actions under the provisions of the Copyright Act and the Trade Mark Act and it is also based on the action of passing off. The question which then arises is whether a single suit is maintainable. 35. The normal principle is that multiplicity of litigation should be avoided by the litigants, and therefore, all causes of action which pertain to each other should be joined together in one civil suit. Even the C.P.C. provides for it. Order II Rule 3 of the C.P.C. clearly provides that a plaintiff may unite in the same suit several causes of action against the same defendant. A similar issue had arisen in the case of *M/s. Jay Industries v. M/s. Nakson Inds.*, 1992 PTC 94. In the said case, violation of the copyright and trade mark had been alleged in the plaint. A plea similar to that raised by learned Advocate Shri R. R. Shah was raised in the said suit. After discussing the legal provisions, it has been observed by the Division Bench of the Delhi High Court consisting of B. N. Kirpal, J. (as he then was) and Ms. Santosh Duggal, J. as under:- “In the instant case, there is one plaintiff and one defendant. The two different causes of action in effect pertain to the same transaction. The allegation of the plaintiff is that the defendant is selling goods by mislabelling them and by infringing the trade mark and copyright of the plaintiff. The sale is alleged to be made in cartons similar to the ones in which the plaintiff had a copyright and it is further alleged that those cartons contain the trade mark which is registered in the plaintiff’s name. A single transaction of sale by the defendant, in effect, results in the infringement of both the trade mark and copyright of the plaintiff.” 36. Looking to the above-referred legal position, in my opinion, in the instant case also, it cannot be said that the suit is vitiated on the ground of misjoinder of causes of action. So as to avoid multiplicity of litigation and proceedings, which result in delay and a burden on the Courts and the litigants, in my opinion, it is advisable to file a common suit as per the provisions of Order II Rule 3 of the C.P.C. The said provision clearly contemplates joinder of causes of action. Thus, the argument with regard to misjoinder of causes of action does not appear to be just and proper. In the circumstances, I hold that the suit is not vitiated on the ground of misjoinder of causes of action. 37. Learned Advocate Shri Shah has made several submissions with regard to the signing of the plaint, the carrying out of the amendment and the signing of certain documents only by one of the plaintiffs. I consider these arguments and submissions to be of a super-technical nature.
It is a well settled legal position that, as far as possible, no proceeding in a Court of law should be allowed to be defeated on mere technicalities, because all rules of procedure are intended to advance justice and not to defeat it. The Hon’ble Supreme Court has observed in the case of *Sangram Singh v. Election Tribunal, Kotah*, AIR 1955 SC 425 that: “Now a code of procedure must be regarded as such. It is ‘procedure’, something designed to facilitate justice and further its ends; not a penal enactment for punishment and penalties; not a thing designed to trip people up. …” 38. The above-referred well established legal position clearly reveals that the Court has to give more importance to the substance and not to the procedural law while administering justice. Signing here or there, with or without the permission of the Court, in the matter of amending the plaint or in the matter of signing an application, are all procedural aspects. Everywhere, at least one of the plaintiffs has signed. The question with regard to the legal right of the person signing the plaint on behalf of the plaintiffs cannot be entertained at such an interlocutory stage. It is not in dispute that the signatory to the plaint was authorised by the plaintiffs to file the suit. The objection is to the effect that the fine details to be mentioned in the plaint were not stated in the resolution whereby the signatory to the plaint, namely Mr. Deepak Acharya, was empowered to file the suit. 39. It appears that the defendants want to place much reliance on technicalities. One has to look at the substance of the case rather than going into such super-technical details at an interlocutory stage. In the circumstances, I do not think it proper to entertain the objections with regard to the signatures of the plaintiffs etc. at this interlocutory stage. 40. Having considered the fact that the Court has jurisdiction and the suit is maintainable, one has to look at the sum and substance of the allegations made in the plaint. The grievance of the plaintiffs, in a nutshell, is that the rights of the plaintiffs under the Trade Mark Act and the Copyright Act have been violated and the defendants are passing off their goods, *i.e.* their cough drops named ‘VICAS’, as ‘VICKS’. I am conscious of the fact that the parties have still not led the evidence. *Prima facie*, one has to see whether the goods manufactured and sold by the defendants are deceptively similar to the goods manufactured and sold by the plaintiffs. The law on the subject has been settled since long, and merely by looking at some of the judgments one can find out whether the goods manufactured by the defendants can be said to be deceptively similar to those of the plaintiffs. It is a well established principle that, for the purpose of ascertaining whether the goods are deceptively similar, one is not to look at the goods like a meticulous or a methodical person who has an excellent or photographic memory and who makes a comparison every time he purchases goods. In such a case, one has to look at an average person with average memory and imperfect recollection. In the instant case, the two products with which the Court is concerned are ‘VICKS’ and ‘VICAS’. It has been submitted by learned Advocate Shri R. R. Shah for the defendants that phonetically both words are different. It is true that the spellings of both words are different and that phonetically also there is a difference between the two.
The Hon’ble Supreme Court had an occasion to determine such an issue in the case of *Corn Products Refining Company v. Shangrila Food Products Ltd.*, AIR 1960 SC 142. Looking to the law laid down by the Hon’ble Supreme Court in the case of *Corn Products Refining Co.* (supra) and the observations made by the Hon’ble Supreme Court, one can clearly say that the phonetic difference which learned Advocate Shri R. R. Shah is referring to is not of much importance, as we are concerned with a person of average intelligence and imperfect recollection who is likely to err while making a decision with regard to the purchase of cough drops. In the instant case, one has to see whether the product of the defendants is likely to cause confusion. The defendants might not have any intention to deceive a customer, but the product of the defendants should not be such as would even cause confusion in the mind of a buyer of average intelligence and imperfect recollection. In the case of *Parker-Knoll Ltd. v. Knoll International Ltd.*, 1962 RPC 265, Lord Denning has explained the meaning of the words “to cause confusion” in a very succinct manner as under:- “Secondly, “to deceive” is one thing. To “cause confusion” is another. The difference is this: When you deceive a man, you tell him a lie. You make a false representation to him and thereby cause him to believe a thing to be true which is false. You may not do it knowingly, or intentionally, but still you do it, and so you deceive him. But you may cause confusion without telling him a lie at all, and without making any false representation to him. You may indeed tell him the truth, the whole truth and nothing but the truth, but still you may cause confusion in his mind, not by any fault of yours, but because he has not the knowledge or ability to distinguish it from the other pieces of truth known to him or because he may not even take the trouble to do so.” 41. In the instant case, upon perusal of both the products, one may get confused, as the names of the products and the get-up under which they are being sold are deceptively similar and they would surely cause confusion. As the Hon’ble Supreme Court has observed in the case of *Corn Products Refining Co.* (supra), one has to look at the first impression which a person would have upon seeing the products. Upon the first impression, as observed by the trial Court, a customer is likely to be confused on account of the phonetic similarity and the similarity in the get-up of both the products. 42. There is one more important thing in this case. The product is a medicinal product. Though it is a medicinal product, it is not covered under the provisions of the Drugs and Cosmetics Act. It can be sold over the counter without any prescription. In my opinion, when the Court is concerned with any medicinal product, the Court has to be more cautious, for the reason that an average person with average intelligence and imperfect recollection should not be misguided and should not, due to some mistake or due to inadvertence, purchase another product. A similar question had arisen with regard to ‘calmpose’ and ‘calmprose’ in the case of *Ranbaxy Laboratories v. Dua Pharmaceutical Ltd.*, AIR 1989 Delhi 44. The drug which was the subject-matter of the said litigation was a scheduled drug and it was not open to a consumer to get the drug without the prescription of a doctor.
It was the case of the defendant in that case that, as the drug was a scheduled drug, the chances of a mistake being made or of confusion arising were not there. Dealing with the said argument, the Court had observed as under: “…It is true that the said drugs are supposed to be sold on doctor’s prescription, but it is not unknown that the same are also available across the counters in the shops of various chemists. It is also not unknown that the chemists who may not have ‘CALMPOSE’ may pass off the medicine ‘CALMPROSE’ to an unwary purchaser as the medicine prepared by the plaintiff. *The test to be adopted is not the knowledge of the doctor, who is giving the prescription. The test to be adopted is whether the unwary customer who goes to purchase the medicine can make a mistake.*” (Emphasis supplied). 43. Again, in the said case also, the Court had come to the conclusion that one has to look at an unwary customer and, even if the drug is a scheduled drug, possibilities of confusion cannot be ruled out. In the light of the observations made in the judgment in the case of *Ranbaxy Laboratories Ltd.* (supra), the case of the plaintiffs becomes stronger for the reason that the sachet of cough drops with which we are concerned at present is not a scheduled drug and any person can buy it over the counter without the prescription of a doctor, and therefore, the chances of confusion arising in the mind of an unwary customer are substantially greater. 44. It is pertinent to note that both the products were produced before the trial Court and before this Court. Upon perusal of both the products, the first impression which the learned Judge of the trial Court had was that the product manufactured by defendant No. 1 is deceptively similar to the one which is manufactured by the plaintiffs. The mode in which the words ‘VICKS’ and ‘VICAS’ have been written and the get-up in which they are being sold are quite similar. So as to ascertain whether the products are deceptively similar, one need not carefully examine both the products or compare them by keeping them side by side; one has to be guided by common sense and general observation. If, at first sight, both the products appear to be similar, one can very well say that there is an element of deceptive similarity in the products. 45. Having *prima facie* come to the conclusion that the product of defendant No. 1 is deceptively similar to the product of the plaintiffs, the question now is with regard to the grant of injunction. 46. The Court has to consider whether an injunction should be granted in favour of the plaintiffs so as to restrain the defendants from manufacturing the product, or to ask the defendants to maintain accounts so that ultimately, in the event of the defendants failing in the suit, the Court can adequately compensate the plaintiffs. 47. For the purpose of deciding whether an injunction should be granted in such cases, one has to look at the question of the balance of convenience. In the instant case, the plaintiffs have been in the business of manufacturing products under the trade name ‘VICKS’ for several years and in several countries, and there is material on record to show that a substantial amount has been spent by the plaintiffs for the purpose of making the product popular and for enhancing the sale of the product. 48.
Once the plaintiff establishes a *prima facie* case and the balance of convenience is in favour of the plaintiff, the Court can very well assume that irreparable injury would follow if an *ad-interim* injunction is not granted in favour of the plaintiff. This is the normal, sound principle which the Court follows in the matter of the grant of injunction. Applying the said principle to the present case, it is very clear that a *prima facie* case has been established by the plaintiffs because the product which the defendants are selling is deceptively similar to the one which is manufactured and sold by the plaintiffs. Upon perusal of the record available to the Court, *prima facie*, it appears that the plaintiffs have acquired a very good reputation and their products under the name ‘VICKS’ are being sold not only in the country but also elsewhere. Looking to the said fact, in my opinion, the balance of convenience would surely tilt in favour of the plaintiffs. In a case where medicinal products which are deceptively similar are being sold, harm would be caused not only to the plaintiffs but also to innocent consumers who, as a result of confusion, might purchase a medicinal product prepared by another person which they in fact never wanted to buy. In the instant case, it has also been submitted by the learned Advocates that the ingredients of the two products, *i.e.* the products manufactured by the plaintiffs and by defendant No. 1, are different. If a consumer, intending to have the ingredients of product A, buys product B having different ingredients, it would not be in the interest of the consumer, as he would be consuming a medicine which is absolutely different from the one which he wanted to consume. In my opinion, in the case of medicinal products, the Court has to be more cautious and has to grant an injunction where the Court feels that an innocent and unwary consumer is likely to have some confusion while identifying the product which he would like to purchase. In the matter of the grant of injunction, the Court has also to see whether non-interference by the Court would result in irreparable injury to the party seeking the injunction. If the Court comes to the conclusion that there is no other remedy available to the party except the one with regard to injunction, the plaintiff seeking the injunction should be suitably protected so that the apprehended injury may not be caused to the plaintiff. 49. It has been observed by the Hon’ble Supreme Court in *Wander Ltd. v. Antox India P. Ltd.*, 1990 Supp. SCC 727 that the appellate Court should not interfere with the exercise of discretion of the Court of first instance and substitute its own discretion, except when it finds that the discretion was exercised by the Court of first instance in an arbitrary, capricious or perverse manner or that it had ignored the settled principles of law regarding the grant or refusal of interlocutory injunctions. In the instant case, in my opinion, the trial Court has not committed any error, nor has it ignored any of the settled principles governing the grant of interlocutory injunctions, and therefore, I do not think that this Court should interfere with the discretion exercised by the trial Court. 50. In the instant case, the plaintiffs have satisfied the Court that there is a *prima facie* case in favour of the plaintiffs.
The balance of convenience is also in favour of the plaintiffs and, as stated hereinabove, the product is a medicinal product; therefore, it would also be in the interest of the consumers if the defendants are restrained from manufacturing the product which is deceptively similar to the one which is manufactured and sold by the plaintiffs. In the instant case, the position of the plaintiffs is surely superior to that of the defendants, and therefore, one can have no hesitation in saying that the balance of convenience is definitely in favour of the plaintiffs. When the defendants knowingly manufacture a product under the trade name ‘VICAS’ having a get-up similar to that of ‘VICKS’ manufactured by the plaintiffs, the Court should not hesitate in granting an injunction in favour of the plaintiffs so as to prevent damage being caused to them. Moreover, looking to the principles laid down by the Hon’ble Supreme Court in the case of *N. R. Dongre v. Whirlpool Corporation*, 1996 (5) SCC 714, the trial Court has rightly protected the plaintiffs by granting the injunction in their favour and I do not see any reason to interfere with the impugned interlocutory order passed by the trial Court, and therefore, the appeal is dismissed with no order as to costs. 51. Looking to the facts of the case, and in view of the fact that the injunction is operating against the defendants, it is hoped that the trial Court shall give priority to the suit and shall finally dispose of the same as soon as possible, and preferably before 30-9-2000. (SBS) Appeal dismissed. * * * SPECIAL CIVIL APPLICATION Before the Hon'ble Mr. Justice M. S. Shah RAMABEN PANUBHAI PATEL THROUGH P.O.A. MANOJBHAI BHALCHANDRA JERMANWALA & ANR. v. M. B. PARMAR & ORS.* Registration Act, 1908 (XVI of 1908) — Secs. 5, 23, 32, 33, 34, 35 & 50 — Where a document is presented for registration within the specified time-limit before a sub-registrar of another sub-district, levying a huge amount as penalty is unfair and unjust — Penalty ordered to be waived. A conspectus of the statutory provisions of the Registration Act clearly reveals that the real purpose of the registration procedure is to ensure that the document being presented for registration is executed by the person who represents himself or herself to be the executant, or that such a representation may be made by a representative, assign or agent of the executant. (Para 6) In the instant case there is no dispute about the fact that petitioner No. 1 had herself appeared before the Sub-Registrar on 19-1-1998 and that petitioner No. 1 had also executed the kabulatnama under Sec. 58 of the Act. However, the document was still not registered, only on the ground that the vendor and vendee had appeared before Sub-Registrar-(I), whose jurisdiction was different, as the property fell within the jurisdiction of Sub-Registrar-(VII) for the Rajpur-Hirpur area. It is also not in dispute that till 4-12-1997 all the documents were being presented for registration before Sub-Registrar-(I) at Gheekanta, where the office of the Registrar of Documents is situate, and that it was only from 4-12-1997 that the Sub-Registrars were appointed territory-wise, meaning thereby that the office of the Sub-Registrars for each registration in Ahmedabad City was shifted to the individual sub-district; that is how the office of Sub-Registrar-(VII) for the Rajpur-Hirpur area was shifted from Gheekanta to Odhav.
The change had come very recently, and therefore, the vendor and vendee were not aware of this change in the territorial distribution of work amongst the Sub-Registrars. (Para 7) Keeping these factors in mind, this Court, while setting aside the orders at Annexures ‘A’ and ‘C’, directs the authorities not to levy any penalty for the late presentation of the Sale Deed in question before Sub-Registrar-(VII), because both the vendor and the vendee had themselves personally presented the Sale Deed before the Sub-Registrar on 19-1-1998, Sub-Registrar-(I) had accepted the registration fees of Rs. 25,000/-, and for a period of two months the petitioners were not informed that the document was not required to be presented to him but was required to be presented before Sub-Registrar-(VII). (Para 13) *Decided on 28-7-2000. Special Civil Application No. 1655 of 1999 and Civil Application No. 12240 of 1999.*
Phospholipids for human wellbeing

Novastell specializes in phospholipid and omega 3 fatty acid ingredients dedicated to functional and nutritional applications. Created in 2006, Novastell is located in Normandy, with fast and easy access both to the west coast of France and to Paris. Since its creation, Novastell has developed an international distributor network and is actively present at the most important trade shows in the world. Novastell offers functional lecithins for the food industry, from classical liquid soya lecithin to more elaborate products such as sunflower, rapeseed and egg lecithins, in both liquid and powdered forms. Hydrolyzed lecithins, used as emulsifying, anti-sticking and anti-spattering agents, are also available. Hydrogenated lecithins are offered for cosmetic applications. Thanks to its development laboratory and facilities, Novastell is able to perform formulation tests and prepare personalized blends. Novastell also develops innovative products for nutritional applications.

Phospholipids are now recognized as essential components of living cells. They are the building blocks of the membranes which not only isolate the cell interior from its outer environment but also regulate its properties. The most recent biological research highlights the involvement of phospholipids in our physiological and behavioural functioning. Besides their role in cell membranes, phospholipids are also the most effective carriers for fatty acids. Through its range of nutritional ingredients, Novastell promotes the simultaneous use of two specific nutrients, phospholipids and the omega 3 fatty acid DHA (docosahexaenoic acid), in the form of a [phospholipid-DHA] complex. Based on up-to-date scientific research and a pragmatic development activity, Novastell proposes a complete range of active phospholipid ingredients which can be incorporated into a very wide range of formulations and applications.

**Fractionated phospholipids: phosphatidylcholine enriched fractions**

Basically, lecithins are mixtures of several phospholipids embedded in an oily phase. Phospholipids can be isolated and then fractionated to obtain products enriched in one specific component. Novastell offers phosphatidylcholine enriched fractions from soya or egg lecithins. The physiological activity of phosphatidylcholine mainly concerns liver protection. It has been shown that phosphatidylcholine is effective in protecting the liver from damage induced by elevated alcohol consumption. It also improves the liver status of patients suffering from chronic hepatitis.

**Phosphatidylserine, the brain phospholipid**

In mammals, the central nervous system is the organ with the highest lipid content in the whole body. Almost 50% of these lipids are phospholipids, 15% of them being phosphatidylserine. The phosphatidylserine content of the brain decreases with age, and this decrease seems to be concomitant with the alteration of memory and learning performance. It has been shown that a dietary supply of phosphatidylserine restores the brain content and helps to slow down age-related learning and memory impairments. This particular phospholipid is almost absent from vegetal sources of phospholipids and was originally extracted from bovine brains. Novastell offers a range of phosphatidylserine of purely vegetal origin: the LIPO-PS range (20 to 70% phosphatidylserine, fluid or powdered) of soya origin, and recently LIPOSUN-PS of sunflower origin with a 70% phosphatidylserine content.
Phosphatidylserine is produced from other sources of phospholipids, rich in phosphatidylcholine, by an enzymatic modification. The enzyme itself, phospholipase D, is extracted from cabbage, while most other processes use a bacterial enzyme. More recently, phosphatidylserine has also been found to improve sports performance and recovery after physical training. This effect can be seen at elevated doses. When given to sportsmen, phosphatidylserine has shown positive effects on resistance to effort: longer physical exercise before exhaustion and an increase in VO₂ max. The availability of a concentrated, vegetal and safe source of phosphatidylserine allows the development of totally new products dedicated to sports performance.

**Phospholipids from eggs with a specific fatty acid composition**

Novastell has an original range of egg phospholipids specifically enriched in the long-chain omega 3 fatty acid DHA. DHA concentrations range from 2 to 10% of total fatty acids. DHA is incorporated into the eggs via the hens’ feed. Novastell’s egg phospholipids are designed for human nutrition, from infant milk to memory and vision applications.

| Phospholipid | Soya | Sunflower | Rapeseed |
|-----------------|------|-----------|----------|
| Phosphatidylcholine | 15 | 16 | 17 |
| Phosphatidylethanolamine | 10 | 8 | 9 |
| Phosphatidylinositol | 10 | 13 | 10 |
| Phosphatidylserine | < 1 | < 1 | < 1 |
| Phosphatidic acid | 4 | 3 | 4 |

Typical phospholipid compositions of various vegetal lecithins (weight percent)

With a single ingredient, it is possible to supply three essential nutrients to nervous cells: phosphatidylcholine, phosphatidylethanolamine and DHA. These three nutrients are linked in the form of [phosphatidylcholine-DHA] and [phosphatidylethanolamine-DHA] complexes, collectively called [GPL-DHA].

**The next generation of marine phospholipids**

In collaboration with Arctic Nutrition, Novastell has launched a new marine phospholipid extract prepared from fish eggs. This natural resource is based on the valorization of herring roe, a previously undervalued by-product of the fishing industry. Herring are caught for food purposes. The phospholipids are extracted from the fish roe with an alcohol-based process. This requires no additional fishing effort and preserves marine resources. Herring roe phospholipids form Novastell’s product Lecicaviar Arctic, which represents a sustainable alternative to krill oil. Lecicaviar Arctic F50 is the most concentrated form of Lecicaviar. It has a waxy consistency and contains more than 50% phospholipids. Lecicaviar F50 can be diluted with fish oil to obtain liquid extracts such as Lecicaviar Arctic F30, in a standard version containing 14% DHA and a premium version containing 28% DHA. The phospholipid content of Lecicaviar F30 is standardized to 28%. Recently, a powdered presentation has been developed. Dried on a carrier, LIPOMEGA DHA contains a minimum of 17% total phospholipids. It allows the use of a marine source of phospholipids in dried formulations such as powder sticks or tablets. Lecicaviar Arctic contains twice as much DHA as EPA, making it particularly well suited for brain applications. Docosahexaenoic acid, or DHA, is the longest and most unsaturated fatty acid of the omega 3 family and is the only omega 3 fatty acid present at high levels in human cell membranes. It is considered essential for optimal brain functioning and must largely be provided by the diet.
Certain periods of life are particularly dependent on a sufficient supply of DHA: pregnancy and the first months of life, when the nervous system is at its maximal growth rate, and aging, when the brain content of DHA decreases and the dietary supply becomes insufficient. Unlike fish oils, Lecicaviar Arctic provides DHA as [phospholipid-DHA]. Phospholipids have been demonstrated to be the best carriers for omega 3 fatty acids. In the case of DHA, they are also the best way to target DHA to the brain and allow it to be incorporated into the membranes of nervous cells, where it will exert its positive effects. The term bioaccretion, coined by Novastell, is used to describe this targeted and optimized transport of DHA into cell membranes when it is supplied as a [phospholipid-DHA] complex. Numerous studies have demonstrated that an insufficient dietary supply of omega 3 fatty acids during fetal development and after birth leads to learning and memory performance impairments. It is also well known that the DHA content of the brain diminishes during aging. An increased nutritional supply of DHA is able to restore not only the brain content of DHA but also its phosphatidylserine content. At the same time, an increased nutritional supply of DHA slows down age-related cognitive impairments. These observations emphasize the importance of DHA for brain functioning, and also how much the nervous system relies on food for its supply of DHA. DHA, and its physiologically optimized form of [phospholipid-DHA] in Lecicaviar, is also active against inflammation-related skin problems. Striking results have been obtained after a few months of treatment with marine phospholipids.

| Phospholipid | Hen egg | Fish roe |
|------------------------|---------|----------|
| Phosphatidylcholine | 69 | 42 |
| Phosphatidylethanolamine | 18 | 10 |
| Phosphatidylinositol | < 1 | --- |
| Phosphatidylserine | < 1 | --- |
| Phosphatidic acid | --- | --- |
| Other phospholipids | 3 | 2 |

Typical phospholipid compositions of purified egg and marine lecithins (weight percent).

**Beyond active ingredients: semi-finished products**

Novastell offers active ingredients in various physical presentations adapted to the variety of food supplements available. Novastell’s ingredients are available in liquid, powdered and granulated forms. Novastell can also offer these ingredients as original formulations in the form of semi-finished products ready to be packed and marketed. Depending on the formulation, small powder bags, tablets, capsules and soft gel capsules can be provided.

**The synergy of actives with the LEA range: 1+1=3**

The products of the LEA range associate phospholipids and DHA in the same formulation to optimize their respective efficacy. Their synergistic effects are promoted in four formulations distributed as bulk or semi-finished products. Each LEA product is dedicated to a specific target and stems from a complete scientific study. LEA® stands for “Les Essentiels Associés” (“the associated essentials”). The four existing products are:

- **Brain Synergy** Helps to prevent cognitive decline and boosts concentration and memory.
- **Stress Synergy** Helps stress management through metabolic actions. The Stress Synergy formulation is patented.
- **Performances Synergy** Helps athletes to improve their performance, stamina and recovery.
- **Eye Synergy** Helps to maintain vision and to prevent degenerative eye disease. Helps to fight against dry eye.

Novastell can also offer 100% vegan versions of LEA products.

**Manager and business contact:**
M. Pierre Lebourd
Company: NOVASTELL
Z.I. de la Porte Rouge – 27150 Étrépagny, France
Tel: +33 (0)2 32 55 65 40
Fax: +33 (0)2 32 27 26 57
Email: email@example.com
www.novastell.com
Incorporating spatial heterogeneity created by permafrost thaw into a landscape carbon estimate

E. F. Belshe,¹ E. A. G. Schuur,² B. M. Bolker,² and R. Bracho¹

Received 15 August 2011; revised 22 December 2011; accepted 5 January 2012; published 2 March 2012.

[1] The future carbon balance of high-latitude ecosystems is dependent on the sensitivity of biological processes (photosynthesis and respiration) to the physical changes occurring with permafrost thaw. Predicting C exchange in these ecosystems is difficult because the thawing of permafrost is a heterogeneous process that creates a complex landscape. We measured net ecosystem exchange of C using eddy covariance (EC) in a tundra landscape visibly undergoing thaw during two 6 month campaigns in 2008 and 2009. We developed a spatially explicit quantitative metric of permafrost thaw based on variation in microtopography and incorporated it into an EC carbon flux estimate using a generalized additive model (GAM). This model allowed us to make predictions about C exchange for the landscape as a whole and for specific landscape patches throughout the continuum of permafrost thaw and ground subsidence. During June through November 2008, the GAM predicted that the landscape on average took up 337.1 g C m⁻² via photosynthesis and released 289.5 g C m⁻² via respiration, resulting in a net C gain of 47.5 g C m⁻² by the tundra ecosystem. During April through October 2009, the landscape on average took up 498.7 g C m⁻² and released 410.3 g C m⁻², resulting in a net C gain of 87.8 g C m⁻². On average, between the years, areas with the highest permafrost thaw and ground subsidence photosynthesized 17.7% more and respired 3.3% more C than the average landscape. Areas with the least thaw and subsidence photosynthesized 15% less and respired 5.1% less than the landscape on average. By incorporating spatial variation into the EC C estimate, we were able to estimate the C balance of a heterogeneous landscape and determine the collective effect of permafrost thaw on the plant and soil processes that drive ecosystem C flux. In these study years, permafrost thaw appeared to increase the amplitude of the C cycle by stimulating both C release and sequestration, while the ecosystem remained a C sink at the landscape scale.

Citation: Belshe, E. F., E. A. G. Schuur, B. M. Bolker, and R. Bracho (2012), Incorporating spatial heterogeneity created by permafrost thaw into a landscape carbon estimate, J. Geophys. Res., 117, G01026, doi:10.1029/2011JG001836.

### 1. Introduction

[2] Northern high latitudes are disproportionately warming, and Arctic temperatures are predicted to increase by 6.5°C or more by the year 2100 in response to radiative forcing caused by increasing greenhouse gases and changes in albedo [Chapin et al., 2000, 2005; Hinzman et al., 2005; Intergovernmental Panel on Climate Change, 2007]. Currently, permafrost occurs within 24% of the ice-free land area in the northern hemisphere [Zhang et al., 1999], and it is estimated that 25%–90% will degrade into seasonally frozen ground by the year 2100 [Anisimov and Nelson, 1996; Lawrence et al., 2008; Saito et al., 2007]. According to recent estimates, permafrost soils contain twice as much carbon (1672 Pg) as the entire atmospheric pool [Schuur et al., 2008; Tarnocai et al., 2009]. If a portion of this C is released to the atmosphere, it could result in a strong positive feedback to climate change.
Understanding how permafrost thaw affects the rate of C exchange from this large pool is essential for understanding the global C cycle in a warmer world.

[3] Thawing of permafrost is a temporally dynamic and spatially heterogeneous process. Rising temperatures increase active layer (seasonally thawed surface layer) thickness and form thermokarst [Jorgenson and Osterkamp, 2005; Zhang et al., 2005]. Thermokarst is uneven ground that forms when ice-rich permafrost thaws, drainage occurs, and the ground surface subsides [Jorgenson and Osterkamp, 2005]. These localized changes in surface relief greatly alter the surface hydrology of the area. As water is redistributed from higher to lower microtopographical areas, thermal erosion by the movement of water warms the soil and further perpetuates permafrost thawing [Kane et al., 2001; Osterkamp et al., 2009]. This positive feedback creates a mosaic of patches that range from high, dry embankments with shallow active layers to subsided areas with relatively wet, warm soils and deep active layers [Lee et al., 2011; Osterkamp et al., 2009; Vogel et al., 2009]. Furthermore, this pattern of ground subsidence, dictated by the initial presence of ice-rich permafrost, is interspersed throughout the landscape and ultimately creates a mosaic of various degrees of permafrost thaw and microtopography.

[4] The future C balance of high-latitude ecosystems depends on the sensitivity of biological processes (photosynthesis and respiration) to the physical changes in temperature and moisture occurring with permafrost thaw. But predicting C exchange in these ecosystems is difficult because of the landscape heterogeneity created as permafrost thaws. Since adjacent patches can have very different physical environments, they can have very different gross primary production (GPP) and ecosystem respiration \((R_{\text{eco}})\) [Lee et al., 2011; Vogel et al., 2009]. Landscape-scale GPP and \(R_{\text{eco}}\) will depend on the cumulative response of the landscape to permafrost thaw, which in turn will dictate the direction and magnitude of net ecosystem exchange (NEE = GPP – \(R_{\text{eco}}\)).

[5] The response of the C cycle to spatial and temporal environmental variation is often nonlinear and not simply described by the mean response [Aubinet et al., 2002]. Therefore, an appropriate understanding of both the spatial and temporal variation of C flux is essential for estimating the C balance of a landscape. But the intensive temporal sampling required for good estimates of C flux makes it difficult to obtain extensive, spatially explicit C flux data. Eddy covariance (EC) provides a method to directly measure C exchange at a high level of temporal resolution over a large spatial scale [Baldocchi, 2003]. But fluxes measured by EC are commonly assumed to come from a homogeneous surface, which makes it difficult to resolve the cumulative contribution of localized features in the landscape to an EC estimate [Laine et al., 2006; Schmid and Lloyd, 1999]. Although much effort has gone into developing and using models to locate where fluxes originate (i.e., the footprint of an EC tower) [Kormann and Meixner, 2001; Schmid, 1997, 2002; Schmid and Lloyd, 1999], less effort has gone into incorporating spatial information back into EC C estimates. However, the abundance of data produced by EC towers gives us the ability to explore spatial patterns of C flux.
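One way to make the preceding point concrete (an illustration added here, not drawn from the paper): if the flux response \(f\) to an environmental driver \(x\) is nonlinear, Jensen's inequality implies that the spatial mean of the flux over a footprint area \(A\) generally differs from the flux evaluated at the spatially averaged driver,

\[
\frac{1}{A}\int_A f\big(x(s)\big)\,ds \;\neq\; f\!\left(\frac{1}{A}\int_A x(s)\,ds\right),
\]

with equality guaranteed only when \(f\) is linear over the range of \(x\). Treating a heterogeneous footprint as if it were homogeneous therefore biases the flux estimate, which is one motivation for carrying a spatial covariate through the gap-filling models described below.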
[6] In this study, we use generalized additive models (GAMs) to generate a continuous time series of NEE for a tundra landscape undergoing permafrost thaw. We developed a spatially explicit quantitative metric of permafrost thaw based on variation in microtopography. By incorporating our spatial metric into EC gap-filling models, we were able to make C flux predictions for the landscape as a whole, as well as for specific landscape patches throughout the continuum of permafrost thaw and ground subsidence. We tested the robustness of our models against more widely used (nonspatial) gap-filling methods. Our objectives were to more accurately estimate the C balance of a heterogeneous landscape and to explore the collective effect of permafrost thaw on the plant and soil processes that dictate ecosystem C exchange.

### 2. Material and Methods

### 2.1. Site Description

[7] The study site is within the Eight Mile Lake (63°52′42″N, 149°15′12″W) watershed in the northern foothills of the Alaska Range near Denali National Park and Preserve [Schuur et al., 2007, 2009]. This upland area occurs within a vulnerable band of discontinuous permafrost near the point of thaw due to the combination of its elevation and geographic position [Romanovsky et al., 2007; Yocum et al., 2006]. Deep permafrost temperature has been measured at the site since 1985, and during this time thermokarst terrain has developed and expanded as the permafrost has warmed [Osterkamp et al., 2009]. Vegetation at the site is dominated by moist acidic tussock tundra comprising sedge (*Eriophorum vaginatum*), deciduous and evergreen shrubs (*Vaccinium uliginosum*, *Rubus chamaemorus*, *Betula nana*, and *Ledum palustre*), and nonvascular plants (*Sphagnum* spp., *Dicranum* spp., feathermoss, and lichens). Soils at the site are classified as Gelisols because permafrost is found within 1 m of the soil surface [Soil Survey Staff, 1999]. An organic horizon, 0.45–0.65 m thick, covers cryoturbated mineral soil that is a mixture of glacial till (small stones and cobbles) and windblown loess. Organic C pools in the top meter of soil range between 55 and 69 kg C m\(^{-2}\) [Hicks Pries et al., 2011].

[8] The long-term mean annual air temperature (1976–2009) of the area is −1.0°C and the growing season (May–September) mean air temperature is 11.2°C, with monthly averages ranging from −16°C in December to +15°C in July. The long-term annual mean precipitation is 378 mm with a growing season mean precipitation of 245 mm (National Climatic Data Center, National Oceanic and Atmospheric Administration). Mean growing season air temperature was 8.1°C and 9.7°C during 2008 and 2009, respectively, and growing season precipitation was 346 mm and 178 mm during 2008 and 2009, respectively.

### 2.2. Eddy Covariance Measurements

[9] NEE was measured using eddy covariance (EC) from June to December 2008 and April to October 2009. The EC system consisted of a CSAT3 sonic anemometer (Campbell Scientific, Logan, Utah) and an open path CO\(_2\)/H\(_2\)O gas analyzer (Li-7500, LI-COR Biosciences, Lincoln, Nebraska) mounted on a 2 m tower. Data were recorded at a frequency of 10 Hz on a CR5000 data logger (Campbell Scientific), and fluxes were Reynolds averaged over 30 min time periods [Reynolds, 1895]. Calibration was performed monthly during the growing season using a zero CO\(_2\) air source, a ±1% standard CO\(_2\) concentration, and a dew point generator (Li-610, LI-COR Biosciences) for water vapor.
The EC tower was placed within a patchy landscape consisting of visibly subsided areas to the North and West and relatively even terrain to the South and East. The fetch from the tower was greater than 300 m in all directions and winds predominantly came from the NE and SW. An analytical footprint model developed by Kormann and Meixner [2001] showed on average 50% of fluxes originated within the first 50 m around the tower, and greater than 80% of fluxes originated within 200 m from the tower.

### 2.2.1. EC Data Handling

[10] Raw CO$_2$ fluxes were corrected for damping of high-frequency fluctuations, sensor separation, and misalignment of wind sensors with respect to the local streamline [Aubinet et al., 1999; Moncrieff et al., 1997; Wilczak et al., 2001]. CO$_2$ fluxes were then corrected for variations in air density due to fluctuation in water vapor and heat fluxes [Webb et al., 1980] and for fluctuations caused by surface heat exchange from the open path sensor during wintertime conditions [Burba et al., 2008]. Data screening was applied to eliminate half-hourly fluxes with systematic errors and irrelevant environmental influences such as (1) incomplete half-hour data sets as a result of system calibration or maintenance; (2) time periods when the canopy was poorly coupled with the external atmospheric conditions as defined by the friction velocity, $u^*$ (threshold <0.12 m s$^{-1}$) [Clark et al., 1999; Goulden et al., 1996]; and (3) excessive variation from the half-hourly mean based on an analysis of standard deviations for $u$, $v$, and $w$ wind statistics and CO$_2$ fluxes. Fluxes were then divided into weekly data sets for both day and night conditions and unrealistically low or high values (>2 standard deviations from the mean) were filtered out. In total, ecosystem fluxes were measured 72% and 96% of the time during the 2008 and 2009 campaigns, respectively, while 64% and 60% of those values were eliminated by the screening criteria listed above. The quality of our data was evaluated by the degree of growing season energy closure ($R_{\text{net}} = LE + H + G$), which was 76% in 2008 and 73% in 2009. Ground heat flux ($G$) was estimated as the change in soil temperature with depth plus soil heat storage [Liebethal et al., 2005; Liebethal and Foken, 2007]. To calculate soil heat storage, we assumed 40% organic matter content [Hicks Pries et al., 2011] and 60% volumetric water content based on soil cores taken from the site. Measurements of half-hour NEE were calculated as NEE = $F_{CO_2} + F_s$, where $F_{CO_2}$ was the mean flux of CO$_2$ at measurement height and $F_s$ was the half-hour change in CO$_2$ stored below measurement height. Because of the short vegetation (~30 cm), we calculated the change in CO$_2$ storage by taking the difference in successive CO$_2$ measurements at the measurement height [Hollinger et al., 1994]. We used the meteorological convention that positive NEE represents a transfer of CO$_2$ from the ecosystem to the atmosphere.

### 2.2.2. Environmental Measurements

[11] Standard meteorological data were collected on a tower adjacent to the EC tower, including photosynthetic photon flux density (PPFD; Li-190SA, LI-COR Biosciences), incident radiation (Li-200SA, LI-COR Biosciences), net radiation (REBS Q*7.1, REBS Inc., Seattle, Washington), relative humidity and air temperature (Vaisala HMP45c, Campbell Scientific), and wind speed and direction (RM Young 3001, Campbell Scientific).
### 2.2.2. Environmental Measurements

[11] Standard meteorological data were collected on a tower adjacent to the EC tower, including photosynthetic photon flux density (PPFD; Li-190SA, LI-COR Biosciences), incident radiation (Li-200SA, LI-COR Biosciences), net radiation (REBS Q*7.1, REBS Inc., Seattle, Washington), relative humidity and air temperature (Vaisala HMP45c, Campbell Scientific), and wind speed and direction (RM Young 3001, Campbell Scientific). Soil temperature profiles (5, 10, 15, 20, and 25 cm from the surface) were measured with constantan-copper thermocouples and a thermistor (at 5 cm depth only; 107, Campbell Scientific). Moisture integrated over the top ~15 cm of soil was measured with a Campbell CS615 water content reflectometer. All measurements were recorded as half-hour averages with a CR5000 data logger (Campbell Scientific). A complete replicate set of micrometeorological measurements was collected at a tower 100 m to the NW of the EC tower and was used to interpolate gaps in micrometeorological data measured at the EC tower.

### 2.3. Landscape Properties

[12] To quantify the amount and distribution of land surface subsidence associated with permafrost thaw, a digital elevation model (DEM) was created from point measurements of elevation. Fine-scale differences in elevation were measured with a high-resolution differential global positioning system (dGPS). One GPS unit (Trimble 5400) was placed at a nearby USGS geodetic marker (WGS84, 63°53'16.56"N, 149°14'17.92"W), which acted as the reference receiver. Using a second GPS unit (Trimble 5400) secured to a backpack, a kinematic survey was conducted by walking transects within a 400 m diameter circle encompassing the EC tower footprint. Geographic position and elevation were collected at 5 s intervals, yielding a total of 7220 points. These data were postprocessed with methodology developed by UNAVCO using Trimble Geomatics Office (Dayton, Ohio).

[13] To create the DEM of the area surrounding the EC tower, spherical models were fit to empirical semivariograms, and ordinary kriging was used to interpolate between point measurements using the calculated range of 282 m, a nugget of 0.02, and a partial sill of 0.47. Variogram analysis and kriging were done with the Geostatistical Analyst extension in ArcGIS 9.3. Because the study site is on a gentle slope (~5%), the original DEM was corrected for overall slope elevation changes so we could decipher small-scale subsidence features. To correct for the slope, the DEM was first rescaled so the minimum elevation equaled zero. Mean elevation within 30 m blocks was subtracted from the rescaled DEM, resulting in the deviation in elevation away from the mean plane. This created a map of small-scale variations in topography that we define here as microtopography. Pixel resampling and calculations were done using the aggregate function, resample tool, and the raster calculator in the Spatial Analyst extension in ArcGIS 9.3.

[14] To obtain landscape information in a form comparable to EC data, we extracted information on microtopography corresponding to each wind direction sampled by the EC tower. Virtual transects 200 m in length, originating at the EC tower and radiating out in every wind direction (0–359°), were created. A distance of 200 m was chosen because it corresponded to the distance from which, on average, >80% of scalar fluxes originated based on an analytical footprint model [Kormann and Meixner, 2001]. Microtopography (i.e., local elevation) was sampled every meter along each transect using Hawth's Analysis Tools (H. L. Beyer, Hawth's Analysis Tools for ArcGIS, http://www.spatialecology.com/htools/tooldesc.php, 2004) in ArcGIS 9.3. The standard deviation of microtopography, which we refer to as roughness, was calculated for each transect (wind direction); a sketch of both steps follows. This metric was chosen because it captures the variation in microtopography created by permafrost thaw (both subsided areas and raised embankments).
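The study performed these steps in ArcGIS; purely as an illustration, an equivalent workflow in R might look like the following, using gstat for the kriging and a hypothetical `sample_transect()` helper standing in for the Hawth's Tools sampling step (all object names are assumptions):

```r
library(gstat)
library(sp)

# Ordinary kriging of the dGPS elevations with the fitted spherical model
# (range 282 m, nugget 0.02, partial sill 0.47), as described above.
vgm_fit <- fit.variogram(variogram(elev ~ 1, gps_points),
                         vgm(psill = 0.47, model = "Sph",
                             range = 282, nugget = 0.02))
dem <- krige(elev ~ 1, gps_points, newdata = grid_pts, model = vgm_fit)

# Roughness: standard deviation of microtopography sampled every 1 m along
# a 200 m transect for each wind direction (0-359 degrees). `microtopo` is
# the slope-corrected DEM (the 30 m block-mean removal is omitted here).
roughness <- sapply(0:359, function(bearing) {
  z <- sample_transect(microtopo, origin = tower_xy, bearing = bearing,
                       length_m = 200, step_m = 1)  # hypothetical helper
  sd(z)
})
```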
Our metric, roughness, should not be confused with the micrometeorological term roughness length. [15] To calculate our metric, roughness, corresponding to each (half-hour) flux measurement, we simply calculated the standard deviation of the per-meter values of microtopography along the entire transect corresponding to the measured wind direction. We acknowledge that C fluxes measured over a 30 min period do not emanate from a one-dimensional transect; instead, they come from two-dimensional areas in the landscape. To find the best spatial metric corresponding to the measured C flux, we also calculated roughness for the three and five transects adjacent to the measured wind direction. We found no change in the relationship between C flux and roughness when using a greater number of transects. Also, in principle, footprint models provide more information than the overall radial scale of the area surrounding the EC tower because they help to pinpoint where in the landscape fluxes are originating [Kormann and Meixner, 2001; Schmid, 1997, 2002; Schmid and Lloyd, 1999]. So, for comparison, we also used estimates of the cumulative probability of fluxes coming from different fetches, calculated by a footprint model, to calculate a weighted standard deviation of roughness. However, we chose to use the simple nonweighted roughness because under certain conditions weighting caused relatively flat areas to the SE to have a higher standard deviation than the most subsided areas to the NW. We believe this discrepancy is due to the mismatch in scale between our one-dimensional transects and the two-dimensional cumulative density function calculated by the footprint model [Kljun et al., 2003; Kormann and Meixner, 2001].

[16] We explored the relationship between roughness and normalized difference vegetation index (NDVI), and the relationship between microtopography and active layer depth (ALD). NDVI was calculated using spectral data from an IKONOS image of the site acquired in June 2008. Mean NDVI was calculated for each of the 360 virtual transects radiating out from the EC tower and was compared to the roughness of the corresponding transect. ALD was measured at 310 locations stratified at various distances within the potential EC footprint by measuring the length of a metal probe inserted into the soil until the impenetrable frozen layer was reached. The geographic location of each site was measured and subsequently used to extract corresponding values of elevation from the map of microtopography. We only compared ALD to microtopography because we did not have a continuous surface of ALD; therefore, we were unable to extract data along the 360 virtual transects to compare with roughness. Relationships were explored with generalized additive models using the mgcv package in R [R Development Core Team, 2010; Wood, 2008].

### 2.4. Estimation of Landscape-Scale Carbon Exchange

[17] To estimate the carbon balance of an ecosystem, measured CO$_2$ fluxes must be gap filled to generate a continuous time series of net ecosystem exchange (NEE). We estimated carbon exchange using two gap-filling strategies: (1) a novel gap-filling strategy using generalized additive models (GAMs) that are flexible enough to incorporate spatial information and (2) nonlinear (NL) relationships with nonspatial environmental variables.

[18] Although NEE is directly measured by the EC technique, the driving force of the exchange is dependent on environmental conditions.
Therefore, we modeled NEE for gap filling during winter, growing season (GS) days, and GS nights separately. The beginning and end of the GS were determined by abrupt changes in net radiation corresponding to snowmelt and widespread snow cover, respectively. Generally, the GS began in early May and ended at the end of September. Data during the GS were split into day and night by ambient light: daytime conditions were assumed when PPFD was greater than 10 $\mu$mol m$^{-2}$ s$^{-1}$. During daytime, NEE is the balance between gross primary production (GPP) and ecosystem respiration ($R_{\text{eco}}$). To tease apart their contributions, we modeled $R_{\text{eco}}$ during GS days using models fitted with GS night data and calculated GPP as the difference between NEE and $R_{\text{eco}}$ (GPP = NEE − $R_{\text{eco}}$). Once soil temperature at 5 cm fell below 0°C, winter conditions were assumed. During these conditions, photosynthesis is not occurring, so NEE is equivalent to $R_{\text{eco}}$.

### 2.4.1. Gap-Filling Strategy 1: Generalized Additive Models

[19] To generate a continuous time series, we gap filled NEE using generalized additive models (GAMs), an extension of generalized linear models in which a response is modeled as the additive sum of smoothed covariate functions [Hastie and Tibshirani, 1990; Wood, 2006]. With GAMs, nonlinear effects can be modeled without manually specifying the shape of the relationships, which provided us the flexibility to incorporate roughness along with other explanatory variables into the prediction of NEE [Wood, 2006; Zuur et al., 2009]. To control the shape of the functions, we used penalized regression splines, which determine the appropriate degree of smoothness of each smoothing function by generalized cross validation (GCV) and add a "wiggliness" penalty when estimating the coefficients of each smooth by maximum likelihood [Wood, 2006]. All GAMs used had the basic form

$$y_i = \beta_0 + f_1(x_i) + f_2(z_i) + f_3(x_i, z_i) + \varepsilon_i,$$ (1)

where $y_i$ denotes the response variable (NEE or $R_{\text{eco}}$), $\beta_0$ is the intercept, functions $f_1(x_i)$ and $f_2(z_i)$ are smooth functions of explanatory variables $x_i$ and $z_i$, and $f_3(x_i, z_i)$ is a two-dimensional smooth function of their interactions. We used thin plate regression splines as the basis for representing the smooths of single covariates ($f_1$ and $f_2$) and tensor product smooths for interactions ($f_3$, multiple covariates) because the latter have been found to perform better when covariates are not on the same scale [Wood, 2006]. We forced each effective degree of freedom in each model to count as 1.4 degrees of freedom in the GCV score, which forces the model to be slightly smoother than it might otherwise be; this is an ad hoc way of avoiding overfitting [Kim and Gu, 2004].

[20] Because eddy covariance data are heteroscedastic [Richardson et al., 2008] and there were distinct patterns in the residuals, we fit GAMs using a mixed model framework (GAMM) with a Gaussian error distribution to facilitate the incorporation of an exponential variance structure:

$$\varepsilon_i \sim N(0, \sigma^2 \cdot e^{V}),$$ (2)

where the variance of the residuals $\sigma^2$ is multiplied by an exponential function of the fitted values ($V$). We used all data in a single dummy group as our random effect to facilitate the incorporation of the variance structure into the GAM [Dormann, 2007; Wood, 2006; Zuur et al., 2009].
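As a minimal sketch of the model structure in equations (1) and (2), assuming illustrative column names (`nee`, `ppfd`, `rough`) rather than the authors' actual variables; the 1.4 degrees-of-freedom inflation mentioned above corresponds to the `gamma` argument of mgcv's `gam()` when smoothness is selected by GCV:

```r
library(mgcv)   # gamm(): GAMs fitted within a mixed-model framework
library(nlme)   # varExp(): exponential variance structure, eq. (2)

gs_day$dummy <- factor(1)   # all data in one dummy group (random effect)

m <- gamm(nee ~ s(ppfd, bs = "tp")     # f1: thin plate regression spline
              + s(rough, bs = "tp")    # f2
              + te(ppfd, rough),       # f3: tensor product interaction
          random  = list(dummy = ~ 1),
          weights = varExp(form = ~ fitted(.)),  # var = sigma^2 * exp(...)
          data    = gs_day)

summary(m$gam)   # smooth terms; m$lme carries the variance structure
```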
All models were fitted using the mgcv package [Wood, 2006] in R [R Development Core Team, 2010]. A subset of explanatory variables was selected *a priori*, including PPFD, temperature (air, soil at 5 cm, and depth-integrated soil temperature down to 25 cm), roughness, and day of year (DOY). We suspected there might be complex interactions between explanatory variables, so both direct effects and all possible interactions were compared. Models were selected for each time period (winter, GS day, GS night) during each year (2008, 2009) by starting with the full model containing all variables and interactions and using a form of automatic backward selection in which the penalization term for each smooth could automatically set the term to zero and remove it from the model as appropriate [Wood, 2008]. We also took into consideration how removing terms affected (1) the GCV score (the lower the better), (2) the deviance explained (the higher the better), and (3) the Akaike Information Criterion (AIC, the lower the better) [Anderson et al., 1998, 2001]. Because our GAM models incorporated landscape information, they allowed us to estimate the landscape's carbon balance in two different ways. If we assumed the landscape was one unit (measurements taken from one "population" of fluxes), then gaps in the time series were filled depending on the measured wind direction at the time of the gap. This resulted in a single time series of carbon exchange for the landscape (GAM 1). Alternatively, if we assumed the landscape was a combination of multiple patches (wind directions), all absorbing or releasing C simultaneously, then each wind direction was gap filled separately for the entire time series. This resulted in 360 separate time series, whose predictions were averaged to achieve an estimate of carbon exchange for the entire landscape (GAM 360). This method allowed us to estimate C exchange for the entire heterogeneous landscape and made it possible to compare predictions from landscape patches that differed in roughness. Both methods of prediction were done for each time period during 2008 and 2009.

### 2.4.2. Gap-Filling Strategy 2: Nonlinear Regressions

For comparison, we also gap filled data using a more traditional nonlinear (NL) regression approach. During GS days, gaps were filled using parameters obtained by fitting half-hour NEE to PPFD using a nonrectangular hyperbola [Thornley and Johnson, 1990]:

$$\text{NEE} = \frac{\alpha \cdot \text{PPFD} \cdot P_{\text{max}}}{\alpha \cdot \text{PPFD} + P_{\text{max}}} - R,$$ (3)

where $\alpha$ is the linear portion of the light response curve, PPFD is the photosynthetic photon flux density, $P_{\text{max}}$ is the asymptote, and $R$ is the intercept or dark respiration term. To capture changes in phenology, parameters were estimated biweekly or monthly depending on the variation among weeks. We incorporated an exponential variance structure due to the heteroscedasticity of the data and used maximum likelihood to estimate parameters using the bbmle package (B. M. Bolker, bbmle: Tools for general maximum likelihood estimation, https://r-forge.r-project.org/R/?group_id=176, 2010) in R. We were unable to fit exponential models to winter and GS night data separately, so we gap filled with parameters estimated using both data sets together. For winter and GS nights, parameters were estimated using the equation

$$R_{\text{eco}} = \alpha \cdot e^{\beta \cdot T},$$ (4)

where $\alpha$ is the intercept, $\beta$ is the slope, and $T$ was depth-integrated soil temperature during 2008 and soil temperature measured at 5 cm during 2009. We compared models with various forms of temperature (air, soil at 5 cm, depth-integrated soil) and chose the best model based on AIC. We chose not to model-average because the best model's AIC was much lower (>5 points) than the alternatives. An exponential variance structure was added, and parameters were estimated using generalized nonlinear least squares with the nlme package in R (J. Pinheiro et al., nlme: Linear and nonlinear mixed effects models, http://cran.r-project.org/web/packages/nlme/index.html, 2010). Sketches of both fits follow.
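Hedged sketches of the two nonlinear fits, under the assumption of illustrative column names (`nee`, `ppfd`, `reco`, `tsoil`) and arbitrary starting values; for brevity, the exponential variance structure is shown only for the `gnls()` fit:

```r
library(bbmle)   # mle2(): maximum-likelihood fit of equation (3)
library(nlme)    # gnls() with varExp() for equation (4)

# Equation (3): daytime light response (constant residual SD here;
# the study additionally used an exponential variance structure).
fit_day <- mle2(nee ~ dnorm(mean = (alpha * ppfd * pmax) /
                                   (alpha * ppfd + pmax) - R0,
                            sd = exp(log_sd)),
                start = list(alpha = 0.02, pmax = 5, R0 = 1, log_sd = 0),
                data  = gs_day)

# Equation (4): exponential temperature response of R_eco, fitted by
# generalized nonlinear least squares with an exponential variance term.
fit_reco <- gnls(reco ~ a * exp(b * tsoil),
                 start   = c(a = 0.5, b = 0.1),
                 weights = varExp(form = ~ fitted(.)),
                 data    = night_and_winter)
```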
### 2.4.3. Model Performance

We compared the coefficient of determination ($R^2$) and the Akaike Information Criterion (AIC) [Anderson et al., 1998, 2001] of each GAM and NL model during each time period. Because of the variation in model types, we calculated the $R^2$ simply as the correlation between the predicted values from each model and the observed values. To assess the predictive performance of the GAM and NL models, we performed cross validation: ten percent of the data was randomly removed, models were fitted to the remaining data, and these models were then used to predict responses for the withdrawn ten percent. This process was repeated ten times, and the root mean square error (RMSE) was calculated for each model. We then compared the RMSE of the GAM and NL models within each time period using a $t$ test at a statistical significance of $p < 0.05$. To calculate comparable values of AIC, we used

$$\text{AIC} = 2 \cdot n \cdot \log(\text{RMSE}) + 2 \cdot p,$$ (5)

where $n$ is the number of observations and $p$ is the number of parameters [Venables and Ripley, 2002].
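The repeated 10% holdout described above is straightforward to express in R. In this sketch, `refit()` is a hypothetical wrapper that refits a given model specification to a training set, and `n_par` is the model's parameter count used in equation (5):

```r
set.seed(42)
rmse <- replicate(10, {
  held  <- sample(nrow(dat), size = round(0.1 * nrow(dat)))
  model <- refit(spec, data = dat[-held, ])           # fit to remaining 90%
  pred  <- predict(model, newdata = dat[held, ])
  sqrt(mean((dat$nee[held] - pred)^2, na.rm = TRUE))  # RMSE on the holdout
})
mean(rmse)                                            # compared via t test
aic <- 2 * nrow(dat) * log(mean(rmse)) + 2 * n_par    # equation (5)
```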
## 3. Results

### 3.1. Landscape Heterogeneity of the EC Footprint

Our map of microtopography captured the spatial pattern of ground subsidence created by permafrost thaw within the EML watershed (Figure 1). The largest variation in microtopography was found to the NW of the EC tower, while areas to the E and SE were relatively flat. This spatial distribution of ground subsidence agreed well with patterns visible in high-resolution aerial photographs of the site. In addition, the maximum (0.5 m) and minimum (−0.9 m) deviations away from the mean elevation were consistent with field measurements of the depth of individual subsided features and the height of raised embankments created by thaw (data not shown). This pattern was mirrored by our calculated landscape metric, roughness. Transects with the highest and lowest roughness corresponded with winds coming from the N-NW and SW-SE, respectively (Figure 1). In general, the depth of the active layer increased as local elevation decreased (became more subsided), and microtopography explained 51% (adjusted $R^2 = 0.51$) of the observed variation in active layer depth. The relationship was nonlinear, with little variation in active layer depth at sites where elevation was positive or slightly negative, followed by an exponential increase in active layer depth as elevation fell below −0.2 m (Figure 2a). Transects with more variation in microtopography (high roughness) were found to have higher mean NDVI than transects with less variation (low roughness), and roughness explained 55% of the variation in mean NDVI (adjusted $R^2 = 0.55$, effective degrees of freedom = 6.1). This relationship was also nonlinear, with NDVI linearly increasing with roughness, then leveling out and sometimes decreasing as roughness increased above 0.14 (Figure 2b).

**Figure 1.** (top) Map of microtopography surrounding the eddy covariance (EC) tower (star), with lighter shades indicating areas where the ground surface is higher than the mean elevation of the landscape and darker shades indicating where the ground surface is lower than the mean elevation (i.e., subsided). (bottom) The 360 transects radiating out from the EC tower (star), corresponding to the wind directions sampled by the tower. The color of the transects grades from light to dark as the degree of roughness increases. Note that, in general, the roughest transects occur to the north and northwest of the EC tower, while transects to the south and east have lower roughness.

**Figure 2.** (a) Nonlinear relationship between active layer depth (ALD; cm) and microtopography (adjusted $R^2 = 0.51$) with 95% confidence intervals. Note the small amount of variation in ALD where microtopography is positive or slightly negative, then an exponential increase in ALD as microtopography falls below $-0.2$ m. (b) Nonlinear relationship between mean normalized difference vegetation index (NDVI) and roughness (adjusted $R^2 = 0.55$) with 95% confidence intervals. NDVI increases linearly until roughness reaches 0.14, then levels out or decreases.

[28] The magnitude of net ecosystem exchange (NEE) increased as the roughness of the landscape increased (Figure 3). During GS days, NEE became more negative (more C uptake) with increasing roughness. More C was released from areas with higher roughness during GS nights, but the trend of increased C emission with increasing roughness decreased in magnitude and then reversed during the winter months (Figure 3).

### Table 1. Coefficient of Determination, Predictive Power, and Cross-Validation RMSE for the Generalized Additive and Nonlinear Models During Growing Season Days, Growing Season Nights, and Winter of 2008 and 2009<sup>a</sup>

| Time Period | Model | $R^2$ | $\Delta$AIC | RMSE |
|-------------|-------|-------|-------------|------|
| 2008 | | | | |
| GS-D | GAM | 0.83 | 0 | 0.067 ± 0.03<sup>b</sup> |
| | NL | 0.78 | 2458.6 | 0.161 ± 0.06 |
| GS-N | GAM | 0.26 | 0 | 0.132 ± 0.09 |
| | NL | 0.26 | 93.4 | 0.151 ± 0.10 |
| Winter | GAM | 0.16 | 0 | 0.047 ± 0.04 |
| | NL | 0.01 | 116.9 | 0.053 ± 0.05 |
| 2009 | | | | |
| GS-D | GAM | 0.83 | 0 | 0.052 ± 0.02<sup>b</sup> |
| | NL | 0.78 | 5971.8 | 0.181 ± 0.17 |
| GS-N | GAM | 0.29 | 0 | 0.085 ± 0.06 |
| | NL | 0.22 | 772.9 | 0.136 ± 0.09 |
| Winter | GAM | 0.22 | 0 | 0.041 ± 0.03 |
| | NL | 0.12 | 363.7 | 0.065 ± 0.06 |

<sup>a</sup>Abbreviations are as follows: GAM, generalized additive model; GS-D, growing season day; GS-N, growing season night; NL, nonlinear model.

<sup>b</sup>Significantly different from model counterpart at $p < 0.05$.

### 3.2. Model Performance

[29] GAM models outperformed the nonspatial NL models for gap filling C exchange. GAM models had a higher or equivalent coefficient of determination ($R^2$) and higher predictive power (lower AIC) than NL models during every time period in both 2008 and 2009 (Table 1). During cross validation, GAM models always had a lower mean RMSE, but the difference was significant (at $p < 0.05$) only during GS days of 2008 and 2009 (Table 1).
### 3.3. Predictions of Ecosystem C Balance

[30] In general, the three gap-filling methods (NL, GAM 1, GAM 360) resulted in similar estimates of NEE, GPP, and $R_{\text{eco}}$ for the time periods 6 June through 8 December 2008 (weeks 24–48) and 24 April through 10 October 2009 (weeks 12–40; Figure 4). Weekly estimates of NEE, GPP, and $R_{\text{eco}}$ generated by the two GAM methods closely mirror one another throughout both 2008 and 2009 (Figure 4). Although the NL and two GAM methods generated similar final estimates of net ecosystem exchange, there were some notable differences. The two GAM methods estimated a slightly higher uptake of carbon (more negative GPP) than their NL counterpart, 10–13 g C m$^{-2}$ more in 2008 and 12–19 g C m$^{-2}$ more in 2009. This difference in GPP was spread throughout the growing season, with no single week solely responsible for the difference (Figure 4a). Similarly, the two GAM methods estimated a slightly higher release of carbon ($R_{\text{eco}}$) than the NL method, 10–11 g C m$^{-2}$ more in 2008 and 7–13 g C m$^{-2}$ more during 2009. Unlike GPP, however, differences in $R_{\text{eco}}$ could be attributed to certain time periods. During 2008, the major differences between the predictions of the two methods occurred during the early growing season (weeks 21–29), where the NL method predicted lower $R_{\text{eco}}$ than either GAM method. During the majority of the GS of 2009, the GAM methods estimated higher $R_{\text{eco}}$ than the NL method. The other time of divergence among the methods was during transitions into and out of the growing season. The GAM methods predicted lower $R_{\text{eco}}$ than the NL method during the transition from GS to winter (weeks 39–40) in 2008 and during the transition from winter to GS (weeks 15–19) in 2009 (Figure 4b).

### 3.4. Predictions of Landscape Heterogeneity of C Flux

[31] Using GAMs, we were also able to predict C exchange for each wind direction. To estimate the C balance for the entire landscape, we calculated the mean carbon flux of all wind directions for each 30 min interval throughout 2008 and 2009 (GAM 360). This resulted in an estimate of NEE, GPP, and $R_{\text{eco}}$ for the landscape on average, but also allowed us to compare C fluxes from the wind directions with the minimum and maximum roughness to further understand the influence of permafrost thaw and ground subsidence on C flux.

[32] From June to December 2008, GAM 360 estimated that the landscape on average took up 337.1 g C m$^{-2}$ via photosynthesis and released 289.5 g C m$^{-2}$ via respiration, resulting in an ecosystem carbon gain of 47.5 g C m$^{-2}$ (Figure 5 and Table 2). The direction with the maximum roughness had higher GPP and $R_{\text{eco}}$ than the landscape on average, while the direction with the minimum roughness had lower GPP and $R_{\text{eco}}$ (Figure 5 and Table 2). This resulted in the direction with maximum roughness gaining 55% more C than the landscape on average, while the direction with minimum roughness gained 76.4% less C (Table 2).

[33] From April to October 2009, the landscape on average took up 498.7 g C m$^{-2}$ via photosynthesis and released 410.3 g C m$^{-2}$ via respiration, resulting in a net gain of 87.8 g C m$^{-2}$ (Figure 5 and Table 2). Again, the direction with the maximum roughness had higher GPP and $R_{\text{eco}}$ than
the landscape on average, while the direction with the minimum roughness had lower GPP and $R_{\text{eco}}$ (Figure 5 and Table 2). This resulted in the direction with maximum roughness gaining 41.4% more C than the landscape on average, while the direction with minimum roughness gained 61.7% less C (Table 2).

### Table 2. Carbon Estimates for the Wind Direction With the Minimum Roughness, the Maximum Roughness, and the Average of All 360 Wind Directions During June to December 2008 and April to October 2009<sup>a</sup>

| Roughness | GPP (2008) | $R_{\text{eco}}$ (2008) | NEE (2008) | GPP (2009) | $R_{\text{eco}}$ (2009) | NEE (2009) |
|-----------|-----------|------------------------|-----------|-----------|------------------------|-----------|
| Minimum | −300.3 | 288.3 | −11.2 | −403.6 | 386.6 | −33.6 |
| Maximum | −397.4 | 291.9 | −106.1 | −586.4 | 450.6 | −149.8 |
| Average | −337.1 | 289.5 | −47.5 | −498.7 | 410.3 | −87.8 |

<sup>a</sup>Carbon estimates are in g C m$^{-2}$. Negative numbers denote when the ecosystem is taking up carbon. Abbreviations are as follows: GPP, gross primary production; $R_{\text{eco}}$, ecosystem respiration; NEE, net ecosystem exchange.

[34] The amplified increase in GPP with maximum roughness was consistent throughout all weeks of the growing season in both 2008 and 2009 (Figure 5a). Unlike GPP, the increase in $R_{\text{eco}}$ with roughness was not consistent throughout either year. During the GS of both years, the landscape with maximum roughness had higher $R_{\text{eco}}$, but during the winter this trend reversed and the landscape with minimum roughness had higher $R_{\text{eco}}$ (Figure 5b). Even though areas with minimum roughness had higher $R_{\text{eco}}$ during winter, the overall carbon emission throughout 2008 and 2009 was still greater from areas with maximum roughness (Table 2).

## 4. Discussion

### 4.1. Quantifying Spatial Heterogeneity

[35] Microtopography is an easy-to-obtain, integrative metric of the physical and biological changes occurring as a result of permafrost thaw within the EML watershed because it correlates with variables that drive C cycling. Roughness, our landscape-level metric of permafrost thaw, captured the variation in microtopography of each wind direction sampled by the EC tower (Figure 1). We found that as microtopography decreased (the ground became more subsided), active layer depth (ALD) increased, increasing exponentially after a threshold (Figure 2a). This pattern is a result of changes in soil thermal conductivity created by the redistribution of water into subsided areas, which increases soil temperature within depressions while decreasing temperatures in higher, drier areas [Jorgenson et al., 2001; Kane et al., 2001; Osterkamp et al., 2009]. These physical changes in soil moisture and temperature drive variable depths of thaw across the hillslope. This landscape-level pattern of ALD and subsidence is consistent with previous work at this site, which showed similar relationships between microtopography, temperature, and moisture [Lee et al., 2011; Vogel et al., 2009].

[36] Areas with greater roughness had higher mean NDVI, with NDVI increasing with roughness until a threshold was reached (Figure 2b). Unlike ALD, NDVI leveled out and slightly decreased at the upper end of the roughness scale. NDVI has been shown to be positively correlated with leaf area index [Tucker, 1979; Williams et al., 2008], aboveground biomass [Boelman et al., 2003; Sellers, 1985], net primary production [Goward et al., 1985], and GPP and $R_{\text{eco}}$ [Boelman et al., 2003; La Puma et al., 2007; Vourlitis et al., 2003].
Permafrost thaw within the EML watershed causes a shift in species composition from a plant community dominated by tussock-forming sedges to a community with increased shrub and moss abundance and, concurrently, an increase in biomass and productivity [Schuur et al., 2007; Vogel et al., 2009]. Our result of increased NDVI with increased roughness is consistent with this pattern of increased biomass and productivity with thaw and also indicates that an upper limit of productivity may be reached as permafrost thaw continues and plants respond to the changing conditions. This upper limit is likely driven by the size of the shrub species currently at the site, but could increase in the future, on the time scale of plant succession, if boreal trees were to increase in abundance at this tundra site.

[37] These relationships between microtopography and important biophysical features of the landscape (ALD and NDVI) highlight the feasibility of using remotely sensed spatial information to improve estimates of regional C balance in high-latitude ecosystems. Recent advancements in sensor resolution (e.g., LIDAR) now make microtopographical mapping of these vast, remote areas possible.

### 4.2. Incorporating Spatial Heterogeneity Into EC C Estimates

[38] We incorporated the spatial variability of C flux into the EC estimate of C exchange during gap filling using generalized additive models (GAMs). Because a continuous time series is required to estimate C balance, missing time periods must be modeled [Baldocchi, 2003; Falge et al., 2001]. This gap-filling step provided a method for predicting C exchange based on the roughness of the landscape in specific wind directions, as well as for more accurately determining the C balance of the entire landscape. We found that GAMs were equivalent or superior to traditional NL regression approaches (Table 1). GAMs had higher predictive power and a higher or equivalent coefficient of determination ($R^2$) than the NL models during all time periods. During cross validation, GAMs consistently had lower RMSE than NL models over all time periods, but the RMSE was statistically lower only during GS days. The lack of statistical improvement in RMSE during GS nights and winter by the GAMs is not surprising because these time periods are notoriously difficult to model using any procedure [Baldocchi, 2003]. Overall, our comparison with the more traditionally used NL gap-filling models gave us confidence that adding model complexity to include spatial information was justified.

[39] The aggregated predictions of C exchange from the GAM and NL models did not substantially differ from one another throughout either 2008 or 2009 (Figure 4). However, there were notable time periods where the two methods diverged in their predictive capabilities. During the early growing season of 2008 (weeks 21–29), the GAM substantially overpredicted $R_{\text{eco}}$ compared to the NL model (Figure 4b). We believe this difference is due to a lack of data during the early GS, which caused the GAM to miss the upswing of $R_{\text{eco}}$ that coincides with rapid changes in phenology during the spring. Predictions of $R_{\text{eco}}$ and NEE from the NL and GAM also diverged during the transition from the GS to winter in 2008 and the transition from winter to the GS of 2009 (Figures 4b and 4c). We attribute the GAMs' sensitivity to seasonal transitions to the flexibility of their smoothing functions, which can capture rapidly changing trends in the data [Zuur et al., 2009].
Because the NL models' empirically derived parameter estimates depend on relationships that change dramatically in this highly seasonal ecosystem, the relationships would need to be continuously updated in order to capture these transitions [Baldocchi, 2003; Falge et al., 2001]. Biologically, the dip in $R_{\text{eco}}$ during the transitions into and out of winter could be attributed to changes in microbial species composition, or to the disruption of biological activity by the state change (freezing point) of water [Mikan et al., 2002; Rivkina et al., 2000]. This dip in $R_{\text{eco}}$ could also be caused by shifts in the availability and use of substrates by microbes between the GS and winter [Davidson and Janssens, 2006; Dioumaeva et al., 2002; Hobbie et al., 2000].

### 4.3. Effects of Spatial Heterogeneity on C Flux

[40] By incorporating spatial information, we were able to estimate the C balance of the landscape in two different ways. First, we filled gaps depending on the measured wind direction at the time of the gap and created a single time series of C exchange for the entire landscape (GAM 1). Second, we gap filled the entire time series for each wind direction separately and averaged the predictions to estimate the C balance of the landscape (GAM 360). This allowed us to estimate C exchange for the entire heterogeneous landscape and also to make predictions for the wind directions with the minimum and maximum roughness. Final C estimates from GAM 1 and GAM 360 were nearly identical (Figure 4). This similarity indicates that the wind distribution sampled by the EC tower was sufficient to capture the variability of thaw seen across the entire radial landscape surrounding the tower.

[41] The landscape on average (GAM 360) was a C sink during both 6 month measurement campaigns. The wind direction with the most variation in microtopography (maximum roughness), resulting from permafrost thaw and ground subsidence, had both higher GPP and $R_{\text{eco}}$ than the landscape on average (Figures 5a and 5b), while the wind direction with the least variation (minimum roughness) exhibited lower GPP and lower $R_{\text{eco}}$. Overall, during the 6 month campaign of 2008, the area with the highest roughness gained 55.2% more C than the landscape on average, while the area with the lowest roughness gained 76.4% less C. Similarly, during the 6 month campaign of 2009, the area with the highest roughness gained 41.4% more C, while the area with the lowest roughness gained 61.7% less C (Table 2). On the basis of these results, permafrost thaw and ground subsidence amplify both GPP and $R_{\text{eco}}$.

[42] The amplification of GPP with roughness was consistent throughout the GS of both 2008 and 2009 (Figures 3 and 5a). This enhanced C sequestration could be due to shifts in the plant community toward more highly productive species as permafrost thaws [Osterkamp et al., 2009; Schuur et al., 2007], or to increased plant productivity due to greater nutrient availability resulting from enhanced decomposition within subsided areas [Mack et al., 2004; Shaver et al., 1992; Vogel et al., 2009; Natali et al., 2012]. Our results of higher NDVI in areas with higher roughness also support the idea of increased productivity as permafrost thaws (Figure 2b), as do several other studies that show NDVI is positively correlated with both $R_{\text{eco}}$ and GPP [Boelman et al., 2003; La Puma et al., 2007; Vourlitis et al., 2003].
[43] Ecosystem respiration also increased in areas of the landscape with greater roughness during the GS of both years (Figures 3 and 5b) because of the greater temperatures and moisture associated with greater ALD (Figure 2a) [Lee et al., 2011; Vogel et al., 2009]. More organic C is exposed to above-freezing temperatures as ALD increases. These abiotic changes stimulate decomposition and nitrogen mineralization, which result in increased heterotrophic respiration [Shaver et al., 1992]. Vogel et al. [2009] found that as subsidence increased, ALD increased and, in conjunction, both GPP and $R_{\text{eco}}$ increased. These results are consistent with a large body of work showing that temperature and moisture are often the major determinants of organic matter decomposition and ecosystem respiration [Davidson and Janssens, 2006; Davidson et al., 1998; Hobbie et al., 2000; Oberbauer et al., 1991; Shaver et al., 1992; Xu and Qi, 2001]. There is also increased autotrophic respiration from more highly productive plants in subsided areas, contributing to the overall increase in $R_{\text{eco}}$ during the GS [Schuur et al., 2007; Vogel et al., 2009].

[44] In contrast to the GS, areas with increased roughness had lower C emissions during the winter (Figures 3 and 5b). Even though areas with less roughness had higher $R_{\text{eco}}$ during winter, the overall C emission throughout 2008 and 2009 was still greater from areas with the highest roughness. The reversal of the relationship between roughness and $R_{\text{eco}}$ during the winter contrasts with previous work at the site, which estimated that more subsided areas have greater C emissions [Vogel et al., 2009]. They attributed greater winter C flux from subsided areas to warmer soils resulting from delayed active layer refreezing and the added insulation of snow accumulating in subsided areas, but data in the critical winter months were admittedly scarce [Hinkel and Hurd, 2006; Vogel et al., 2009]. The inconsistency of our data may be due to differential diffusion through variations in snow cover trapped in subsided areas. The coefficient of determination of the top GAM was also very low, only 0.16 and 0.22 during the winters of 2008 and 2009, respectively. Overall, there was little variation in winter C flux (throughout space or time), and we believe more measurements are needed before our winter pattern can be fully supported. More winter data are also crucial for determining the ecosystem's annual C balance and its feedback to climate change.

## 5. Conclusions

[45] We estimated the C balance of a heterogeneous landscape undergoing permafrost thaw by incorporating spatial variation into an eddy covariance estimate. We found strong relationships between thaw-induced ground subsidence and ALD and NDVI, which both correlate with C flux. These microtopographical changes also strongly correlated with NEE. By using GAMs, we incorporated these spatial relationships back into final EC C balance estimates during gap filling. Thus, we achieved a more accurate C estimate for the heterogeneous landscape and could make predictions for areas undergoing various degrees of permafrost thaw. Using GAMs, we were better able to predict C exchange during seasonal transitions, which indicates that this type of gap-filling strategy would be well suited to systems with high temporal variability. Because all natural ecosystems vary through space and time, we believe GAMs can be an important tool for achieving more accurate C estimates.
The use of GAMs will also allow EC towers to be placed in more heterogeneous environments than those in which they have previously been used.

[46] As permafrost thaws within this upland tundra ecosystem, a heterogeneous environment is created by changes in microtopography. We found this ecosystem was a C sink during 2 consecutive years, and areas with greater thaw exhibited both greater C sequestration (GPP) and greater C loss ($R_{\text{eco}}$). Thawing of permafrost increases the amplitude of the C cycle, which has important implications for the future landscape-level C balance [Zimov et al., 1996]. Currently, GPP is stimulated more than $R_{\text{eco}}$, but this balance may shift because we found that NDVI diminished with increased permafrost thaw, indicating there may be an upper limit in productivity unless successional changes in vegetation occur.

[47] Although we found the ecosystem was a C sink during the measurement campaigns of both years, this is not representative of the annual C balance. We did not measure C flux during a portion of the winter season, and even though C fluxes during this time period are relatively low, the length of the season makes it very important. By linearly interpolating across these missing winter periods, we found that annually the ecosystem became a C source of 60 g C m$^{-2}$ yr$^{-1}$ and 13 g C m$^{-2}$ yr$^{-1}$ in 2008 and 2009, respectively.

[48] Acknowledgments. We would like to thank James T. Randerson of the University of California, Irvine, and Terry Chapin of the University of Alaska, Fairbanks, for providing us with the eddy covariance and micrometeorological equipment. For valuable field support and ideas during the initial setup of the EC tower, we thank Christian Trucco and Jason Vogel. Thanks to Forrest Stevens for support with spatial analysis and ArcGIS, Alexander Shenkin for computer and moral support, and Paulo Brando for R support. Thanks to Sasha Ivans at Campbell Scientific for EC technical support and training. We would also like to thank UNAVCO for providing GPS equipment, training, and support; UNAVCO is supported by the National Science Foundation and the National Aeronautics and Space Administration under NSF Cooperative Agreement EAR-0735156. This study was supported by grants to E.A.G.S.: NSF grants 0747195, 0516326, and 0620579 and the DOE NICCR program. E.F.B. was supported by the Department of Energy Graduate Research Environmental Fellowship.

References

Anderson, D. R., K. P. Burnham, and G. C. White (1998), Comparison of Akaike information criterion and consistent Akaike information criterion for model selection and statistical inference from capture-recapture studies, *J. Appl. Stat.*, 25, 263–282, doi:10.1080/02664769823250.

Anderson, D. R., K. P. Burnham, and G. C. White (2001), Kullback-Leibler information in resolving natural resource conflicts when definitive data exist, *Wildlife Soc. Bull.*, 29, 1260–1270.

Anisimov, O., and F. Nelson (1996), Permafrost distribution in the Northern Hemisphere under scenarios of climatic change, *Global Planet. Change*, 14, 59–72, doi:10.1016/0921-8181(96)00002-1.

Aubinet, M., et al. (1999), Estimates of the annual net carbon and water exchange of forests: The EUROFLUX methodology, *Adv. Ecol. Res.*, 30, 113–175, doi:10.1016/S0065-2504(08)60018-5.
Aubinet, M., B. Heinesch, and B. Longdoz (2002), Estimation of the carbon sequestration by a heterogeneous forest: Night flux corrections, heterogeneity of the site and inter-annual variability, *Global Change Biol.*, 8, 1053–1071, doi:10.1046/j.1365-2486.2002.00529.x.

Baldocchi, D. (2003), Assessing the eddy covariance technique for evaluating carbon dioxide exchange rates of ecosystems: Past, present and future, *Global Change Biol.*, 9, 479–492, doi:10.1046/j.1365-2486.2003.00629.x.

Boelman, N., M. Stieglitz, H. Rueth, M. Sommerkorn, K. Griffin, G. Shaver, and J. Gamon (2003), Response of NDVI, biomass, and ecosystem gas exchange to long-term warming and fertilization in wet sedge tundra, *Oecologia*, 135, 414–421.

Burba, G. G., D. K. McDermitt, A. Grelle, D. J. Anderson, and L. K. Xu (2008), Addressing the influence of instrument surface heat exchange on the measurements of CO$_2$ flux from open-path gas analyzers, *Global Change Biol.*, 14, 1854–1876, doi:10.1111/j.1365-2486.2008.01606.x.

Chapin, F., III, W. Eugster, J. McFadden, A. Lynch, and D. Walker (2000), Summer differences among arctic ecosystems in regional climate forcing, *J. Clim.*, 13, 2002–2010, doi:10.1175/1520-0442(2000)013<2002:SDAAE>2.0.CO;2.

Chapin, F., III, et al. (2005), Role of land-surface changes in arctic summer warming, *Science*, 310, 657–660, doi:10.1126/science.1117368.

Clark, K. L., H. L. Gholz, J. B. Moncrieff, F. Cropley, and H. W. Loescher (1999), Environmental controls over net exchanges of carbon dioxide from contrasting Florida ecosystems, *Ecol. Appl.*, 9, 936–948, doi:10.1890/1051-0761(1999)009[0936:ECONEO]2.0.CO;2.

Davidson, E., and I. Janssens (2006), Temperature sensitivity of soil carbon decomposition and feedbacks to climate change, *Nature*, 440, 165–173, doi:10.1038/nature04514.

Davidson, E., E. Belk, and R. Boone (1998), Soil water content and temperature as independent or confounded factors controlling soil respiration in a temperate mixed hardwood forest, *Global Change Biol.*, 4, 217–227, doi:10.1046/j.1365-2486.1998.00128.x.

Dioumaeva, I., S. Trumbore, E. Schuur, M. Goulden, M. Litvak, and A. Hirsch (2002), Decomposition of peat from upland boreal forest: Temperature dependence and sources of respired carbon, *J. Geophys. Res.*, 107, 8222, doi:10.1029/2001JD000848. [Printed 108(D3), 2003.]

Dormann, C. F. (2007), Effects of incorporating spatial autocorrelation into the analysis of species distribution data, *Global Ecol. Biogeogr.*, 16, 129–138, doi:10.1111/j.1466-8238.2006.00279.x.

Falge, E., D. Baldocchi, R. Olson, P. Anthoni, M. Aubinet, C. Bernhofer, G. Burba, R. Ceulemans, R. Clement, and H. Dolman (2001), Gap filling strategies for defensible annual sums of net ecosystem exchange, *Agric. For. Meteorol.*, 107, 43–69, doi:10.1016/S0168-1923(00)00225-2.

Goulden, M., J. Munger, S. Fan, B. Daube, and S. Wofsy (1996), Measurements of carbon sequestration by long-term eddy covariance: Methods and a critical evaluation of accuracy, *Global Change Biol.*, 2, 169–182, doi:10.1111/j.1365-2486.1996.tb00070.x.

Goward, S., C. Tucker, and D. Dye (1985), North American vegetation patterns observed with the NOAA-7 advanced very high resolution radiometer, *Vegetatio*, 64, 3–14, doi:10.1007/BF00033449.

Hastie, T. J., and R. J. Tibshirani (1990), *Generalized Additive Models*, Chapman and Hall, Boca Raton, Fla.

Hicks Pries, C., E. Schuur, and K. Crummer (2011), Holocene carbon stocks and carbon accumulation rates altered in soils undergoing permafrost thaw, *Ecosystems*, 15, 162–173, doi:10.1007/s10021-011-9500-4.
Hinkel, K. M., and J. K. Hurd (2006), Permafrost destabilization and thermokarst following snow fence installation, Barrow, Alaska, USA, *Arct. Antarct. Alp. Res.*, 38, 530–539, doi:10.1657/1523-0430(2006)38[530:PDTATF]2.0.CO;2.

Hinzman, L., N. Bettez, W. Bolton, F. Chapin, M. Dyurgerov, C. Fastie, B. Griffith, R. Hollister, A. Hope, and H. Huntington (2005), Evidence and implications of recent climate change in northern Alaska and other arctic regions, *Clim. Change*, 72, 251–298, doi:10.1007/s10584-005-3532-2.

Hobbie, S., J. Schimel, S. Trumbore, and J. Randerson (2000), Controls over carbon storage and turnover in high-latitude soils, *Global Change Biol.*, 6, suppl. 1, 196–210, doi:10.1046/j.1365-2486.2000.00621.x.

Hollinger, D. Y., F. M. Kelliher, J. N. Byers, J. E. Hunt, T. M. McSeveny, and P. L. Weir (1994), Carbon dioxide exchange between an undisturbed old-growth temperate forest and the atmosphere, *Ecology*, 75, 134–150, doi:10.2307/1939390.

Intergovernmental Panel on Climate Change (2007), *Climate Change 2007: The Physical Science Basis: Working Group I Contribution to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change*, edited by S. Solomon et al., 996 pp., Cambridge Univ. Press, New York.

Jorgenson, M., and T. Osterkamp (2005), Response of boreal ecosystems to varying modes of permafrost degradation, *Can. J. For. Res.*, 35, 2100–2111, doi:10.1139/x05-153.

Jorgenson, M., C. Racine, J. Walters, and T. Osterkamp (2001), Permafrost degradation and ecological changes associated with a warming climate in central Alaska, *Clim. Change*, 48, 551–579, doi:10.1023/A:1005667424293.

Kane, D., K. Hinkel, D. Goering, L. Hinzman, and S. Outcalt (2001), Non-conductive heat transfer associated with frozen soils, *Global Planet. Change*, 30, 275–292, doi:10.1016/S0921-8181(01)00095-9.

Kim, Y. J., and C. Gu (2004), Smoothing spline Gaussian regression: More scalable computation via efficient approximation, *J. R. Stat. Soc., Ser. B*, 66, 337–356, doi:10.1046/j.1369-7412.2003.05316.x.

Kljun, N., R. Kormann, M. Rotach, and F. Meixner (2003), Comparison of the Lagrangian footprint model LPDM-B with an analytical footprint model, *Boundary Layer Meteorol.*, 106, 349–355, doi:10.1023/A:1021141223386.

Kormann, R., and F. Meixner (2001), An analytical footprint model for non-neutral stratification, *Boundary Layer Meteorol.*, 99, 207–224, doi:10.1023/A:1018991015119.

Laine, A., M. Sottocornola, G. Kiely, K. Byrne, D. Wilson, and E. Tuittila (2006), Estimating net ecosystem exchange in a patterned ecosystem: Example from blanket bog, *Agric. For. Meteorol.*, 138, 231–243, doi:10.1016/j.agrformet.2006.05.005.

La Puma, I., T. Philippi, and S. Oberbauer (2007), Relating NDVI to ecosystem CO$_2$ exchange patterns in response to season length and soil warming manipulations in arctic Alaska, *Remote Sens. Environ.*, 109, 225–236, doi:10.1016/j.rse.2007.01.001.

Lawrence, D., A. Slater, V. Romanovsky, and D. Nicolsky (2008), Sensitivity of a model projection of near-surface permafrost degradation to soil column depth and representation of soil organic matter, *J. Geophys. Res.*, 113, F02011, doi:10.1029/2007JF000883.

Lee, H., E. Schuur, J. Vogel, M. Lavoie, D. Bhadra, and C. Staudhammer (2011), A spatially explicit analysis to extrapolate carbon fluxes in tundra ecosystems, *Ecol. Appl.*, 21, 1379–1393.

Liebethal, C., and T. Foken (2007), Evaluation of six parameterization approaches for the ground heat flux, *Theor. Appl. Climatol.*, 88, 43–56, doi:10.1007/s00704-005-0234-0.
Liebethal, C., B. Huwe, and T. Foken (2005), Sensitivity analysis for two ground heat flux calculation approaches, *Agric. For. Meteorol.*, 132, 253–262, doi:10.1016/j.agrformet.2005.08.001.

Mack, M., E. Schuur, M. Bret-Harte, G. Shaver, and F. Chapin III (2004), Ecosystem carbon storage in arctic tundra reduced by long-term nutrient fertilization, *Nature*, 431, 440–443, doi:10.1038/nature02887.

Mikan, C., J. Schimel, and A. Doyle (2002), Temperature controls of microbial respiration in arctic tundra soils above and below freezing, *Soil Biol. Biochem.*, 34, 1785–1795, doi:10.1016/S0038-0717(02)00168-2.

Moncrieff, J. B., J. M. Massheder, H. de Bruin, J. Elbers, T. Friborg, B. Heusinkveld, P. Kabat, S. Scott, H. Soegaard, and A. Verhoef (1997), A system to measure surface fluxes of momentum, sensible heat, water vapour and carbon dioxide, *J. Hydrol.*, 188–189, 589–611, doi:10.1016/S0022-1694(96)03194-0.

Natali, S. M., E. A. G. Schuur, and R. L. Rubin (2012), Increased plant productivity in Alaskan tundra with experimental warming of soil and permafrost, *J. Ecol.*, doi:10.1111/j.1365-2745.2011.01925.x, in press.

Oberbauer, S., J. Tenhunen, and J. Reynolds (1991), Environmental effects on CO$_2$ efflux from water track and tussock tundra in arctic Alaska, USA, *Arct. Alp. Res.*, 23, 162–169, doi:10.2307/1551380.

Osterkamp, T., M. Jorgenson, E. Schuur, Y. Shur, M. Kanevskiy, J. Vogel, and V. Tumskoy (2009), Physical and ecological changes associated with warming permafrost and thermokarst in interior Alaska, *Permafrost Periglacial Processes*, 20, 235–256, doi:10.1002/ppp.656.

R Development Core Team (2010), *R: A Language and Environment for Statistical Computing*, R Found. for Stat. Comput., Vienna.

Reynolds, O. (1895), On the dynamical theory of incompressible viscous fluids and the determination of the criterion, *Philos. Trans. R. Soc. London, Ser. A*, 186, 123–164, doi:10.1098/rsta.1895.0004.

Richardson, A., M. Mahecha, E. Falge, J. Kattge, A. Moffat, D. Papale, M. Reichstein, V. Stauch, B. Braswell, and G. Churkina (2008), Statistical properties of random CO$_2$ flux measurement uncertainty inferred from model residuals, *Agric. For. Meteorol.*, 148, 38–50, doi:10.1016/j.agrformet.2007.09.001.

Rivkina, E., E. Friedmann, C. McKay, and D. Gilichinsky (2000), Metabolic activity of permafrost bacteria below the freezing point, *Appl. Environ. Microbiol.*, 66, 3230–3233.

Romanovsky, V., S. Gruber, A. Instanes, H. Jin, S. Marchenko, S. Smith, D. Trombotto, and K. Walter (2007), Frozen ground, in *Global Outlook for Ice and Snow*, pp. 181–200, U. N. Environ. Programme, Arendal, Norway.

Saito, K., M. Kimoto, T. Zhang, K. Takata, and S. Emori (2007), Evaluating a high-resolution climate model: Simulated hydrothermal regimes in frozen ground regions and their change under the global warming scenario, *J. Geophys. Res.*, 112, F02S11, doi:10.1029/2006JF000577.

Schmid, H. (1997), Experimental design for flux measurements: Matching scales of observations and fluxes, *Agric. For. Meteorol.*, 87, 179–200, doi:10.1016/S0168-1923(97)00177-7.

Schmid, H. (2002), Footprint modelling for vegetation atmosphere exchange studies: A review and perspective, *Agric. For. Meteorol.*, 113, 159–183, doi:10.1016/S0168-1923(02)00107-7.

Schmid, H., and C. Lloyd (1999), Spatial representativeness and the location bias of flux footprints over inhomogeneous areas, *Agric. For. Meteorol.*, 93, 195–209, doi:10.1016/S0168-1923(98)00119-1.
Schuur, E., K. Crummer, J. Vogel, and M. Mack (2007), Plant species composition and productivity following permafrost thaw and thermokarst in Alaskan tundra, *Ecosystems*, 10, 280–292, doi:10.1007/s10021-007-9024-0.

Schuur, E., J. Bockheim, J. Canadell, E. Euskirchen, C. Field, S. Goryachkin, S. Hagemann, P. Kuhry, P. Lafleur, and H. Lee (2008), Vulnerability of permafrost carbon to climate change: Implications for the global carbon cycle, *BioScience*, 58, 701–714, doi:10.1641/B580807.

Schuur, E., J. Vogel, K. Crummer, H. Lee, J. Sickman, and T. Osterkamp (2009), The effect of permafrost thaw on old carbon release and net carbon exchange from tundra, *Nature*, 459, 556–559, doi:10.1038/nature08031.

Sellers, P. (1985), Canopy reflectance, photosynthesis and transpiration. Part II. The role of biophysics in the linearity of their interdependence, *Int. J. Remote Sens.*, 6, 1335–1372, doi:10.1080/01431168508948283.

Shaver, G., W. Billings, F. Chapin III, A. Giblin, K. Nadelhoffer, W. Oechel, and E. Rastetter (1992), Global change and the carbon balance of arctic ecosystems, *BioScience*, 42, 433–441, doi:10.2307/1311862.

Soil Survey Staff (1999), *Soil Taxonomy: A Basic System of Soil Classification for Making and Interpreting Soil Surveys*, U.S. Govt. Print. Off., Washington, D. C.

Tarnocai, C., J. Canadell, G. Mazhitova, E. Schuur, P. Kuhry, and S. Zimov (2009), Soil organic carbon pools in the northern circumpolar permafrost region, *Global Biogeochem. Cycles*, 23, GB2023, doi:10.1029/2008GB003327.

Thornley, J. H., and I. R. Johnson (1990), *Plant and Crop Modeling: A Mathematical Approach to Plant and Crop Physiology*, Clarendon, Oxford, U. K.

Tucker, C. (1979), Red and photographic infrared linear combinations for monitoring vegetation, *Remote Sens. Environ.*, 8, 127–150, doi:10.1016/0034-4257(79)90013-0.

Venables, W., and B. Ripley (2002), *Modern Applied Statistics With S*, 4th ed., Springer, New York.

Vogel, J., E. Schuur, C. Trucco, and H. Lee (2009), Response of CO$_2$ exchange in a tussock tundra ecosystem to permafrost thaw and thermokarst development, *J. Geophys. Res.*, 114, G04018, doi:10.1029/2008JG000901.

Vourlitis, G., J. Verfaillie, W. Oechel, A. Hope, D. Stow, and R. Engstrom (2003), Spatial variation in regional CO$_2$ exchange for the Kuparuk River Basin, Alaska over the summer growing season, *Global Change Biol.*, 9, 930–941, doi:10.1046/j.1365-2486.2003.00639.x.

Webb, E. K., G. I. Pearman, and R. Leuning (1980), Correction of flux measurements for density effects due to heat and water-vapour transfer, *Q. J. R. Meteorol. Soc.*, 106, 85–100, doi:10.1002/qj.49710644707.

Wilczak, J., S. Oncley, and S. Stage (2001), Sonic anemometer tilt correction algorithms, *Boundary Layer Meteorol.*, 99, 127–150, doi:10.1023/A:1018962604463.

Williams, M., R. Bell, L. Spadavecchia, E. Street, and M. Van Wijk (2008), Upscaling leaf area index in an arctic landscape through multiscale observations, *Global Change Biol.*, 14, 1517–1530, doi:10.1111/j.1365-2486.2008.01590.x.

Wood, S. (2006), *Generalized Additive Models: An Introduction With R*, Chapman and Hall, Boca Raton, Fla.

Wood, S. N. (2008), Fast stable direct fitting and smoothness selection for generalized additive models, *J. R. Stat. Soc., Ser. B*, 70, 495–518, doi:10.1111/j.1467-9868.2007.00646.x.

Xu, M., and Y. Qi (2001), Soil-surface CO$_2$ efflux and its spatial and temporal variations in a young ponderosa pine plantation in northern California, *Global Change Biol.*, 7, 667–677, doi:10.1046/j.1354-1013.2001.00435.x.
Yocum, L. C., G. W. Adema, and C. K. Hults (2006), *A Baseline Study of Permafrost in the Toklat Basin*, Denali Natl. Park and Preserve, Denali Park, Alaska.

Zhang, T., R. Barry, K. Knowles, J. Heginbottom, and J. Brown (1999), Statistics and characteristics of permafrost and ground-ice distribution in the Northern Hemisphere, *Polar Geogr.*, 23, 132–154, doi:10.1080/10889379909377670.

Zhang, T., O. Frauenfeld, M. Serreze, A. Etringer, C. Oelke, J. McCreight, R. Barry, D. Gilichinsky, D. Yang, and H. Ye (2005), Spatial and temporal variability in active layer thickness over the Russian arctic drainage basin, *J. Geophys. Res.*, 110, D16101, doi:10.1029/2004JD005642.

Zimov, S., S. Davidov, Y. Voropayev, S. Prosiannikov, I. Semiletov, M. Chapin, and F. Chapin (1996), Siberian CO$_2$ efflux in winter as a CO$_2$ source and cause of seasonality in atmospheric CO$_2$, *Clim. Change*, 33, 111–120, doi:10.1007/BF00140516.

Zuur, A., E. Ieno, N. Walker, A. Saveliev, and G. Smith (2009), *Mixed Effects Models and Extensions in Ecology With R*, Springer, New York.

E. F. Belshe, R. Bracho, and E. A. G. Schuur, Department of Biology, University of Florida, 220 Bartram-Carr Hall, PO Box 118525, Gainesville, FL 32611, USA. (email@example.com)

B. M. Bolker, Department of Mathematics and Statistics, McMaster University, 314 Hamilton Hall, 1280 Main St. W., Hamilton, ON L8S 4K1, Canada.
FAST AND STABLE UNITARY QR ALGORITHM*

JARED L. AURENTZ†, THOMAS MACH‡, RAF VANDEBRIL§, AND DAVID S. WATKINS¶

Abstract. A fast Fortran implementation of a variant of Gragg's unitary Hessenberg QR algorithm is presented. It is proved, moreover, that all QR- and QZ-like algorithms for the unitary eigenvalue problem are equivalent. The algorithm is backward stable. Numerical experiments are presented that confirm the backward stability and compare the speed and accuracy of this algorithm with other methods.

Key words. eigenvalue, unitary matrix, Francis's QR algorithm, core transformations, rotators

AMS subject classifications. 65F15, 65H17, 15A18, 15B10

1. Introduction. This project began with a request from Alan Edelman [14] for fast code to compute the eigenvalues of a unitary matrix. We were able to fulfill this request by taking our unitary-plus-rank-one code [4] and removing the hard parts. At the same time we searched the web to find out what is already publicly available in Fortran or some other compiled language. To our surprise, even though many papers on the unitary eigenvalue problem have been written in the past thirty years (see Section 3), we found only one item, the divide-and-conquer code of Ammar, Reichel, and Sorensen [2]. Because of this surprising shortage of publicly available software, we decided to publish our codes.

We use Francis's implicitly shifted QR algorithm, which is the most popular method for solving dense medium-sized eigenvalue problems. A unitary upper Hessenberg matrix can be described by $O(n)$ parameters. Using this representation of the matrix, one can implement the algorithm with a complexity of $O(n^2)$ instead of $O(n^3)$. This was first done by Gragg [21] in 1986, and since then a number of improvements and variants have been suggested. For example, instead of operating on the upper Hessenberg form, one can operate on the CMV form or some other twisted factorization [35, 36], or one can transform the matrix to an odd-even pencil and apply a QZ algorithm to the pencil [5]. In this paper we prove that all of these variants are equivalent, that is, they are just different ways of looking at the same algorithm.

We conclude the paper with numerical experiments that demonstrate the speed and accuracy of our code. We compare against standard LAPACK 3.5.0 codes, which are $O(n^3)$, and the divide-and-conquer code [2]. Our code is much faster than the LAPACK codes and just as accurate. For most of the classes of test problems that we considered, the divide-and-conquer code is faster than ours, but ours is much more accurate.

*Received November 18, 2014. Accepted May 21, 2015. Published online on June 17, 2015. Recommended by C. Jagels.
The research was partially supported by the Research Council KU Leuven, projects CREA-13-012 Can Unconventional Eigenvalue Algorithms Supersede the State of the Art, OT/11/055 Spectral Properties of Perturbed Normal Matrices and their Applications, CoE EF/05/006 Optimization in Engineering (OPTEC), and fellowship F+13/020 Exploiting Unconventional QR Algorithms for Fast and Accurate Computations of Roots of Polynomials; by the Fund for Scientific Research–Flanders (Belgium), project G034212N Reestablishing Smoothness for Matrix Manifold Optimization via Resolution of Singularities; by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office, Belgian Network DYSCO (Dynamical Systems, Control, and Optimization); and by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no. 291068. The views expressed in this article are not those of the ERC or the European Commission, and the European Union is not liable for any use that may be made of the information contained here.

†Mathematical Institute, University of Oxford, Andrew Wiles Building, Woodstock Road, OX2 6GG Oxford, UK (email@example.com).

‡§Department of Computer Science, KU Leuven, Celestijnenlaan 200A, 3001 Leuven (Heverlee), Belgium ({thomas.mach,email@example.com}).

¶Department of Mathematics, Washington State University, Neill Hall, Pullman, WA 99164-3113, USA (firstname.lastname@example.org).

The paper is organized as follows. Section 2 lists a few applications of the unitary eigenvalue problem, and Section 3 provides an overview of previous work. Section 4 introduces the parametrization and establishes the notation and terminology we will use. In Section 5 we describe our version of the algorithm. In Section 6 we look at some of the variants and prove that they are all equivalent. In Section 7 we present our numerical experiments. The software associated with this paper has been packaged into a Fortran 90 library called eiscor and can be found at https://github.com/eiscor/eiscor.

2. Some applications. Killip and Nenciu [26] describe an ensemble of $n \times n$ random unitary matrices whose eigenvalues are distributed according to the Gibbs distribution for $n$ particles of the Coulomb gas on the unit circle. Edelman [14] wanted code to use for tests involving large numbers of random matrices with large $n$ from this ensemble. Gauss–Szegő quadrature formulas [20, 22, 39] are formulas of maximal degree for estimating integrals with respect to measures with support on the unit circle. The sample points are the eigenvalues of a certain unitary matrix, and the weights are the squares of the absolute values of the first components of the eigenvectors. Pisarenko frequency estimates [30] can be computed by solving a unitary eigenvalue problem as described by Cybenko [9].

3. Overview. A unitary Hessenberg matrix $A$ is efficiently stored as a product of $n$ unitary factors
$$A = C_1 \cdots C_{n-1} C_n,$$
where each $C_i$ differs from the identity matrix only in the rows/columns $i$ and $i+1$. Gragg [21] developed the first unitary Hessenberg QR algorithm, updating the factors directly in each QR step. Convergence had been proved earlier by Eberlein and Huang [13]. Wang and Gragg [37, 38] analyzed the relationship between convergence and shift choice. M. Stewart [32] proved the original algorithm to be numerically unstable. Improvements and a proof of stability are due to Gragg [23] and Stewart [33].
David and Watkins [10] presented a multishift version. Ammar, Gragg, and Reichel [1] presented an approach for orthogonal matrices that computes the singular values of the matrix's real and imaginary parts in order to get eigenvalues with accurate real and imaginary parts. Rutishauser [15] deduced a method relying on the LU decomposition of the orthogonal matrix. An approach that applies a QZ algorithm to a unitary pencil was described by Bunse-Gerstner and Elsner [5], and convergence as a particular case of a more general setting was proved later on by Vandebril and Watkins [35]. A bisection method, relying on a Sturm sequence, was proposed by Bunse-Gerstner and He [6]. Divide-and-conquer approaches were developed originally by Gragg and Reichel [24] and Ammar, Reichel, and Sorensen [2, 3], and later by Gu et al. [25]. Unitary matrices, their rank properties, and eigenvalue methods were analyzed in great generality by Delvaux and Van Barel [11, 12]. An alternative interpretation of Gragg's algorithm [21], linking the transmission of shifts to the updating of the matrix factorization, is included in [28]. The algorithm presented by Gemignani [18] differs significantly from the other methods. A Möbius transformation is used to convert the unitary matrix to Hermitian diagonal-plus-semiseparable form, after which the eigenvalues of this matrix are computed.

4. Unitary matrix factorization and properties. We assume the matrix $A \in \mathbb{C}^{n \times n}$ to be of unitary Hessenberg form already [11, 19, 34], thus $A_{i,j} = 0$ for all $i > j + 1$. Each unitary Hessenberg matrix can be factored into a product of $n$ unitary matrices $A = C_1 C_2 \cdots C_{n-1} C_n$, where each $C_i$ is equal to the identity except in the $2 \times 2$ submatrix $(i : i+1, i : i+1)$. Thus $C_n$ differs from the identity only in the $(n, n)$ position. Optionally we can absorb $C_n$ into $C_{n-1}$ so that the product has only $n - 1$ factors: $A = C_1 C_2 \cdots C_{n-1}$. We call the matrices $C_i$ core transformations. The subscript $i$ refers to the position of the active part, and it follows that core transformations $C_i$ and $C_j$ commute whenever $|i - j| > 1$.

We will use a pictorial description to assist in the understanding of the algorithm. A core transformation is drawn as a small bracket whose tiny arrows pinpoint the rows of the active part (the diagrams themselves are omitted here). Our unitary Hessenberg matrix $A$ is thus factored as a descending sequence of core transformations ($n = 8$):
$$A = C_1 C_2 \cdots C_{n-1}.$$

There are two operations, the turnover and the fusion, required to describe the algorithm. A fusion unites two core transformations acting on the same rows by forming their product. A $3 \times 3$ unitary matrix can always be factored in two different ways as the product of three core transformations; by computing the QL and QR decompositions, we obtain the equality
$$U_1 V_2 W_1 = \begin{bmatrix} \times & \times & \times \\ \times & \times & \times \\ \times & \times & \times \end{bmatrix} = U_2 V_1 W_2,$$
where the subscripts indicate the position of the active part. A turnover is the transition from the left-hand factorization to the right-hand one, or vice versa, thereby changing the active parts of the core transformations without changing their product.
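To make the two primitives concrete, here is a minimal NumPy sketch of our own (the paper's actual library, eiscor, is written in Fortran 90 and presumably operates directly on the rotation parameters without forming matrices). A fusion is simply a 2×2 matrix product; the turnover below refactors the explicit 3×3 product using two zeroing rotations. All function names are ours, not the library's.

```python
import numpy as np

def embed(Q, i, n):
    """Embed the 2x2 unitary Q into rows/columns i, i+1 of the n x n identity."""
    C = np.eye(n, dtype=complex)
    C[i:i+2, i:i+2] = Q
    return C

def zeroing_rotation(a, b):
    """A 2x2 unitary G with G @ [a, b]^T = [r, 0]^T (assumes (a, b) != (0, 0))."""
    r = np.hypot(abs(a), abs(b))
    c, s = a / r, b / r
    return np.array([[np.conj(c), np.conj(s)], [-s, c]])

def fusion(Q1, Q2):
    """Unite two core transformations acting on the same rows: just a product."""
    return Q1 @ Q2

def turnover(U1, V2, W1):
    """Refactor embed(U1,0) embed(V2,1) embed(W1,0) (a 3x3 unitary) as
    embed(U2,1) embed(V1,0) embed(W2,1); returns (U2, V1, W2)."""
    M = embed(U1, 0, 3) @ embed(V2, 1, 3) @ embed(W1, 0, 3)
    G = zeroing_rotation(M[1, 0], M[2, 0])   # rotation on rows 2,3
    N = embed(G, 1, 3) @ M                   # N = U2^H M has N[2,0] = 0
    # N = embed(V1,0) @ embed(W2,1) and N e1 = embed(V1,0) e1, so a rotation
    # on rows 1,2 zeroing N[1,0] is exactly V1^H.
    H = zeroing_rotation(N[0, 0], N[1, 0])
    P = embed(H, 0, 3) @ N                   # P = V1^H N = embed(W2,1)
    return G.conj().T, H.conj().T, P[1:3, 1:3]

# Sanity check with three random 2x2 unitaries.
def random_unitary_2x2(rng):
    Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return Q

rng = np.random.default_rng(0)
U1, V2, W1 = (random_unitary_2x2(rng) for _ in range(3))
U2, V1, W2 = turnover(U1, V2, W1)
M  = embed(U1, 0, 3) @ embed(V2, 1, 3) @ embed(W1, 0, 3)
M2 = embed(U2, 1, 3) @ embed(V1, 0, 3) @ embed(W2, 1, 3)
assert np.allclose(M, M2)    # same product, different pattern
```

Forming the 3×3 matrix merely keeps the sketch short and transparent; a production turnover works in O(1) arithmetic on the six rotation parameters alone.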
We can use a turnover to pass a core transformation through a descending sequence of core transformations: repeated turnovers make a core transformation disappear from the right of the sequence while a new one appears on the left (the step-by-step diagrams are omitted here). For this operation we will use an arrow as shorthand; the arrow indicates that a core transformation disappears from the right and a new one appears on the left. Exactly two of the core transformations from the descending sequence participate in the turnover. We will also use the analogous operation, which passes a core transformation from left to right through an ascending sequence.

5. Unitary QR algorithm. We use Francis's implicitly shifted QR algorithm [16, 17]. Each iteration consists of three steps: choose a suitable shift to enhance the convergence, apply a similarity transformation to create a bulge, and chase the bulge until it disappears from the bottom of the matrix. Repeated iterations lead to deflations, and eventually we find all of the eigenvalues as explained in numerous textbooks, e.g., [40].

Choice of shifts. The shift $\mu$ is used to accelerate convergence. A good shift is one that approximates an eigenvalue well. Since all eigenvalues of $A$ have absolute value 1, it seems reasonable to choose a shift that lies on the unit circle. We use a projected or unimodular Wilkinson shift. Therefore we pick the eigenvalue $\hat{\mu}$ of the trailing $2 \times 2$ submatrix $A_{k-1:k,k-1:k}$ that is closer to $A_{k,k}$, with $k$ the largest index of the undeflated part of $A$. The shift $\mu$ is the projection $\hat{\mu}/|\hat{\mu}|$ of $\hat{\mu}$ on the unit circle. The projected or unimodular Wilkinson shift strategy was investigated by Wang and Gragg in [38]. They proved global convergence and showed by numerical experiments that the projected Wilkinson shift is superior to the standard Wilkinson shift without projection and to the shift given in [13].

Initial similarity transformation: bulge generation. The initial similarity transformation introduces the bulge perturbing the Hessenberg structure. For a suitable shift $\mu$, let $x = (A - \mu I)e_1$, with $e_1$ the first standard basis vector. As $A$ is Hessenberg, $x$ has only two nonzero elements. The core transformation $B_1$ acting on rows one and two such that $B_1^H x = e_1$ determines the first similarity transformation $B_1^H A B_1$. Pictorially we get
$$B_1^H A B_1 = B_1^H C_1 C_2 \cdots C_{n-1} B_1 = \tilde{C}_1 C_2 \cdots C_{n-1} B_1,$$
where we fused the two leftmost core transformations $B_1^H C_1 = \tilde{C}_1$, and only $B_1$, the bulge, disturbs the descending sequence of core transformations.

Bulge chasing. Chasing the bulge consists of performing two operations repeatedly until the bottom of the matrix is reached: first execute a turnover moving the bulge to the left, and second do a similarity transformation bringing the bulge back to the right. (The diagrams of the first two chasing steps, in which black arrows indicate the bulge's motion, are omitted here.)
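As an aside, the projected Wilkinson shift described above is simple enough to state in a few lines of NumPy. The sketch below is ours and is only meant to pin down the recipe; it does not mirror the Fortran implementation.

```python
import numpy as np

def projected_wilkinson_shift(A, k):
    """Shift for a QR step on the k x k undeflated leading block of the
    unitary Hessenberg matrix A (0-based indexing)."""
    evals = np.linalg.eigvals(A[k-2:k, k-2:k])      # trailing 2x2 submatrix
    mu_hat = evals[np.argmin(abs(evals - A[k-1, k-1]))]
    # Project onto the unit circle, where all eigenvalues of A lie
    # (fallback for the degenerate case mu_hat = 0).
    return mu_hat / abs(mu_hat) if mu_hat != 0 else 1.0 + 0j
```

Note that the trailing 2×2 submatrix of a unitary Hessenberg matrix need not itself be unitary, so $\hat{\mu}$ can fail to be unimodular (it can even vanish, whence the guard); projecting it back onto the unit circle is exactly the point of the strategy.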
**Computing eigenvectors.** We emphasize that this part of the code is not at all novel and certainly not fast. It computes the eigenvectors by forming the product of all similarity transformations applied to $A$. Thus $\mathcal{O}(n^2)$ rotations are multiplied together, which costs $\mathcal{O}(n^3)$ flops. We use the eigenvectors for the residual computations in the numerical experiments in the next section. These serve as a check on the backward stability and accuracy of the eigenvalue computations. Our eigenvector routines would be useful to anyone who wants a complete set of eigenvectors that is orthonormal to working precision and is not concerned with the high computational cost. If just a few eigenvectors are wanted or numerical orthogonality is not a concern, then inverse iteration is superior. We may provide an $\mathcal{O}(n^2)$ eigenvector routine in the future.

**Gauss-Szegő quadrature formulas.** Our codes can be used to compute Gauss-Szegő quadrature formulas [20, 22, 39] for estimating integrals with respect to measures on the unit circle. The sample points are the eigenvalues of a certain unitary matrix, and the weights are the squares of the absolute values of the first components of the eigenvectors. Thus the weights can be obtained by accumulating just the first row of the eigenvector matrix, and this can be done in $O(n^2)$ time. In practice the weight computation increases the computing time by some 60% to 70%.

**6. Twisted QR algorithms and pencil methods.** In Section 4, the unitary matrix $A$ was assumed to be of Hessenberg form. By executing some similarity transformations on the factored form, one can easily realize any factorization $U^H A U = C_{p_1} C_{p_2} \cdots C_{p_{n-1}}$, where $p$ is a permutation of $[1, 2, \ldots, n - 1]$, thereby changing not the core transformations themselves, only their relative positions. The core transformations are said to form a *twisted pattern*. Direct similarity transformations to any twisted shape also exist [34], and the most well-known twisted ordering is likely the CMV ordering (assume $n$ even) $C_1 C_3 C_5 \cdots C_{n-1} \cdot C_2 C_4 C_6 \cdots C_{n-2}$ [7, 8, 31, 39]. Eigenvalues of such a twisted pattern can be obtained via *twisted* QR algorithms, where the convergence is determined by rational functions instead of polynomials as in the classical QR case [34, 36]. In general these are distinct algorithms, but in the unitary case they all collapse to the same thing: there is no benefit in considering a particular pattern; all (twisted) QR algorithms are independent of $p$ and provide identical outcomes, as we will prove in this section.

**6.1. The unitary twisted QR algorithm.** The shift computation and deflation procedures are the same as in the Hessenberg case, so we discuss only the initial similarity transformation and the bulge chase. In [36] we developed twisted QR steps of arbitrary degree. Here we will restrict our attention to single (degree-one) steps and rely on the fact that one step of degree $m$ is equivalent to $m$ steps of degree one.

**Initial similarity transformation.** The form of the initial similarity transformation is determined by the relative positions of $C_1$ and $C_2$. If $C_1$ is positioned to the left of $C_2$, the initial core transformation is computed as in the Hessenberg case, that is, $B_1^H$ transforms the second entry of $x = (A - \mu I)e_1 = (C_1 - \mu I)e_1$ to zero. If $C_1$ is located to the right of $C_2$, we need to zero the second element of $x = (I - \mu A^{-1})e_1 = (I - \mu A^H)e_1 = (I - \mu C_1^H)e_1$ (see [36] for more details).
One easily checks that the core transformation $B_1^H C_1$ will do the job. In either case we get the same outcome. In the first case, $C_1 C_2$ is transformed to $B_1^H C_1 C_2 B_1$, while in the second case, $C_2 C_1$ is transformed to $B_1^H C_2 C_1 B_1$. After a fusion this becomes a product of three core transformations in a vee-shaped configuration. We emphasize that the two different ways of initializing the iteration result in exactly the same three core transformations. The first step of the bulge chase will be to turn them over.

**Bulge chasing.** The bulge chase consists again of repeatedly applying a turnover followed by a similarity. But now the bulge can show up on either the left or the right. The bulge will be the core transformation $B_{j+1}$ that is not sandwiched between two core transformations $\hat{C}_j$ and $C_{j+2}$, where $\hat{C}_j$ results from the turnover and $C_{j+2}$ is part of the original factorization. As a consequence one can bring, via a unitary similarity, the bulge to the other side, either right or left. The flow of the algorithm is illustrated below for $n = 8$ on the pattern $C_7 C_6 C_5 C_3 C_4 C_2 C_1$ (the pictorial rendering of this product is omitted here). An initial similarity and fusion are performed. Then a turnover is executed creating a bulge on the right. A unitary similarity transformation moves the bulge back to the left. The next turnover creates a bulge on the left. Another similarity transformation brings the bulge back to the right, and the process is continued. (The accompanying diagrams, showing the configurations after the initial similarity and after each turnover and similarity, are omitted here.) Comparing the pattern of the intermediate factorizations with the original pattern, one can observe that the pattern moves upward [34]. Once we get to the bottom, we have the freedom to position the final core transformation to the left or to the right of the current pattern. In the general matrix setting, this choice matters and can have a significant impact on the convergence rate [36]. However, in the unitary case it makes no difference, as we now show.

**6.2. Equivalence of twisted QR steps.** Suppose the result of a QR step on the factored Hessenberg matrix equals $\hat{C}_1 \hat{C}_2 \cdots \hat{C}_{n-1}$. Then we will prove that the result of a twisted QR step executed on an arbitrary pattern equals $\hat{C}_{p_1} \hat{C}_{p_2} \cdots \hat{C}_{p_{n-1}}$, that is, the factors will be identical. Only the pattern could differ. This proves that in the unitary case, the pattern is of no importance: any (twisted) QR step, with identical shift, always results in identical core transformations. We will prove by induction that the turnover executed in each step always involves the same three core transformations no matter what pattern of core transformations was considered originally. To initiate the induction, recall the discussion of the initial similarity transformation. There we noted that after the first fusion, the three core transformations at the top of the configuration are always the same regardless of the original ordering. These are the core transformations that participate in the first turnover operation. Now we do the induction step.
Studying our example above, we find that at each step we have one of two configurations: a descending triple of core transformations $a$, $b$, $c$ with a fourth core transformation $d$ attached below it, on either the right or the left (the two diagrams are omitted here). In either case we do the turnover taking $a$, $b$, $c$ into $x$, $y$, $z$. In the left case we move $x$ from the left to the right, and in the right case, we move $z$ from right to left. In both cases the next turnover will involve $z$, $d$, and $x$. We note also that the transformation $y$ will not participate in any subsequent turnovers in this iteration. It will become one of the core transformations in the representation of the next iterate, and it is the same in both cases. This completes the induction. After the final turnover and similarity, a single fusion concludes the chasing step. It is easy to check that the flexibility to execute the fusion on the left or on the right has no effect on the contents of the final bottom core transformation.

**6.3. QZ methods on unitary pencils.** Let $A = C_1 \cdots C_{n-1}$ be a factorization into core transformations of a unitary Hessenberg matrix. The eigenvalues of $A$ can be computed as the eigenvalues of any pencil $(U, V^H)$, where $U$ is a unitary matrix constructed from some of the core transformations $C_i$ and $V$ from the remaining core transformations, in any ordering [35]. For any such pencil there is a twisted QZ algorithm to compute the eigenvalues. In the unitary case, the twisted QZ algorithm on $(U, V^H)$ is identical to a twisted QR algorithm on the product $UV$. Since all twisted QR methods are identical, we can conclude that all twisted QZ algorithms are essentially the same and no different from the Hessenberg QR algorithm in the unitary case. The algorithm of Bunse-Gerstner and Elsner [5] falls into this category.

**7. Numerical experiments.** The computations were executed on an Intel Core i5-3570 CPU running at 3.40 GHz with 8 GB of memory. GFortran 4.6.3 was used to compile the Fortran codes. We compared our codes with LAPACK 3.5.0's ZHSEQR and DHSEQR codes and Ammar, Reichel, and Sorensen's unitary divide-and-conquer (D&C) code [2]. Our algorithm is backward stable. Therefore it should always produce results that have a backward error that is a modest multiple of the machine precision. Since the eigenvalues of a unitary matrix are perfectly conditioned, such a tiny backward error guarantees that the computed eigenvalues are accurate to (nearly) machine precision. The following experiments verify these expectations. For most of our test matrices the exact eigenvalues are not known. Therefore we use the maximum residual
$$\max_i \|Av_i - \lambda_i v_i\|_2, \tag{7.1}$$
with $(\lambda_i, v_i)$ the computed eigenpairs, as our measure of accuracy.
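For concreteness, the residual (7.1) can be evaluated with a few lines of NumPy. This sketch is ours and assumes the computed eigenvectors are stored, with unit norm, as the columns of V.

```python
import numpy as np

def max_residual(A, evals, V):
    """Maximum over i of ||A v_i - lambda_i v_i||_2, i.e., the accuracy
    measure (7.1); column i of V is the (unit-norm) eigenvector v_i."""
    R = A @ V - V * evals            # column i equals A v_i - lambda_i v_i
    return np.linalg.norm(R, axis=0).max()

# e.g., against a dense reference solver:
# evals, V = np.linalg.eig(A); print(max_residual(A, evals, V))
```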
The residual (7.1) is a backward error, and if it is small, it guarantees an equally small error in the eigenvalues. In one example, Example 7.5, the eigenvalues are known exactly. In that example, and only in that example, we used the maximum error in the computed eigenvalues as our measure of accuracy. In the examples below, the times shown are for computing eigenvalues only. We also computed eigenvectors in order to compute the residuals (7.1) and check our backward stability claim. The unitary divide-and-conquer code also has the capability of computing eigenvectors. That code is fast, much faster than ours when eigenvectors are also computed, but it is also much less accurate. We test both the complex single shift code and the real double shift code. For the double shift code we make the examples real. Since divide-and-conquer is only available in a complex version, we use that version in both the real and complex tests.

**Example 7.1.** This example was taken from Nguyen, Nguyen, and Vu [29, Figure 1]. The rotators in the factorization of the unitary Hessenberg matrix are determined by numbers $m$ and $p$, fixed for all rotators, as
$$C_k(k : k+1, k) = \begin{bmatrix} (-1)^{k-1} p \sqrt{1 - m^{\frac{2}{\beta k}}} \\ \sqrt{m^{\frac{2}{\beta k}}} \end{bmatrix} \quad \text{and} \quad C_n(n, n) = (-1)^{n-1},$$
where $m$ is drawn from a uniform distribution in $(0, 1)$ and $p$ is taken randomly (uniformly distributed) on the unit circle. Runtime and accuracy are displayed in Figure 7.1 (single shift code) and in Figure 7.4 (double shift code). Our code is about three times faster than D&C and many times faster than LAPACK at large dimensions. Our accuracy is comparable to that of LAPACK and much better than that of D&C.

**Example 7.2 (Type I).** This example appeared in Ammar et al. [2], Gu et al. [25], and Gemignani [18, Type I]. The rotations are defined for varying $m$ and $p$ drawn from a uniform distribution in $(0, 1)$ and on the unit circle, respectively,
$$C_k(k : k + 1, k) = \begin{bmatrix} (-1)^{k-1} m p \\ \sqrt{1 - m^2} \end{bmatrix} \quad \text{and} \quad C_n(n, n) = 1. \tag{7.2}$$
The results are presented in Figure 7.2 (single shift code) and in Figure 7.5 (double shift code). On this class of problems, D&C is fastest for large problems and exhibits better than $O(n^2)$ performance. However, it is not very accurate. The same trend is also observed in the next two examples.

**Example 7.3 (Type II).** This example originates from Gemignani [18, Type II]. The rotations are real, and the angle $\theta$ is drawn from a uniform distribution in $(0, \pi)$:
$$C_k(k : k + 1, k) = \begin{bmatrix} \cos(\theta) \\ \sin(\theta) \end{bmatrix} \quad \text{and} \quad C_n(n, n) = 1.$$
In addition all rotations are normalized to ensure that the matrix is unitary. By choosing $n$ odd and $C_n$ the identity, one eigenvalue is forced to be $-1$. The results are qualitatively similar to those of Example 7.2. We therefore choose not to display them.

**Example 7.4 (Type III).** This example can be found in Gu et al. [25, Type III] and Gemignani [18, Type III] and is a block version of Type I. All four blocks use the same matrix of Type I. These blocks are then coupled by a rotation (7.2) with small sine; we choose $m = 0.001$ and $p$ randomly on the unit circle. The results are qualitatively similar to those of Example 7.2. We therefore choose not to display them. In this class of problems we encountered one large real example for which our code failed.
It hit the iteration limit before it was able to resolve a cluster of sixteen eigenvalues within $10^{-15}$ of $-1$.

**Example 7.5 (Known eigenvalues).** The unitary Hessenberg matrices are obtained from solving an inverse eigenvalue problem with the software from Mach, Van Barel, and Vandebril [27], where the eigenvalues are uniformly distributed on the unit circle. Our measure of error is the forward error, i.e., the maximum error in the computed eigenvalues. The results are shown in Figure 7.3 (single shift code) and in Figure 7.6 (double shift code). In the complex case this method of matrix generation is not sufficiently accurate for matrices of dimension larger than 100, so only small dimensions are shown.

**8. Conclusions.** A fast and backward stable implementation of the implicitly shifted QR algorithm for computing eigenvalues of unitary matrices was presented. Moreover, we proved that all fast QR- and QZ-like methods for solving the unitary eigenvalue problem are essentially the same.

**Acknowledgment.** We thank Alan Edelman, Massachusetts Institute of Technology, for requesting a fast unitary eigenvalue solver. We also thank Nick Trefethen, University of Oxford, and the referees for their constructive remarks, which helped us to improve the paper.

Fig. 7.1. Runtime and accuracy for Example 7.1, $\beta = 2$. Fig. 7.2. Runtime and accuracy for Example 7.2. Fig. 7.3. Runtime and accuracy for Example 7.5. Fig. 7.4. Runtime and accuracy for Example 7.1, $\beta = 2$, double shift code. Fig. 7.5. Runtime and accuracy for Example 7.2, double shift code. Fig. 7.6. Runtime and accuracy for Example 7.5, double shift code. (Figures omitted.)

REFERENCES

[1] G. S. Ammar, W. B. Gragg, and L. Reichel, *On the eigenproblem for orthogonal matrices*, in Proceedings of the 25th IEEE Conference on Decision & Control, IEEE Conference Proceedings, Los Alamitos, 1986, pp. 1963–1966.
[2] G. S. Ammar, L. Reichel, and D. C. Sorensen, *An implementation of a divide and conquer algorithm for the unitary eigenproblem*, ACM Trans. Math. Software, 18 (1992), pp. 292–307.
[3] ———, *Corrigendum: Algorithm 730: An implementation of a divide and conquer algorithm for the unitary eigenproblem*, ACM Trans. Math. Software, 20 (1994), p. 161.
[4] J. L. Aurentz, T. Mach, R. Vandebril, and D. S. Watkins, *Fast and backward stable computation of roots of polynomials*, SIAM J. Matrix Anal. Appl., to appear, 2015.
[5] A. Bunse-Gerstner and L. Elsner, *Schur parameter pencils for the solution of the unitary eigenproblem*, Linear Algebra Appl., 154/156 (1991), pp. 741–778.
[6] A. Bunse-Gerstner and C. He, *On a Sturm sequence of polynomials for unitary Hessenberg matrices*, SIAM J. Matrix Anal. Appl., 16 (1995), pp. 1043–1055.
[7] M. J. Cantero, L. Moral, and L. Velazquez, *Five-diagonal matrices and zeros of orthogonal polynomials on the unit circle*, Linear Algebra Appl., 362 (2003), pp. 29–56.
[8] R. Cruz-Barroso and S. Delvaux, *Orthogonal Laurent polynomials on the unit circle and snake-shaped matrix factorizations*, J. Approx. Theory, 161 (2009), pp. 65–87.
[9] G. Cybenko, *Computing Pisarenko frequency estimates*, in Proceedings of the 1984 Conference on Information Systems and Sciences, Princeton University, Princeton, 1985, pp. 587–591.
[10] R. J. A. David and D. S. Watkins, *Efficient implementation of the multishift QR algorithm for the unitary eigenvalue problem*, SIAM J. Matrix Anal. Appl., 28 (2007), pp. 623–633.
[11] S. Delvaux and M. Van Barel, *Eigenvalue computation for unitary rank structured matrices*, J. Comput. Appl. Math., 213 (2008), pp. 268–287.
[12] ———, *Unitary rank structured matrices*, J. Comput. Appl. Math., 215 (2008), pp. 268–287.
[13] P. J. Eberlein and C. P. Huang, *Global convergence of the QR algorithm for unitary matrices with some results for normal matrices*, SIAM J. Numer. Anal., 12 (1975), pp. 97–104.
[14] A. Edelman, Private communication, June 2014.
[15] S. M. Fallat, M. Fiedler, and T. L. Markham, *Generalized oscillatory matrices*, Linear Algebra Appl., 359 (2003), pp. 79–90.
[16] J. G. F. Francis, *The QR transformation: a unitary analogue to the LR transformation. I*, Comput. J., 4 (1961), pp. 265–271.
[17] ———, *The QR transformation. II*, Comput. J., 4 (1962), pp. 332–345.
[18] L. Gemignani, *A unitary Hessenberg QR-based algorithm via semiseparable matrices*, J. Comput. Appl. Math., 184 (2005), pp. 505–517.
[19] G. H. Golub and C. F. Van Loan, *Matrix Computations*, 4th ed., Johns Hopkins University Press, Baltimore, 2013.
[20] W. B. Gragg, *Positive definite Toeplitz matrices, the Hessenberg process for isometric operators, and Gaussian quadrature on the unit circle*, in Numerical Methods of Linear Algebra, E. S. Nikolaev, ed., Moscow University Press, Moscow, 1982, pp. 16–32.
[21] ———, *The QR algorithm for unitary Hessenberg matrices*, J. Comput. Appl. Math., 16 (1986), pp. 1–8.
[22] ———, *Positive definite Toeplitz matrices, the Arnoldi process for isometric operators, and Gaussian quadrature on the unit circle*, J. Comput. Appl. Math., 46 (1993), pp. 183–198.
[23] ———, *Stabilization of the uhqr algorithm*, in Advances in Computational Mathematics, Proc. of the Int. Symposium on Computational Mathematics, Z. Chen, Y. Li, C. A. Micchelli, and Y. Xu, eds., Lecture Notes in Pure and Applied Mathematics, 202, Dekker, New York, 1999, pp. 139–154.
[24] W. B. Gragg and L. Reichel, *A divide and conquer method for unitary and orthogonal eigenproblems*, Numer. Math., 57 (1990), pp. 695–718.
[25] M. Gu, R. Guzzo, X.-B. Chi, and X.-Q. Cao, *A stable divide and conquer algorithm for the unitary eigenproblem*, SIAM J. Matrix Anal. Appl., 25 (2003), pp. 385–404.
[26] R. Killip and I. Nenciu, *Matrix models for circular ensembles*, Int. Math. Res. Notices, 2004 (2004), pp. 2665–2701.
[27] T. Mach, M. Van Barel, and R. Vandebril, *Inverse eigenvalue problems linked to rational Arnoldi, and rational nonsymmetric Lanczos*, J. Comput. Appl. Math., 272 (2014), pp. 377–398.
[28] T. Mach and R. Vandebril, *On deflations in extended QR algorithms*, SIAM J. Matrix Anal. Appl., 35 (2014), pp. 559–579.
[29] H. Nguyen, O. Nguyen, and V. Vu, *On the number of real roots of random polynomials*, Preprint on arXiv, 2014, http://arxiv.org/abs/1402.4628.
[30] V. F. Pisarenko, *The retrieval of harmonics from a covariance function*, Geophys. J. Roy. Astron. Soc., 33 (1973), pp. 347–366.
[31] B. Simon, *CMV matrices: five years after*, J. Comput. Appl. Math., 208 (2007), pp. 120–154.
[32] M. Stewart, *Stability properties of several variants of the unitary Hessenberg QR-algorithm*, in Structured Matrices in Mathematics, Computer Science and Engineering II, V. Olshevsky, ed., vol. 281 of Contemporary Mathematics, American Mathematical Society, Providence, 2001, pp. 57–72.
[33] ———, *An error analysis of a unitary Hessenberg QR algorithm*, SIAM J. Matrix Anal. Appl., 28 (2006), pp. 40–67.
[34] R. Vandebril, *Chasing bulges or rotations? A metamorphosis of the QR-algorithm*, SIAM J. Matrix Anal. Appl., 32 (2011), pp. 217–247.
[35] R. Vandebril and D. S. Watkins, *An extension of the QZ algorithm beyond the Hessenberg-upper triangular pencil*, Electron. Trans. Numer. Anal., 40 (2012), pp. 17–35. http://etna.mcs.kent.edu/volumes/2011-2020/vol40/abstract.php?vol=40&pages=17-35
[36] ———, *A generalization of the multishift QR algorithm*, SIAM J. Matrix Anal. Appl., 33 (2012), pp. 759–779.
[37] T. L. Wang and W. B. Gragg, *Convergence of the shifted QR algorithm for unitary Hessenberg matrices*, Math. Comp., 71 (2002), pp. 1473–1496.
[38] ———, *Convergence of the unitary QR algorithm with unimodular Wilkinson shift*, Math. Comp., 72 (2003), pp. 375–385.
[39] D. S. Watkins, *Some perspectives on the eigenvalue problem*, SIAM Rev., 35 (1993), pp. 430–471.
[40] ———, *Fundamentals of Matrix Computations*, 3rd ed., Wiley, Hoboken, 2010.
GUIDANCE TO FRESHLY REGISTERED MEDICAL GRADUATES

1. Hospital-associated infections: Prevention and Control
2. Safeguarding against medicolegal issues
3. Patient counseling
4. National Health Programmes

Editor-in-Chief: P S Shankar
Members of Editorial Board: M.K. Sudarshan, Ranjan Pejaver, Swarna Rekha Bhat
Karnataka Medical Council, Bangalore

Introduction

Microbes, particularly bacteria and viruses, have played havoc with human life since time immemorial. The discovery of antimicrobials had a significant impact on the control of bacterial infections, and the introduction of vaccines made it possible to prevent a few dreaded bacterial and viral infections. However, the irrational use of antimicrobials and the lack of newer effective drugs have led to the development of multidrug-resistant bacteria, leaving fewer therapeutic options for patients infected with multidrug-resistant strains, particularly in health care settings, which are an ideal niche for the breeding of these organisms. It is therefore essential to adopt stringent infection control measures in health care establishments to prevent the spread of drug-resistant strains and thereby reduce the morbidity and mortality associated with these infections.

Hospital-acquired infection (HAI), also called health care associated infection (HCAI) or nosocomial infection, is an infection acquired by a person in a hospital that was not present or incubating at the time of admission to the hospital. The disease may be due to the infectious agent or its toxins and usually manifests after 48 hours following admission, or after discharge from the hospital.

Risk factors for acquisition of HAI:
1. Prolonged stay in intensive care units (ICUs), burns or trauma care units, etc.
2. Invasive procedures for diagnostic or therapeutic purposes
3. Indwelling devices, e.g. I.V. catheter, urinary catheter, endotracheal tube, etc.
4. Prolonged use of broad spectrum antibiotics, steroids or immunosuppressive agents.

**Source of Infection:**
1. Contaminated hands of health care workers (HCWs)
2. Inanimate objects in the vicinity
3. Contaminated medications, e.g. eye drops, I.V. fluids, etc.
4. Contaminated instruments and antiseptic lotions, etc.

**Routes of infection:**
1. Contact with skin (percutaneous) or mucous membrane
2. Inhalation of airborne droplet nuclei.

**Common types of HAI:**
1. Catheter-associated urinary tract infections (CA-UTIs)
2. Catheter-associated blood stream infections (CA-BSIs)
3. Surgical site infections (SSIs)
4. Ventilator-associated pneumonia (VAP)

**Most common pathogens associated with HAI:**
1. Methicillin-resistant *Staphylococcus aureus* (MRSA)
2. Methicillin-resistant *Staphylococcus epidermidis* (MRSE)
3. Vancomycin-resistant enterococci
4. ESBL-producing Gram-negative bacilli
5. *Mycobacterium tuberculosis*
6. *Candida* species
7. *Aspergillus* species
8. Human immunodeficiency virus (HIV), Hepatitis B virus (HBV), Hepatitis C virus (HCV)
9. Herpes viruses: H. simplex, Varicella zoster

**Standard Precautions**

'Standard Precautions' are safety practices to be followed in all health care settings. They are based on the assumption that every patient is potentially infectious: blood, body fluids, secretions and excretions (except sweat), non-intact skin and mucous membranes may all contain transmissible infectious agents. The practice of standard precautions contributes to a significant decrease in HCAI. The major components of standard precautions are as follows.

1.
**Hand Hygiene:** The hands of health care workers are important vehicles for the transmission of infectious agents, and therefore hand hygiene is of utmost importance in the control of HAI. Different types of hand hygiene are practiced as per the situation. These practices remove or reduce the transient and/or resident bacterial flora of the hands, thus reducing the transmission of potentially infectious agents. A simple 'hand wash' with plain soap and water helps to remove dirt and organic matter from the hands and is sufficient before and after routine noninvasive contact with otherwise healthy patients. A 'surgical hand wash' requires the use of a medicated soap and water for preoperative preparation of the surgeon's hands.

2. **Hand rub:** Hand rub is the disinfection of hands by the application of alcohol-based compounds; it is a quick and practically convenient method for use between two patient contacts, as in the ICU.

3. **Personal Protective Equipment (PPE):** The use of PPE protects the HCW and the patient from cross infection. The type of PPE used varies with the situation.

a) **Gloves:** Clean gloves act as an important mechanical barrier and protect the HCW's hands from being contaminated with potentially infectious material. Some of the applications include use by phlebotomists, by dental surgeons performing an oral cavity examination, by surgeons performing per-rectal examinations, and by gynecologists performing per-vaginal examinations.

b) **Sterile gloves:** These are used for all invasive procedures which come in direct contact with potentially infectious substances such as blood, body fluids, tissues, etc. of the patient, e.g. invasive procedures like urinary catheterization, surgical procedures, etc.

c) **Gowns / Aprons:** These are used whenever contact with blood or body fluid is a possibility, as in the operation theatres or during other invasive procedures. A non-permeable plastic gown may be necessary in addition to the absorbent gown where large blood spills are anticipated. Gowns / aprons should be changed between two patients or when visibly soaked with blood or body fluids. Routine donning of surgical gowns for entry into the ICU is not required.

d) **Masks:** Face masks are to be worn in the operation theatre and in wards / rooms with patients suffering from respiratory tract infections, as in the case of patients with pulmonary tuberculosis or whose respiratory or oropharyngeal secretions are infective.

e) **Cap:** Caps are mostly used in operation theatres by HCWs to protect the patient from being infected.

f) **Eye shield:** Eye shields are to be worn by the HCW when anticipating a blood or body fluid spill, e.g. dental surgeons during manipulations in the oral cavity.

**Transmission based precautions**

These are indicated when standard precautions alone would not suffice for control of the spread of infectious agents.

**Airborne precautions**

**Airborne infection isolation rooms (AIIRs)**

Use of special air handling and ventilation systems is required to contain the spread of airborne infectious agents such as *M. tuberculosis*, spores of certain fungi, and varicella virus, which can remain viable in the air over a period of time and distance. Patients should preferably be kept in single isolation rooms under negative pressure and instructed to use disposable face masks while coughing or sneezing.
HCWs should use higher-level respirator masks while entering the rooms of patients with highly infectious and virulent pathogens such as the severe acute respiratory syndrome (SARS) coronavirus, H1N1 influenza virus, and viral agents of hemorrhagic fevers like Ebola virus. Positive pressure ventilation, directed room airflow, and high-efficiency particulate air (HEPA) filtration of incoming air are some of the measures advocated for patients who have undergone hematopoietic stem cell transplant.

**Contact precautions:** Indicated for prevention of transmission of infectious agents spread by direct or indirect contact.

**Methods**
1. Cohorting of patients
2. Maintaining a minimum distance of 3 feet between adjacent patients
3. Appropriate stringent disinfection of floor and material, including frequent contact points like bed railings, table, toilet, etc.

**Urinary Tract Infections (UTIs)**

Catheter-associated UTIs (CA-UTIs) are the most common type of HAI. A UTI is considered a CA-UTI if the patient had an indwelling catheter at the time of onset of the event or within the preceding 48 hours. Approximately 95% of UTIs in hospitals are catheter-associated. The proximity of the urethral meatus to the anal sphincter in females, the passage of the catheter through a natural orifice and its location in the bladder, and the deposition of Tamm-Horsfall proteins around the catheter facilitate the adherence of uropathogens and the initiation of infection.

Some of the important risk factors for CA-UTI are:
- Female patient
- Prolonged catheterization
- Diabetes mellitus
- Severe underlying diseases
- Elderly patient
- Poor catheter care

**Prevention of CA-UTI**
- Aseptic technique of catheterization
- Proper care of the catheter and collection bag
- Use of the narrowest size of catheter possible
- Ensuring dependent drainage
- Minimizing the duration of catheterization
- Use of a closed drainage system
- Use of silver-impregnated or antibiotic catheters as indicated by the duration and risk
- Use of condom catheters in males
- Use of systemic antimicrobials

**Blood Stream Infections (BSIs)**

Vascular catheterization has become an inevitable procedure as a part of patient care, particularly in ICUs, and is a known risk factor for catheter-associated blood stream infections (CA-BSIs). CA-BSI is defined as bacteremia or fungaemia in a patient who has an intravascular device, a positive culture of blood samples obtained from a peripheral vein, clinical features of infection, and no apparent source of infection except the catheter.

**Risk factors for CA-BSI**
- Severe underlying illness
- Loss of skin integrity
- Plastic catheters
- Central catheters
- Prolonged catheterization
- Inadequate care of catheter site

**Common agents of CA-BSI**
- Coagulase-negative staphylococci
- *S. aureus*
- *Candida* sp.
- *Enterococcus* sp.
- *Pseudomonas aeruginosa* and
- *Serratia marcescens*

**Prevention**
- Hand hygiene plays an important role: hand washing before and after insertion and subsequent contacts with the insertion site
- Use of 2% chlorhexidine as skin disinfectant
- Strict aseptic practices
- Avoiding unnecessary manipulations
- Proper education and training of HCWs involved in the care of such catheters.

**Surgical Site Infections (SSIs)**

Surgical site infections constitute about 20% of HAI. SSI is defined as infection of the surgical site that occurs within 30 days of the surgical procedure, or within one year in the case of an implant or foreign body such as a prosthetic heart valve or joint prosthesis.
Most SSIs result from contamination of the surgical wound with the patient's own flora, that of the HCW, or the environment of the operating room. Infection may manifest during hospitalization or after discharge. The common clinical features of SSIs are localized pain, redness and discharge. The most common bacterial agents of SSI are *S. aureus*, *Esch. coli*, *Klebsiella*, *Proteus* sp., and *Pseudomonas* sp. Drug-resistant pathogens like MRSA and ESBL-producing Gram-negative bacilli have become more common. Outbreaks have occurred following the use of contaminated adhesive dressings, elastic bandages, and contaminated antiseptic lotions. SSIs can be superficial, deep, or organ/space infections involving any organ or space.

**Risk factors**
- Very old or very young age
- Poor nutritional status
- Uncontrolled diabetes
- Smoking
- Use of steroids
- Obesity
- Co-existing morbidity
- Colonization or carrier state
- Prolonged preoperative stay
- Preoperative shaving within 24 hours of surgery

**Prevention**
- Strict hand hygiene measures and use of proper surgical attire
- Use of appropriate antimicrobials depending on the site and type of surgery:
  - Cefazolin provides adequate coverage for most clean-contaminated wounds
  - Cefoxitin is preferred for surgery on the distal intestinal tract
  - Aztreonam is a suitable alternative to cephalosporins
  - Metronidazole or clindamycin should also be added for coverage of anaerobes
- Maintenance of positive pressure in the operating rooms
- Use of HEPA filters
- Optimum room temperature of 20° to 22°C
- Unidirectional air flow
- Use of appropriate drains
- Debridement of devitalized tissues
- Effective haemostasis
- Adequate post-operative care
- Periodic surveillance of operation suites

**Hospital acquired pneumonia (HAP)**

Pneumonia is the second most common HAI, and it carries a high morbidity and mortality. Ventilator-associated pneumonia (VAP) is the specific type of HAP that occurs more than 48 hours after the initiation of mechanical ventilation. The most important risk factor for HAP is prolonged mechanical ventilation. Other risk factors include:
- Prolonged administration of broad spectrum antimicrobials
- Underlying chronic lung disease
- Insertion of a nasogastric tube
- Surgical procedures involving head, neck and thorax
- Co-morbidities

The first step in the pathogenesis of HAP is colonization of the oropharynx with resistant pathogens and their subsequent translocation to the lower respiratory tract. Most HAPs are of bacterial origin. Early-onset HAP is usually caused by antimicrobial-sensitive pathogens, while late-onset HAP is usually caused by multidrug-resistant pathogens such as *Pseudomonas* species, *Acinetobacter* species, and *Staphylococcus aureus*. About 40% are polymicrobial.

**Prevention**

Selective decontamination of the digestive tract (SDD) by local administration of antimicrobial agents such as polymyxin / colistin, aminoglycosides, or quinolones, coupled with amphotericin B or nystatin, prevents colonization of the oropharynx with potential pathogens.
- Frequent mouth wash, preferably with an antiseptic, and brushing of teeth
- Semi-recumbent position unless contraindicated
- Enteral feeding as soon as the patient's condition permits
- Appropriate care of devices used in mechanical ventilation
- Avoidance of invasive ventilation where feasible
- Use of silver-coated endotracheal (ET) tubes
- Use of orotracheal or orogastric tubes in preference to nasogastric tubes
- Avoidance of prolonged nasal intubation
- Avoidance of frequent re-intubation

**Bundle approach to prevention of VAP**

A ventilator bundle is defined as a group of preventive interventions that, when executed together, result in a better outcome than when implemented individually. The four components of the bundle are:
1. Elevation of the head end of the bed by 30–45°.
2. Prophylaxis for deep venous thrombosis and peptic ulcer disease.
3. Daily interruption of sedation.
4. Daily assessment of the feasibility to extubate.

Prompt and appropriate use of antibiotics improves the outcome of VAP. Monotherapy may be used in patients with no risk factor for multidrug-resistant (MDR) pathogens and in infections caused by Gram-positive bacteria. Combination therapy is to be preferred for Gram-negative pathogens and in the presence of risk factors for MDR pathogens.

**Prevention of infection in immuno-compromised (IC) hosts & special situations**

**Burns wound infections**
- Isolation in single rooms, with filtered air supplied at positive pressure and exhausted to the exterior
- Stringent hand hygiene measures
- Donning of appropriate PPE
- Scrupulous environmental cleanliness
- Excluding HCWs with skin or throat infections

**Dermatology wards**
- Isolation of patients with sepsis
- Isolation of patients with desquamating lesions
- Exhaust ventilation in dressing rooms

**Dialysis units**

Patients are at increased risk of infection from blood-borne pathogens.
- Dialysis fluid carries a high risk of bacterial contamination
- All patients should be screened for HIV, HBV, and HCV
- Standard precautions to be followed during dialysis
- Subcutaneous arteriovenous (AV) fistulae to be preferred
- Strict aseptic techniques and environmental disinfection.

**Transplant recipients**
- Screening of patients for various infections before and after transplantation
- Appropriate antimicrobial prophylaxis
- Standard precautions, especially hand washing
- Use of well-cooked food
- Avoiding formation of water aerosols

**Immuno-compromised patients**

IC patients are at an increased risk of acquiring a variety of infections due to impaired humoral and cellular immunity.
- Prophylactic antimicrobials to be administered as indicated
- Standard precautions to be practiced
- Maintaining environmental hygiene.

**Laboratory acquired infections**

Laboratory personnel are at an increased risk for infections due to frequent handling of potentially infectious specimens, whether by direct contact, inhalation of aerosols, or injuries with sharps and spills.
- Provide training and education about safety measures to be practiced by the laboratory staff
- Covering of open wounds with sterile dressing
- Use of PPE like gloves and masks as indicated
- Prophylactic vaccination for Hepatitis B
- Post-exposure prophylaxis for HBV and HIV as indicated
- Hand hygiene
- Use of appropriate safety cabinets
- Proper exhaust ventilation

**Biomedical Waste Management**

Hospital infection control is also dependent on proper segregation, disinfection and disposal of all biomedical waste generated in the health care setting.
Inappropriate management of hospital waste is a potential threat to patients, HCWs, and the community at large. Every institution should formulate a policy for the safe disposal of hospital waste. Colour-coded bins for each category of waste are to be provided at all points of waste generation. All infectious sharps should be disinfected at the point of generation before being discarded into the bins. Appropriate PPE like leather gloves (for handling sharps), masks, gowns, etc. should be worn by the HCWs. Plastic and rubber material should not be incinerated. All categories of waste should be disposed of as per the standard protocols and guidelines issued by the health authorities. Disposable items should not be reused.

**Purpose of surveillance of nosocomial infections**

"Good surveillance does not necessarily ensure the making of the right decisions, but it reduces the chances of wrong ones" — Alexander D. Langmuir

The purpose of surveillance of nosocomial infections is to reduce the incidence of HAIs and thus to reduce the associated morbidity, mortality, and costs. Before beginning surveillance activities it is essential to develop a clear plan. It should address:
1) What questions are being asked
2) How infections are to be defined
3) How the data are to be collected, stored, retrieved, summarized and interpreted
4) How to feed the results back to frontline practitioners
5) How to use the information to bring about change

Prevention of nosocomial infections is the responsibility of all individuals and services providing health care. Everyone must work cooperatively to reduce the risk of infection for patients and staff. A yearly work plan to assess and promote good health care, appropriate isolation, sterilization, and other practices, staff training, and epidemiological surveillance should be developed. An Infection Control Committee should include wide representation from relevant departments, viz. management, physicians, other health care workers, clinical microbiology, pharmacy, central supply, maintenance, housekeeping and training services.

**Minimal Requirements for Surveillance**
1. Monitor infection patterns (sites, pathogens, risk factors, location within the facility)
2. Detect changes in the patterns that may indicate an infection problem
3. Direct the rapid implementation of control measures
4. Monitor antibiotic use and resistance
5. Provide the staff with exactly the information they need in order to improve infection prevention practices

**Operating theatres**

Modern operating rooms which meet current air standards are virtually free of particles larger than 0.5 µm (including bacteria) when no people are in the room. Activity of operating room personnel is the main source of airborne bacteria, which originate primarily from the skin of individuals in the room. The number of airborne bacteria depends on eight factors:
1. Type of surgery
2. Quality of air provided
3. Rate of air exchange
4. Number of persons present in the operating theatre
5. Movement of operating room personnel
6. Level of compliance with infection control practices
7. Quality of staff clothing
8. Quality of cleaning process

Conventional operating rooms are ventilated with 20 to 25 changes per hour of high-efficiency filtered air delivered in a vertical flow. High-efficiency particulate air (HEPA) systems remove bacteria larger than 0.5–5 µm in diameter and are used to obtain downstream bacteria-free air.
The operating room is usually under positive pressure relative to the surrounding corridors, to minimize inflow of air into the room. For operating theatres, a unidirectional clean airflow system with a minimum size of 9 m² (3 m × 3 m) and with an air speed of at least 0.25 m/s protects the operating field and the instrument table. This ensures instrument sterility throughout the procedure. It is possible to reduce the costs of building and maintaining operating theatres by positioning such systems in an open space with several operating teams working together. This is particularly suited to high-risk surgery such as orthopedics, vascular surgery, or neurosurgery.

**Need for an infection control programme**
- To develop and continually update guidelines for recommended health care surveillance, prevention, and practice
- To develop a system to monitor selected infections and assess the effectiveness of interventions
- To harmonize initial and continuing training programmes for health care professionals
- To facilitate access to materials and products essential for hygiene and safety
- To encourage health care establishments to monitor health-care associated (nosocomial) infections and to provide feedback to the professionals concerned

**Infection control programme**

The important components of the infection control programme are:
- Basic measures for infection control, i.e. standard and additional precautions
- Education and training of health care workers
- Protection of health care workers, e.g. immunization
- Identification of hazards and minimizing risks
- Routine practices essential to infection control, such as aseptic techniques, use of single-use devices, reprocessing of instruments and equipment, antibiotic usage, management of blood/body fluid exposure, handling and use of blood and blood products, and sound management of medical waste
- Effective work practices and procedures, such as environmental management practices (including management of hospital/clinical waste), support services (e.g., food, linen), and use of therapeutic devices
- Surveillance
- Incident monitoring
- Outbreak investigation
- Infection control in specific situations
- Research

In addition to implementing basic measures for infection control, health care facilities should prioritize their infection control needs and design their programmes accordingly.

**Organization of an infection control programme**

As with all other functions of a health care facility, the ultimate responsibility for prevention and control of infection rests with the health administrator. The hospital administrator / head of hospital should establish an infection control committee, which will in turn appoint an infection control team, and provide adequate resources for effective functioning of the infection control programme.

**Infection control committee**

An infection control committee provides a forum for multidisciplinary input and cooperation, and information sharing. The infection control committee is responsible for the development of policies for the prevention and control of infection and for overseeing the implementation of the infection control programme.
It should:
- Comprise representatives of the various units within the hospital that have roles to play (medical, nursing, engineering, housekeeping, administrative, pharmacy, sterilizing services and the microbiology department);
- Elect one member of the committee as the chairperson (who should have direct access to the head of the hospital administration);
- Appoint an infection control practitioner (a health care worker trained in the principles and practices of infection control, e.g. a physician, microbiologist or registered nurse) as secretary;
- Meet regularly (ideally monthly, but not less than three times a year);
- Develop its own infection control manual(s);
- Monitor and evaluate the performance of the infection control programme.

The committee must have a direct reporting relationship to either the administration or the medical staff to promote programme visibility and effectiveness. In an emergency (such as an outbreak), this committee must be able to meet promptly. It has the following tasks:
- To review and approve a yearly programme of activity for surveillance and prevention;
- To review epidemiological surveillance data and identify areas for intervention;
- To assess and promote improved practice at all levels of the health facility;
- To ensure appropriate staff training in infection control and safety management, and the provision of safety materials such as personal protective equipment and products; and
- Training of health workers.

**Infection control team**

The infection control team is responsible for the day-to-day activities of the infection control programme. Health care establishments must have access to specialists in infection control, epidemiology, and infectious disease, including physicians and infection control practitioners, and the infection control team must have appropriate authority to manage an effective infection control programme. The team carries out the day-to-day functions of infection control, as well as preparing the yearly work plan for review by the infection control committee and the administration. These teams have a scientific and technical support role, e.g. surveillance and research, developing and assessing policies, practical supervision, evaluation of materials and products, overseeing sterilization and disinfection, ensuring sound management of medical waste, and implementing training programmes.
The infection control team should:
- Consist of at least an infection control practitioner, who should be trained for the purpose;
- Carry out the surveillance programme;
- Develop and disseminate infection control policies;
- Monitor and manage critical incidents;
- Coordinate and conduct training activities.

**Recommended HIV post-exposure prophylaxis for percutaneous injuries**

| Exposure Type | HIV-Positive Class 1 | HIV-Positive Class 2 | Source of Unknown HIV Status | Unknown Source | HIV Negative |
|---------------|----------------------|----------------------|------------------------------|----------------|--------------|
| Less severe | Recommend basic 2-drug PEP | Recommend expanded 3-drug PEP | Generally, no PEP warranted; however, consider basic 2-drug PEP for a source with HIV risk factors | Generally, no PEP warranted; however, consider basic 2-drug PEP in settings in which exposure to HIV-infected persons is likely | No PEP warranted |
| More severe | Recommend expanded 3-drug PEP | Recommend expanded 3-drug PEP | Generally, no PEP warranted; however, consider basic 2-drug PEP for a source with HIV risk factors | Generally, no PEP warranted; however, consider basic 2-drug PEP in settings in which exposure to HIV-infected persons is likely | No PEP warranted |

**Recommended HIV post-exposure prophylaxis for mucous membrane and non-intact skin exposures**

| Exposure Type | HIV-Positive Class 1 | HIV-Positive Class 2 | Source of Unknown HIV Status | Unknown Source | HIV Negative |
|---------------|----------------------|----------------------|------------------------------|----------------|--------------|
| Small volume | Consider basic 2-drug PEP | Recommend basic 2-drug PEP | Generally, no PEP warranted | Generally, no PEP warranted | No PEP warranted |
| Large volume | Recommend basic 2-drug PEP | Recommend expanded 3-drug PEP | Generally, no PEP warranted; however, consider basic 2-drug PEP for a source with HIV risk factors | Generally, no PEP warranted; however, consider basic 2-drug PEP in settings in which exposure to HIV-infected persons is likely | No PEP warranted |

**Recommended post-exposure prophylaxis for exposure to hepatitis B virus**

| Vaccination and Antibody Response Status of Exposed Worker | Source HBsAg Positive | Source HBsAg Negative | Source HBsAg Unknown or Not Available for Testing |
|---|---|---|---|
| Unvaccinated | HBIG x 1 and initiate HB vaccine series | Initiate HB vaccine series | Initiate HB vaccine series |
| Previously vaccinated: known responder | No treatment | No treatment | No treatment |
| Previously vaccinated: known non-responder | HBIG x 1 and initiate revaccination, or HBIG x 2 | No treatment | If known high-risk source, treat as if source were HBsAg positive |
| Previously vaccinated: antibody response unknown | Test exposed person for anti-HBs. If adequate, no treatment is necessary; if inadequate, administer HBIG x 1 and a vaccine booster | No treatment | Test exposed person for anti-HBs. If adequate, no treatment is necessary; if inadequate, administer a vaccine booster and recheck the titre in 1-2 months |

**References and recommended reading**
1. Patwardhan N. Hospital Associated Infections: Epidemiology, Prevention & Control. 1st ed. New Delhi: Jaypee Brothers; 2006.
2. Surgical Care at the District Hospital. WHO. 1st ed. New Age International.
3. Mayhall GC. Hospital Epidemiology and Infection Control. 3rd ed. Lippincott Williams & Wilkins.
4. Ducel G. Prevention of Hospital Acquired Infection. 2nd ed. WHO; 2004.
SAFEGUARDING AGAINST MEDICO-LEGAL ISSUES

Sectional Editor
Shankar PS
Emeritus Professor of Medicine, KBN Institute of Medical Sciences, Gulbarga

Chandrashekar T N
Director, Chamarajanagar Institute of Medical Sciences, Chamarajanagar
Formerly Professor and Head, Department of Forensic Medicine, Mysore Medical College and Research Institute, Mysore

Rajendra N Kagne
Professor and Head, Department of Forensic Medicine, Sri Manakula Vinayagar Medical College, Puducherry 605 107

Rajesh Sangram
Professor and Head, Department of Forensic Medicine, Raichur Institute of Medical Science, Raichur

Contents
1. Professional Negligence (Malpractice) (Rajesh Sangram)
2. Medico-legal Aspects in Emergency (Rajesh Sangram)
3. Death and its Medicolegal Aspects (Chandrashekar TN)
4. Consent in Medical Practice: A Guide to Registered Medical Practitioners (Rajendra N Kagne, Ananda Reddy)
5. Viscera, Tissue and Body Fluids: Preservation and Forwarding Procedures (Rajendra N Kagne, Ananda Reddy)

Contributors
Ananda Reddy
Assistant Professor, Department of Forensic Medicine

PROFESSIONAL NEGLIGENCE (MALPRACTICE)

Rajesh Sangram

**Doctor and society**

The responsibility of the medical professional has grown with the rising demand from patients for medical help. Patients are better informed about their health and expect their doctors to make decisions with them, not for them. A doctor is required to make decisions based on an unambiguous assessment of the problem. Patients approach the physician with their ailments, for which he has to provide a diagnosis and undertake treatment. This works well in practice. Often, however, the clinical picture is ambiguous, making it difficult for the physician to reach a definitive conclusion. In such a situation the possibility of a mistake is real and is a common professional hazard. Rather than accepting the ambiguity of certain clinical situations and explaining it to the patient, doctors are often pressurised to make a definitive decision in unclear circumstances, situations which actually demand probabilistic inference owing to the incomplete and fragmentary nature of the available information. These situations are nevertheless often discussed in terms of clinical certainty, forcing errors. No human being is infallible, and in the present state of knowledge even a specialist may be at fault in detecting the true nature of a disease. A practitioner can only be held liable in this respect if his diagnosis is so palpably wrong as to prove negligence, i.e., if his mistake is of such a nature as to imply an absence of reasonable skill and care on his part. Reasonable skill is equated with the ordinary or average level of skill in the profession. In medico-legal cases, however, part of the problem lies in the legal connotation of the word "negligence". The failure of a doctor or hospital to discharge their obligation is a civil wrong, called a tort in law, a breach of which attracts judicial intervention by way of awarding damages.

**Protection to doctors**

In this era of commercialization of the profession, pontifications about the "noble" profession and the "sacred" doctor-patient relationship notwithstanding, the IPC extends protection to doctors for acts which may result in death or hurt, provided they are done in "good faith". "Good faith" has been defined in Section 52 as an act done with "due care and attention".
Medical malpractice is not merely negligence on the part of the care giver; a conscious decision of the care giver to offer and/or force a product, procedure or investigation upon a patient for monetary gain, either personally or for the institution, also comes within the definition of 'malpractice'. There can always be a deficiency of service inherent in every profession, and the nature and extent of deficiency or efficiency is governed by the circumstances, the qualifications and experience of the dispensing professional, and the gadgets and conveniences available at hand to the attending doctor. The court has observed that the service which the medical profession renders is probably the noblest of all, and hence there is a need to protect doctors from unjust prosecution. Even a minor lapse on the part of a doctor is blown out of proportion, cancelling out the enormous amount of good work the doctor might have done silently. Looking at the components of negligence of duty and resulting damage, the court has repeatedly observed that it is not necessary for every professional to possess the highest level of expertise in the branch which he practices. In assessing an acceptable standard of conduct, competence is to be judged by the lowest standard that would be regarded as acceptable. The court has observed that the standard is that of the reasonable average, and the law does not require of a professional man that he be a paragon combining the qualities of polymath and prophet.

**Bolam test**

The classical statement of law in the Bolam case (1957) has been widely accepted as decisive of the standard of care required of professional men in general and of medical practitioners in particular, and holds good in its applicability in India. For a medical professional to be prosecuted for negligence under criminal law, there should be evidence that he did something or failed to do something which a medical professional, "in his ordinary senses and prudence, would have done or failed to do". The apex court has observed that "a simple lack of care, an error of judgment or an accident is no proof of negligence". A private complaint against a doctor will be entertained only if the complainant is able to furnish prima facie evidence before the court.

**Civil cases of negligence**

Civil cases pertain to disputes between two or more persons regarding wrong or inadequate treatment, wrong diagnosis and failure to keep professional secrecy. When a patient sues a doctor in the civil courts, it is mainly for compensation:
- for the injury or death of the patient, as the case may be, caused by the negligence of the doctor; or
- when a doctor files a civil suit for realization of his professional fees from the patient or his relatives, who refuse to pay on the grounds of malpractice.

Examples of civil negligence:
- Unnecessary treatment
- Wrong diagnosis
- Prolonged treatment
- Duty (to warn about possible side-effects) not discharged
- Treatment leading to further complications

'Causation' means 'to bring about'. In order to obtain compensation in a case of medical negligence, it is not sufficient to prove that negligence occurred; it must also be proved that the negligence was the cause of the damage. The more proximate the causation (proximate cause) to the damage, the greater the chance of succeeding in a claim for compensation; the more remote the causation (remote cause), the lesser the chance of success in getting compensation.
In order to succeed in a medical negligence case, the claimant must prove, on a balance of probabilities, that the doctor's breach of the duty of care, i.e., negligence, caused the damage, and he has to show that:
- the damage would not have occurred but for the doctor's negligence; or
- the doctor's negligence materially contributed to, or materially increased, the risk of injury.
- Further, if the claim is that the doctor failed to disclose a risk involved in the treatment or surgery, and the risk actually materializes, the claimant can plead that had such risk been disclosed he would not have agreed to the treatment or surgery.

The great problem of alleged medical negligence lies in the continuum of 'standard of care' between actions that are accepted medical practice and those that constitute a lack of care. At the junction of these two extremes is a grey area of debatable clinical judgment, where some doctors would act in one way whereas others would act, quite legitimately, in a different way. Claimants (patients) in clinical negligence actions have to demonstrate, first, that they were owed a duty of care by their health care provider; second, that there was a breach of that duty; and third, that they suffered harm as a result.
- Inadequate notes, lost records, and failing or muddled memories may all lead to an inability to rebut the claimant's case.
- Keeping up-to-date is another important and related issue.
- Unless basic systems are in place to deal with patient referral, follow-up, completion of clinical records, clinical correspondence, and reviewing test results and acting appropriately on abnormalities, all sorts of things can and do go wrong, with potentially catastrophic effects for patients.
- Operation without consent.
- Issuing wrong certificates or reports.

It is important to note that 'damage', in the sense of injury or harm, is quite different from 'damages', which is the financial compensation awarded to a successful litigant (here, the patient's side). There is also the problem of framing a proper definition of error, as an acceptable description is yet to be evolved.

**What the court says:**

The Supreme Court has held that the Damocles' sword of criminal prosecution should not hang constantly over a medical practitioner's head by making him liable for every instance of negligence:
- A simple lack of care, an error of judgment or an accident is not negligence.
- The error must be gross in nature.
- A doctor cannot be arrested in a routine manner.
- A complaint will not be entertained unless it is supported by a credible opinion from another competent doctor, preferably a Government doctor, in that branch of medicine.

A doctor can be prosecuted for causing death due to a 'rash and negligent act' (Section 304A IPC) if his patient dies, but the doctor cannot be prosecuted for 'culpable homicide not amounting to murder' (Section 304 IPC), which entails a higher punishment. While the punishment for a rash and negligent act is two years, a life sentence can be imposed for culpable homicide not amounting to murder.

**Criminal cases of negligence**

Criminal cases are related to violation of laws. In such cases, the guilty doctor is awarded a punishment, which may be a fine, imprisonment or even a death sentence. In cases of serious injury, the doctor may be charged under various sections of the IPC:
- Section 304A of IPC: causing the death of any person by doing any rash or negligent act which does not amount to culpable homicide, punishable with imprisonment for a term which may extend to 2 years.
- Section 336 of IPC: a rash or negligent act endangering human life.
- Section 337 of IPC: causing hurt to any person by doing any rash or negligent act as would endanger human life.
- Section 338 of IPC: causing grievous hurt to any person by doing any rash or negligent act so as to endanger human life.

Examples of criminal negligence:
- Injecting an anesthetic in a fatal dosage or into the wrong tissues.
- Transfusing wrong blood.
- Performing a criminal abortion.
- Leaving instruments or sponges inside the part of the body operated upon.
- Operating on the wrong patient or the wrong part.

There are many loopholes, variations and deficiencies in the knowledge and outcome of a treatment. It is unwise to expect that everything in medicine will go according to plan in every case. It is also true that knowledge in medicine and its application is advancing so fast that no doctor can be an expert in everything or be expected to offer the best expertise in every situation. Medicine is a highly codified body of knowledge, and procedures of treatment are meticulously standardized. With this level of procedural consistency, the profession cannot claim that the law lacks the expertise to evaluate its performance. The evaluation consists only in seeing whether the doctor in the dock has gone by the book. The law is as competent to rule on a medical case as on a case of financial irregularity. Most malpractice cases are self-evident anyway, and the principle of *res ipsa loquitur* (literally, 'the thing speaks for itself') may safely be applied. It is usually a case of a surgical oversight (the ubiquitous forceps problem) or the maladministration of anaesthesia. The law does not need technical skills to comprehend such matters adequately.

**Elements of negligence**

The necessary elements of an action founded on negligence are held to be:
- A duty or obligation recognized by law, requiring the person to conform to a certain standard of conduct for the protection of others against unreasonable acts.
- A failure on the part of the defendant (doctor) to conform to the standard required.
- A reasonably close causal connection between the conduct and the resulting injury.
- Actual loss or damage resulting to the plaintiff (patient).

**Right to life**

The only way to resolve the problem of whether an act is truly negligent is by 'peer judgment', and this is the means by which most medical disputes are settled. The facts of the case are placed before experts in the particular specialty and their views sought. It is sufficient in this context to show only that a substantial number of doctors agree with the actions of the defendant (here, the doctor's side); there is no need for unanimity of either condemnation or support.

---

**MEDICO-LEGAL ASPECTS IN EMERGENCY CASES**

**Rajesh Sangram**

**Scenario in emergency**

Medico-legal problems in the practice of medicine are much talked about but relatively infrequent. Many a time, a patient accompanied by parents, relatives or friends enters, breaking all barriers and uttering the word "emergency" so as to draw the attention of the physician and be attended to first, leaving all the waiting patients in the queue. This disturbing and unconvincing situation will be faced by every practicing doctor at least once in his lifetime. The word "emergency" means a sudden unexpected happening, or a sudden unforeseen occurrence or condition, where there is a question of life and death.
Neither Indian law nor the orders of the Supreme Court and the various High Courts of India have defined a medical emergency. The definition of a medical emergency is therefore still largely left to the discretion of medical professionals. It is accepted practice that injured and critically ill patients are attended to on priority by doctors in order to save life. Often there is reluctance on the part of doctors to attend to the emergency needs of patients who, in medical jargon, are "medico-legal cases". This unwillingness stems largely from the instinct of medical professionals to evade the inconvenience associated with subsequent legal proceedings.

Many patients come to a doctor believing him to be "God". This attitude must change. As of now people's expectations are sky high, and they expect nothing short of a miracle. When the doctor is obviously unable to work this miracle, their God is found to have feet of clay and is then abused. If doctors indicate on their hospital or nursing home board that "24 hours emergency services" are available, they should make sure this is really the case; otherwise it may amount to misrepresentation and make them liable if someone is not attended to and suffers damage. If doctors cannot provide round-the-clock service every day, even though this may be possible on most days, it is better to avoid announcing 24-hour services.

There are certain important ethical and legal aspects of emergency medical care that medical professionals need to be aware of, and these are as follows:
- The legal and ethical obligations of a medical practitioner to attend to the emergency medical needs of a patient are total, absolute and paramount.
- Every doctor, whether in a Government hospital or in private practice, is duty bound to immediately attend to and protect the lives of injured victims brought before him.
- It is the constitutional obligation of the State to provide adequate medical services to the people.
- The Indian Medical Council (Professional Conduct, Etiquette and Ethics) Regulations, 2002 unambiguously state that a medical professional should attend to a patient in an emergency.

**Necessary aid**

Head injuries are very common in traffic accidents. The doctor who is first approached should start giving first aid and apply stitches to stop the bleeding. What is often seen, however, is that doctors, acting out of fear of facing legal proceedings, do not give first aid to the patient and instead tell him to proceed to the hospital, by which time the patient may develop other serious complications. In cases of accident, injury and emergency, if the patient is referred to a higher centre after necessary first aid has been provided and dies during transport, this would not be the liability of the doctor; rather, delay in referral by the doctor could constitute negligence. Remember not to forget to inform the police if it is a medico-legal case.

**Doctor in the court**

Medical professionals harbour apprehensions about being witnesses, facing police interrogation, having to repeatedly visit police stations and losing their valuable earning hours. Private practitioners especially are under the wrong impression that emergencies, which are mostly medico-legal cases, are dealt with, or are to be dealt with, only by Government doctors. Government doctors have no option: they are obliged to attend to medico-legal cases (MLC).
Private doctors usually refuse and refer such a case to a Government hospital, as there is no authority that can compel a doctor to attend to any particular case unless there is a military regime. It is the duty of every human being to help others in case of emergency. This responsibility is accentuated in the medical profession, and every attempt should be made to provide the patient the emergency care required for his well-being. No person shall be denied first aid and immediate management, once he walks into a clinic, to the extent possible in that particular setup, irrespective of his ability or inability to pay.

Doctors are also reluctant to be witnesses in a court of law, as they may be required to attend the proceedings on multiple occasions, wait for a long time and sometimes face long and unnecessary cross-examination. These apprehensions prevent medical professionals from doing the needful when a person requires emergency treatment. To allay them, the Supreme Court held in Paramanand Katara v. Union of India that "The police, the members of the legal profession, law courts and everyone concerned will also keep in mind that a man in the medical profession should not be unnecessarily harassed for purposes of interrogation or for any other formalities and should not be dragged during investigation to the police station. Our law courts will not summon a medical professional to give evidence unless the evidence is necessary, and even if he is summoned, attempt should be made to see that the men in this profession are not made to wait and waste time unnecessarily. It is also expected that where the facts are so clear, unnecessary harassment of the members of the medical profession, either by way of requests for adjournments or by cross-examination, should be avoided."

These observations of the Supreme Court are not only gratifying but also make sense. The public needs to be educated about the fact, underlined by the court, that no sensible professional would intentionally commit an act or omission which would result in loss or injury to the patient, as his professional reputation is at stake. A single failure may cost the doctor dear in his career; a medical practitioner faced with an emergency situation ordinarily tries his best to redeem the patient out of suffering. In an emergency or a critical case, it is the implicit duty of a member of this noble profession to treat the injured person without waiting either for consent or for fees. Refusal to give treatment would even be violative of the provisions of the code of medical ethics and would constitute a deficiency in service. In a concurring judgment it was said: "when a man in a miserable state, hanging between life and death, reaches the medical practitioner, whether in a hospital run or managed by the State, a public authority or a private person, or a medical professional doing only private practice, he is always called upon to rush to help such an injured person and to do all that is within his power to save life. It is a duty coupled with human instinct, which needs neither decision nor any code of ethics nor any rule or law."

**Triage and emergency**

Stedman's Medical Dictionary defines 'triage' as the medical screening of patients to determine their relative priority for treatment; the separation of a large number of casualties, in military or civilian disaster medical care, into three groups:
1. Those who cannot be expected to survive even with treatment;
2. Those who will recover without treatment; and
3.
Those who need treatment to survive.

The doctor has the absolute right to decide which patient he will examine first, even out of turn, depending on the condition of the patient. Triage means the allocation of injured patients into certain categories, a common scheme being as follows:
1. Critical: within seconds
2. Immediate: within minutes
3. Urgent: within the "golden hour"
4. Deferred: as soon as practical.

**What the IPC says**

Sections 80 and 88 of the Indian Penal Code (IPC) contain defences for doctors accused of criminal liability. Under Section 80 (accident in doing a lawful act), nothing is an offence that is done by accident or misfortune and without any criminal intention or knowledge, in the doing of a lawful act in a lawful manner, by lawful means and with proper care and caution. According to Section 88, a person cannot be accused of an offence if he performs an act in good faith for the other's benefit, does not intend to cause harm even if there is a risk, and the patient has explicitly or implicitly given consent. Section 92 of the IPC offers legal immunity to a registered medical practitioner to proceed with appropriate treatment even without the consent of the patient in an emergency, when the victim is incapable of understanding the nature of the treatment, or when there are no legal heirs to sign the consent. If the patient is conscious and refuses treatment without which he might endanger his life, the surgeon can inform the judicial magistrate and obtain the sovereign power of guardianship over persons under disability.

In the New India Assurance Co. Ltd. v. Dr. Kritikumar S Shera case, it was held that there is a difference in the degree of care, caution and skill in normal times and in the case of an emergency; nobody can expect the same degree and amount of care, caution and skill. The amount of care, skill and caution expected of a reasonable and prudent medical practitioner may not be the same during an emergency. In *Amid Ali Shakir v. St John's Medical College Hospital*, Bangalore, it was held that reasonable delay in shifting accident victims to the operation theatre because of the necessity to correct shock is not negligence.

**Recommendations**

The three-member commission headed by Justice Mr. Jagannadha Rao drafted a bill pertaining to the obligations of private hospitals and practitioners towards accident victims and emergency patients; if implemented, the following guidelines are to be followed by doctors:
a) A hospital cannot refuse an accident victim, even on the ground that it is a medico-legal case.
b) The bill also stipulates punishment for refusing to admit, treat or transfer a patient after emergency treatment to another hospital.
c) The commission lays down a punishment of six months' imprisonment along with a fine of Rs. 10,000/- for the doctor or persons running the hospital if emergency treatment is denied.
d) The commission says the doctor should ensure the provision of sufficient medical support en route for the safe transit of a patient from one hospital to another.
e) In case an ambulance is not available, the doctor should seek the help of the police to transfer the patient.

---

**DEATH AND ITS MEDICOLEGAL ASPECTS**

*Chandrashekar TN*

**Thanatology:** Thanatology is the branch of forensic medicine that deals with death in all its aspects.

**Death:** Indian law defines death as the permanent cessation or disappearance of all evidence of life at any time after live birth has taken place (Sec. 2(b), Registration of Births and Deaths Act, 1969). Medicine
has considered death as the irreversible cessation of life and has classified it into two types:
- Somatic/systemic/clinical
- Molecular/cellular

**Somatic death:** Somatic death is the complete and irreversible stoppage of circulation, respiration and brain functions (Bishop's tripod of life). Diagnosis of somatic death is difficult in conditions like suspended animation/apparent death.

**Moment of death:** The moment at which the brain ceases to function is the moment of death, rather than the cessation of respiration or cardiac function.
- Death is a process, and not an event.
- Medical advances such as ventilators and heart-lung bypass machines have given rise to the concept of brain death.

**Molecular death:** Death of cells and tissues occurs individually.
- It takes place about 3-4 hours after the stoppage of vital functions.
- Different tissues die at different times: nervous tissues die rapidly, while muscles live up to 1-2 hours.

Historically, medically and legally the concept of death was that of "heart and respiration death". Heart-lung bypass machines, ventilators and other devices, however, have changed this medically in favour of a new concept, "brain death", i.e., the irreversible loss of brain functions. The determination of brain death has assumed importance for two reasons:
1. the ability to support vegetative functions for prolonged periods after brain death, and
2. the need for organs for transplantation.

The Transplantation of Human Organs Act (THOA) 1994 recognised and defined brain stem death.

**Types of brain death**

There are three types of brain death:
- Cortical or cerebral death
- Brain stem death
- Whole brain death

a) **Cortical/cerebral death:** There is loss of the power of perception by the senses, but the brain stem is intact, so respiration continues and the person goes into deep coma. It is caused by cerebral hypoxia, widespread brain injury or toxic conditions.

b) **Brain stem death:** Brain stem death is the present criterion for diagnosing death, as adopted in the UK and India. The cerebral cortex may be intact, though it is functionally cut off by the brain stem. There is loss of the vital centres that control respiration and of the ascending reticular activating system that normally sustains consciousness. Thus the victim is irreversibly comatose and incapable of spontaneous breathing. It is caused by raised intracranial pressure, cerebral oedema, intracranial hemorrhage, etc.

c) **Whole brain death:** A combination of cortical death and brain stem death.

**Brain stem death criteria (Harvard criteria)**
- Unreceptivity and unresponsiveness: deep unconsciousness with no response to external stimuli or internal need; unresponsive even to deep painful stimuli.
- No movements and no spontaneous breathing.
- No reflexes.
- Flat, isoelectric electroencephalogram (EEG): not essential, but confirmatory.

**Diagnosis:**
- The patient must be deeply comatose.
- The cause of the coma should be established.
- The cause must be irremediable structural brain damage.
- The patient must be maintained on a ventilator.

**Exclusions:** Brain stem death should not be diagnosed where the patient is under:
1. the effect of drugs: barbiturates, benzodiazepines, opium, neuromuscular blocking agents;
2. a core body temperature below 35°C (hypothermia);
3. severe metabolic abnormalities such as uraemia or diabetic coma, or endocrine disease like hypothyroidism.

Medically and legally the patient is considered dead when brain stem death has taken place, and that same time should appear on the death certificate.
**Brain death needs to be certified by a board of doctors consisting of:**
- The Registered Medical Practitioner (RMP) in charge of the hospital where brain death has occurred;
- An independent RMP, a specialist nominated by the panel;
- A neurologist/neurosurgeon nominated by the panel;
- The RMP treating the patient.

**Tests to be performed:**
- Absence of brainstem reflexes: pupillary, corneal, vestibulocochlear and gag reflexes.
- Apnoea test.

Before certifying brain stem death, the doctor should perform the tests twice, with an interval of, say, 6 hours.

**Transplantation of organs:** Organs can be removed from the dead body within specified times:

| Organ | Time limit after death |
|------|-------------------|
| Lung | Within 15-30 min |
| Heart | Within 1 hour |
| Liver | Within 15 min |
| Kidney | Within 45 min |
| Cornea | Within 2 hours |
| Skin & blood vessels | Within 2-4 hours |
| Bone | Within 6 hours |

**Modes of death**

The mode of death refers to the abnormal physiological state that existed at the time of death. According to Bichat, there are three modes of death, depending upon the system most obviously involved, irrespective of what the remote cause of death may be:
i. Coma
ii. Syncope
iii. Asphyxia

**Manner of death**
1. Natural, and
2. Unnatural

If death occurs exclusively from disease or the ageing process, the manner of death is natural. If death occurs by injury, or is hastened by injury in a person suffering from natural disease, the manner of death is unnatural or violent. Unnatural death may be suicidal, accidental, homicidal, or of undetermined or unexplained origin.

**Mechanism of death**

The mechanism of death refers to the physiological derangement or biochemical disturbance in relation to death.

**Medical certification of cause of death (MCCD)**

The cause of death is the disease or injury responsible for starting the sequence of events, brief or prolonged, which produce death. Causes are divided as follows:
1. **Immediate cause**, i.e., at the time of the terminal event, e.g. septic shock, trauma, haemorrhagic shock, etc.
2. **Antecedent or basic cause:** the pathological processes responsible for the death at the time of the terminal event, or prior to and leading to the event, e.g. a gunshot wound of the abdomen complicated by general peritonitis.
3. **Contributory cause:** a pathological process involved in or complicating, but not causing, the terminal event, e.g. pre-existing diabetes mellitus, hypertension, anaemia, etc.

**Role of the physician in certification of cause of death**
- It is obligatory for the medical practitioner who last attended the deceased to issue a death certificate and forward it to the registering authority.
- He must verify all relevant facts and do his utmost to arrive at the cause of death.
- The cause of death is recorded according to international conventions, the sequence being that adopted by the WHO.
- It is to be based only on clinical findings and not on extraneous factors.
- In case of suspicion or unnatural death, certify the death (not the cause of death) and inform the police.
- A death certificate is not to be withheld, delayed or refused because the doctor has not received his professional fees.

**Prerequisites for certification of cause of death**
- Institutional doctors should fill Form No. 4 along with Form No. 2.
- Non-institutional doctors should fill Form No. 4(A) along with Form No. 2.

**Social aspects of certification of cause of death:** Relatives may plead, persuade, pressurize, offer a price or threaten in order to obtain a death certificate.
**Legal aspects of certification of cause of death**
- The death certificate is a legal document which is proof of death.
- It is to be issued free of cost.
- For failure to provide a death certificate and cause of death, the physician can be prosecuted under Section 39 Cr.P.C., or Sections 175 or 176 I.P.C.

**Ethical aspects of certification of cause of death**
- Preserve confidentiality, except in cases of public interest (HIV/AIDS).

**Tips for issuing a death certificate**
- Issue it free of charge.
- Do not delay issuing the certificate.
- Do your utmost to arrive at the cause of death.
- Take into consideration all your findings.
- The cause of death should be arrived at only on the basis of findings and not on extraneous facts.
- Do not write two or more conditions on a single line.
- Write legibly to avoid being misread.
- Do not use abbreviations to state the cause of death.
- Issue the certificate only if you attended the patient within the 7 days prior to his/her death.
- Issue a single copy of the certificate.
- Retain a carbon/duplicate copy for future reference.
- Do not sign a blank certificate, leaving the particulars to be filled in by someone else.
- Fill in the appropriate forms (as per the Registration of Births and Deaths Act, 1969).
- Never yield to pleas, pressure, price, threats or humanitarian grounds.
- In suspicious/unnatural deaths, certify the death and inform the police.

**When you should not issue a death certificate**
- The cause of death is not known
- Unnatural deaths
- Brought-dead cases
- A crime has already been registered by the police
- The police have already been informed about the case
- Death within 24 hours of admission to casualty
- Sudden deaths
- Suspicion of starvation, exposure or neglect
- Intra- or postoperative deaths
- Suspicion of foul play

A postmortem examination must be carried out to ascertain the cause of death in the above cases. In cases of death occurring in police custody, prison, a children's home, a mental hospital, police firing, etc., a Magistrate's inquest should be carried out before the postmortem examination.

**Sudden natural death:** Death is said to be sudden or unexpected when a person not known to have been suffering from any dangerous disease, injury or poisoning is found dead within 24 hours after the onset of the terminal illness (WHO).
- The incidence is 10% of all deaths.
- No period of life is exempt.
- Aetiology: cardiovascular problems (45-50%), respiratory problems (15-25%), CNS problems (10-15%), alimentary causes (5%), genitourinary causes (5%), miscellaneous (10%), and obscure (5-10%) causes.

---

**CONSENT IN MEDICAL PRACTICE**

*Rajendra N Kagne, Ananda Reddy*

**Introduction**

Doctors practicing ethically and honestly should not have any reason for fear. Law, whether civil, criminal or consumer, can only set the outer limits of acceptable conduct, i.e., the minimum standards of professional care and skill, leaving the question of the ideal to the profession itself. In recent years there have been a number of malpractice suits based on lack of consent, or inadequate consent from the patient, for procedures used in treatment.
The common meaning of consent is permission, whereas the law perceives it as a contract, that is, an agreement enforceable by law.\(^1\) One of the essential features of establishing a contract is consent, which means "an agreement, compliance or permission given voluntarily without any compulsion".\(^2\) The medical graduate (Registered Medical Practitioner) must know what consent is, its types, who can give consent, its relevance in medical practice, and how to safeguard oneself from malpractice suits based on lack of consent or inadequate consent. Obtaining consent is not only an ethical obligation but also a legal compulsion. Hence, it is necessary to understand the importance of consent in medical practice and its legal framework.

**Consent**

Consent is an agreement, compliance or permission given voluntarily without any compulsion. Consent is valid only if it is given after knowing the nature and consequences of the consent and of the act for which it is given.\(^3\)

**Types of consent**

Consent can be implied or expressed (verbal or written).

**Implied consent:** This is seen in routine medical practice and is quite adequate. Consent is implied in the mere fact that the patient comes to the physician with a problem, or when a patient holds out his arm for an injection. The patient does not spell out his consent for treatment specifically; it is understood to have been given. The reason for this is that the procedure of diagnosis and treatment is simple and straightforward, the risks are negligible and uncommon, and the conduct of the patient implies willingness to undergo treatment. If there is the slightest fear of a complication, the doctor should seek expressed consent to safeguard his interests.

**Expressed consent:** This may be written or verbal. Any procedure beyond the routine physical examination, such as an operation, collection of blood, or blood transfusion, needs expressed consent. Consent must be taken before the proposed act, and not at the time of admission to the hospital. For major operations and diagnostic procedures, written consent should be obtained in the presence of a disinterested third party, such as a nurse or receptionist. The nature and consequences of the procedure should be explained to the patient before obtaining the consent.

**Informed consent:** In medical practice anything beyond the routine requires this type of consent. Here the doctor explains to the patient the 'relevant details' regarding the nature of his disease, the diagnostic procedures involved, the course of and alternatives to the treatment proposed, the risks involved and the prognosis. The relative chances of success or failure are explained, so that the patient can take an intelligent decision after obtaining a comprehensive view of the situation. This safeguards the interests of the doctor. The patient may be in dire need of treatment, but revealing the risks involved (the rule of "full disclosure") may frighten him into a refusal. This situation calls for the common sense and discretion of the doctor. What should not be revealed may at times be a problem. In such situations 'therapeutic privilege' is an exception to the rule of "full disclosure": the doctor may, in confidence, consult his colleagues to establish that the patient is emotionally disturbed. Apart from this, it is good practice for the doctor to reveal, in confidence, all the risks involved to one of the close relatives and to involve them in decision making.
Informed consent has now become a must in all operations, anaesthesia procedures and complicated therapeutic procedures. In the years to come, with the great advances in science and people's growing awareness of their rights with respect to consent, informed, written, witnessed consent can only acquire added importance.

**Emergency doctrine**

The emergency doctrine comes into play in situations where the patient has to be treated without obtaining consent. An unconscious patient, the non-availability of a relative or guardian, the lack of time to contact them and the urgency of the situation are important factors which tolerate no delay in treatment. In such situations the 'emergency doctrine' comes into operation, and the law presumes that consent is deemed to have been given. It protects the doctor's interests, giving him immunity from proceedings against him for damages, negligence or assault (Sec. 92 IPC).

**Loco parentis**

In emergency situations involving children, when their parents or guardians are not available, consent is taken from the people who are on the spot. For example, a school teacher can give consent for treating a child taken acutely ill during a picnic away from the hometown. Even if the parents refuse consent, no blame will attach to the surgeon for an operation done to save the life of a child.

**Blanket consent**

An all-encompassing consent to the effect "I authorize so and so to carry out any test/procedure/surgery in the course of my treatment" is not valid. Consent should be specific to a particular event. If consent is taken for microdermabrasion, it is not valid for any other procedure, such as an acid peel; additional consent will have to be obtained before proceeding with the latter. If a consent form says that the patient has consented to undergo laser resurfacing by Dr. X, the procedure cannot be done by Dr. Y, even if Dr. Y is Dr. X's assistant, unless the consent specifically mentions that the procedure may be carried out by Dr. X or Dr. Y (or his authorized assistants). Blanket consent is not legally valid.

**Who can give consent?**

A child above twelve years can give valid consent to suffer any harm which may result from an act done in good faith and for his benefit. Thus a child above 12 years can give valid consent for physical examination, diagnosis and treatment. A child under twelve years, or an insane person, cannot give valid consent to suffer any harm which may occur from an act done in good faith and for his benefit (IPC 89); the consent of the parent or guardian should be taken. If they refuse, the doctor cannot treat the patient.

A child's agreement to medical procedures, in circumstances where he or she is not legally authorized or lacks sufficient understanding to give consent competently, is called 'assent'. Children are considered to give 'assent' when they have sufficient competence to understand the nature, risks and benefits of a procedure, but not enough competence to give fully informed consent.

A person above 18 years can give valid consent to suffer any harm which may result from an act done in good faith and which is not intended or known to cause death or grievous hurt (IPC 87 and 88). Thus, if a surgeon operates on a patient in good faith and for his benefit, the surgeon cannot be held responsible if the operation ends fatally.

**Relevance of consent in medical practice**

**Nature of illness:** The nature of a patient's illness should not be disclosed to a third party without his consent.
A doctor can disclose a secret without consent if it is a privileged communication. A person undergoing trial has the right to prevent the doctor from disclosing his condition to a third party. Convicted persons have no such right, and the doctor can disclose the matter to the authorities.

**Operation and treatment:** The consent of a spouse is not necessary for an operation on or treatment of the other. Even for gynecological operations required to safeguard her health, the consent of the wife alone is sufficient. It is advisable to take the consent of the spouse if the procedure involves danger to life, impairment of sexual function or destruction of an unborn child. When an operation is made compulsory by law, for example vaccination, no consent is necessary.

**Discharge against medical advice:** It is unlawful to detain an adult patient in the hospital against his will. If a patient demands discharge against medical advice, this should be recorded and his signature obtained.

**Professional negligence:** Consent is not a defense in professional negligence.

**Medicolegal context:** In medico-legal cases where an examination is requested by the law, consent must be obtained, whether it is the victim or the assailant who has to be examined. Examination without consent amounts to assault. Examination may reveal findings which, when used in the process of investigation, can damage the party examined; if the party is later proved innocent, the damage sustained cannot be undone. This is why the right to deny consent for examination is generally given to the party. Here the consent is of the informed type. The person should also be told that the examination findings may go against him and can be used as evidence in court.

**Insane person:** Consent is obtained from the parent/guardian/state/relative (IPC 89).

**Criminal cases:** A medical officer can examine an accused under arrest for a crime, without his consent, when the request is made by a police officer not below the rank of an S.I. If the person is not willing, reasonable force can be used (Cr.P.C. 53, 1973).

**Alcohol abuse:** Here the person should not be examined, and blood, urine or breath should not be collected, without his consent. If the person becomes unconscious and is incapable of giving consent, examination and treatment have to be carried out; the consent of the guardian or relatives, if available, should be taken. The findings should not be divulged to the police until the subject regains consciousness and gives consent. When a person is deeply intoxicated and cannot comprehend informed consent, it is advisable to wait till he becomes sober and gives consent for divulging the findings to the authorities.

**Child offenders:** Consent for examination is obtained from the parent or guardian. When the requisition is from a Magistrate, consent for physical examination is not required.

**Marriage and conjugal obligations:** The marriage contract provides bilateral conjugal obligations for a sexual relationship. Therefore, in procedures like sterilization, artificial insemination, etc., involving the genital organs of a married partner, it is advisable to obtain informed consent from both the husband and the wife. Failure to do so may result in the doctor being sued for damages for negligence.

**Rape:** The victim's consent is a must, and the examination shall be made only by, or under the supervision of, a female registered medical practitioner
(CrPC 53 & 54).

**Pregnancy:** Sometimes the diagnosis of pregnancy is difficult, especially in the early months, and the patient may try to conceal it. Here, before the examination, the physician must obtain, preferably in writing, the consent of the woman in the presence of witnesses. Without this consent the physician can be sued in a civil action for damages and criminally for assault. Under the Medical Termination of Pregnancy Act (1971), the consent of the pregnant woman alone is sufficient, provided she has attained the age of 18 years and is not a lunatic. Consent for committing a crime or an illegal act, such as criminal abortion, is invalid, whether or not the act causes injury to the consenting party (IPC 91).

**Delivery:** The consent of the party concerned is required before examination for evidence of delivery.

**Unconscious victim or assailant:** Examination findings can be divulged to the police only after the patient regains consciousness and consents to the disclosure.

**Prisoner:** A prisoner can be treated forcibly, without consent, in the interest of society.

**Inmates of a hostel:** For treating an inmate of a hostel, consent is necessary if he is above 12 years. Below the age of twelve, the Principal or Warden can give consent. If an inmate above 12 years refuses treatment and is likely to spread disease, he can be asked to leave; if he stays on, he will be treated without consent.

**Autopsy:** It is improper and illegal to perform an autopsy without proper consent or authorization. Medico-legal autopsies do not require consent; there the autopsy is done on authorization, as statutory enactment enables the State to order an autopsy in all suspicious and unnatural deaths. A clinical autopsy requires the consent of the surviving spouse or next of kin. Failure to obtain consent is grounds for a charge of mutilation of the deceased and for the 'hurt' sustained by the legal heirs of the deceased (emotional trauma, mental anguish, mental hurt). If it is necessary to remove and retain part of the body for future study and examination, specific consent must be obtained.

**Tissue transplantation:** A living donor above 18 years, provided he is not mentally defective, can give consent for the removal of tissues from his body during life. Consent should be obtained in writing, after the donor has been given independent medical advice as to the risks. For removal of tissues from the body after death, the consent of the deceased should have been obtained in writing at any time, or orally in the presence of two or more witnesses during his last illness. Even if consent was given by the deceased during life, permission must be obtained from the person in possession of the body before removal of tissues (THOA 1994).

**How to safeguard oneself from malpractice suits**

A Registered Medical Practitioner must follow these points to avoid malpractice suits based on lack of consent or inadequate consent:
1. Take consent from the patient, or other valid authority, before undertaking any examination or investigation, providing treatment, or involving patients in teaching and research.
2. Discuss with patients their condition and treatment options in a way they can understand, and respect their right to make decisions about their care.
3. Treat obtaining consent as an important part of the process of discussion and decision making, rather than as something that happens in isolation.
4.
Share information in proportion to the nature of the patient's condition, the complexity of the proposed investigation or treatment, and the seriousness of any potential side effects, complications or other risks.
5. Work with patients on the following principles to ensure good practice in making decisions:
- Listen to patients and respect their views about their health.
- Discuss with patients what their diagnosis, prognosis, treatment and care involve.
- Share with patients the information they want or need in order to make decisions.
- Maximize patients' opportunities, and their ability, to make decisions for themselves.
- Respect patients' decisions.
6. Make an assessment of the patient's condition, taking into account the patient's medical history, your clinical judgment, and the patient's views, experience, knowledge and understanding of the condition, in order to identify which investigations or treatments are likely to result in an overall benefit for the patient.
7. Explain the options to the patient, setting out the potential benefits, risks, burdens and side effects of each option, including the option to have no treatment.
8. Do not put pressure on the patient to accept the particular option which you believe to be best for him. The patient has the right to accept or refuse an option for a reason that may seem irrational, or for no reason at all.
9. If the patient asks for a treatment that you consider would not be of overall benefit, discuss the issues with the patient and explore the reasons for the request. If, after discussion, you still consider that the treatment would not be of overall benefit to the patient, you do not have to provide it, but you should explain your reasons to the patient and explain any other options that are available, including the option to seek a second opinion.
10. If patients are not able to make decisions for themselves, work with those close to the patient and with other members of the health care team.
11. Check whether patients have understood the information given, and whether or not they would like more information before making a decision.
12. Make it clear that they can change their mind about a decision at any time. You must answer patients' questions honestly and, as far as practical, as fully as they wish.
13. No one else can make a decision on behalf of an adult who has capacity. If a patient asks you to make decisions on their behalf, or wants to leave decisions to a relative, partner, friend, caretaker or other person close to them, you should explain that it is still important that they understand the options open to them and what the treatment will involve. If they do not want this information, you should try to find out why.
14. If a patient insists that they do not want even this basic information, you must explain the potential consequences of their not having it, particularly if it might mean that their consent is not valid. You must record the fact that the patient has declined this information. You must also make it clear that they can change their mind and have more information at any time.
15. Do not withhold information necessary for making decisions for any other reason, including when a relative, partner, friend or caretaker asks you to, unless you believe that giving it would cause the patient serious harm. In this context 'serious harm' means more than that the patient might become upset or decide to refuse treatment.
16.
16. If you withhold information from the patient, you must record your reason for doing so in the patient's medical records, and you must be prepared to explain and justify your decision. You should regularly review your decision and consider whether you could give the information to the patient later, without causing them serious harm.
17. Discuss a patient's diagnosis, prognosis and treatment options in the following way:
- Share information in a way that the patient can understand and, whenever possible, in a place and at a time when they are best able to understand and retain it.
- Give information that the patient may find distressing in a considerate way.
- Involve other members of the health care team in discussions with the patient.
- Give the patient time to reflect, before and after they make a decision, especially if the information is complex or what you are proposing involves significant risks.
- Make sure the patient knows if there is a time limit on making their decision, and who they can contact in the health care team if they have any questions or concerns.
18. Support your discussions with patients by using written material, or visual or other aids. If you do, you must make sure the material is accurate and up to date.
19. It is your responsibility to discuss the proposed investigation or treatment with the patient. If this is not practical, you can delegate the responsibility to someone else, provided you make sure that the person you delegate to:
- Is suitably trained and qualified
- Has sufficient knowledge of the proposed investigation or treatment, and understands the risks involved
- Understands, and agrees to act in accordance with, this guidance.
If you delegate, you are still responsible for making sure that the patient has been given enough time and information to make an informed decision, and has given their consent, before you start any investigation or treatment.
20. Keep up to date with developments in your area of practice which may affect your knowledge and understanding of the risks associated with the investigations or treatments that you provide. Clear, accurate information about the risks of any proposed investigation or treatment, presented in a way patients can understand, can help them make informed decisions.
21. Discuss with patients the possibility of additional problems coming to light during an investigation or treatment, when they might not be in a position to make a decision about how to proceed. If there is a significant risk of a particular problem arising, you should ask in advance what the patient would like you to do if it does arise. You should also ask if there are any procedures they object to, or which they would like more time to think about.
22. Ensure that decisions are voluntary, particularly in vulnerable subjects.
23. Respect a patient's decision to refuse an investigation or treatment, even if you think their decision is wrong or irrational.
24. Before accepting a patient's consent, you must consider whether they have been given the information they want or need, and how well they understand the details and implications of what is proposed. This is more important than how their consent is expressed or recorded.
25. If it is not possible to get written consent, for example in an emergency or if the patient needs the treatment to relieve serious pain or distress, you can rely on oral consent. But you must still give the patient the information they want or need to make a decision.
You must record the fact that they have given consent in their medical records.
26. Use the patient's medical records or a consent form to record the key elements of your discussion with the patient. This should include the information you discussed, any specific requests by the patient, any written, visual or audio information given to the patient, and details of any decisions that were made.
27. Before beginning treatment, you or a member of the health care team should check that the patient still wants to go ahead, and you must respond to any new or repeated concerns or questions they raise. This is particularly important if:
- Significant time has passed since the initial decision was made
- There have been material changes in the patient's condition
- New information has become available about any aspect of the proposed investigation or treatment, for example about the risks of treatment or about other treatment options.
28. Make sure that patients are kept informed about the progress of their treatment, and are able to make decisions at all stages, not just in the initial stage. If the treatment is ongoing, you should make sure that there are clear arrangements in place to review decisions and, if necessary, to make new ones.
29. You must record the discussion and any decisions the patient makes. You should make sure that a record of the plan is made available to the patient and others involved in their care, so that everyone is clear about what has been agreed. This is particularly important if the patient has made an advance decision to refuse treatment. You should bear in mind that care plans need to be reviewed and updated as the situation or the patient's views change.
30. There is no standard format for taking consent in all situations. The formats can be modified according to need, and preferably translated into the local language so that the patient can clearly understand the nature of the consent. This will also avoid complications in a suit filed by the patient with respect to consent.\(^7\)
Trust, openness and good communication will ensure a good relationship between doctor and patient. Doctors must respect human life so that patients will trust them. To justify that trust, doctors must meet the standards expected of them in the following domains:
- Knowledge, skills and performance
- Safety and quality
- Communication, partnership and teamwork
- Maintaining trust
A consent form is a legal document. It must contain the name and the signature of the patient, two witnesses, and the doctor along with his registration number. The level of disclosure has to be case-specific; there cannot be anything called a standard consent form. No doctor can sit in comfort with the belief that the "consent" can certainly avoid legal liability. "One cannot know with certainty whether consent is valid until a lawsuit has been filed and resolved." This has been highlighted in a note of the California Supreme Court.\(^8\)
Viscera, tissue and body fluids: Preservation and Forwarding Procedures
Rajendra N Kagne, Ananda Reddy
Preservation of viscera
Indications
Viscera of the victim have to be preserved in the following situations:
1. If death by poisoning is suspected by the police or by the doctor
2. Deceased was intoxicated or habituated to drugs
3. Cause of death was not found after autopsy
4. Death due to accident, suicide or homicide where suspicion of the use of intoxicants, sedatives or poisonous substances is raised
5. Advanced decomposition
6. Accidental death involving a driver or machine operator
7. All brought-dead cases in the casualty
Containers
- Clean, white, wide-mouthed glass bottles of one litre capacity should be used.
- Do not use rubber stoppers, because they may extract poisons such as chloroform and phenols.
- Blood should be collected in a screw-capped bottle of about 150 ml capacity.
Preservation and dispatch of viscera
1. The stomach and small intestine with their contents are preserved in one bottle, and the liver and kidney in another bottle. Blood and urine are preserved separately.
2. The stomach and intestines are cut open before they are preserved. The liver and kidney are cut into multiple small pieces for uniform preservation.
3. Only two-thirds of the capacity of the bottle should be filled with viscera and preservative, to avoid bursting of the bottle due to gas formation during decomposition.
4. The bottle should be covered with a piece of cloth and tied with a string, and the ends should be sealed.
5. The bottles should be properly labeled.
6. A sample of the preservative used should be preserved in a separate bottle.
7. The sealed bottles are put into a box, which is locked, and the lock is sealed.
8. A viscera forwarding letter is to be sent to the Regional Forensic Science Laboratory.
9. The key of the box and the viscera forwarding letter (form), with a sample of the seal, are kept in an envelope, which is sealed and sent with the viscera box.
10. The viscera box is handed over to the police after taking the officer's signature.
PRINCIPLES OF COUNSELING
Sumitha Nayak, Ranjan Kumar Pejaver
Definition
The term counseling denotes a wide variety of procedures for helping individuals achieve adjustment, such as giving advice, therapeutic discussions, the administration and interpretation of tests, and vocational assistance. Counseling is a helping relationship that enables the individual to become self-sufficient, self-dependent and self-directed, and to adjust efficiently to the demands of a better and more meaningful life. According to Carl Rogers, effective counseling consists of a definitely structured, permissive relationship that allows the client to gain an understanding of himself to a degree that enables him to take positive steps in the light of his new orientation. This simply means that counseling is a process in which the counselor provides accurate and up-to-date information to the client (counselee) regarding the situation, and helps the counselee take an informed decision that will allow him to lead a better and more meaningful life.
What does counseling involve?
Counseling is a special area of service provision, as it involves clients who may or may not be directly in medical settings. Counseling is a set of activities wherein the counselor uses his skill and expertise when working with clients. It may involve different methods and activities, such as rational-emotive, psychoanalytic or behavioral approaches, and is a continuous interaction between two or more persons. The counselor provides the facilities that help the counselee make a suitable choice, which in turn helps achieve the desired change or adjustment.
The counselor assists the counselee in interpreting the facts relating to a choice, plan or adjustment which he needs to make. In effect, it is a process that takes place in a one-to-one social environment, in which the counselor, who is professionally competent in relevant psychological skills and knowledge, seeks to assist the client in bringing about a voluntary change. Medical counseling involves a combination of medical, holistic and mind-body techniques that reduce stress and anxiety and promote healing.
Characteristics of a Counselor
A counselor is a therapist who serves as a model for the client. A counselor plays many roles while working with a counselee: he may be an educator, a source of support, an agent of change, a preventive counselor or a resource consultant. To be effective as a counselor, it is essential to possess certain qualities. The relationship of the counselor with the counselee should be honest and dynamic, and the counselor must be a humane person, capable of empathizing with the client or counselee. To be effective, the counselor must:
- Be authentic, sincere and honest: the counselor must be totally honest with the counselee as regards the choices that are available, the possible outcomes, the long-term prognosis and the approximate cost of therapy. It is of utmost importance to gain the confidence of the counselee, and this is impossible if one is ambiguous or hides behind a façade.
- Listen attentively and express thoughts and ideas clearly.
- Maintain confidentiality. This is an essential and unique feature of the relationship of the counselor with the counselee, and allows the client to confide all personal secrets. It is essential for the counselor to understand and evaluate the client's problems and offer adequate solutions.
- Understand that counselees are emotional individuals under stress, and hence be compassionate and empathetic towards them.
- Have a sense of humor: the counselor must have the capacity to accept his mistakes and to laugh at his own contradictions.
- Make choices that are life-oriented: the counselor must be committed to living life fully, and to offering choices that are life-supportive rather than merely existential.
- Have a sincere interest in the welfare of others, based on respect, trust, care and the value of human life.
- Be deeply involved in their work and derive meaning and satisfaction from it.
- Appreciate the influence of culture and respect the diversity of values espoused by other cultures, remaining sensitive to differences in response arising out of social class, gender and race.
- Maintain healthy boundaries with the counselee, despite being fully involved and empathetic with the client. The counselor must not carry around the problems of counselees during leisure hours; this is essential to maintain balance in the counselor's own life.
**Need for counseling**
The need for counseling could arise under varied circumstances, in order to counter various problems, worries, misgivings or issues that a client may face. These may include:
**Fear of the unknown:** Most often in medical settings, the patient may be seriously or terminally ill, or may be in an emergency situation that requires immediate surgery, specialized diagnostic techniques, or expensive medications that need to be administered urgently.
The patient and attendants are fearful, and may be unaware of the gravity of the situation; hence they need to be adequately counseled to understand the seriousness and likely prognosis of the condition. The patients themselves need counseling regarding the severity of their condition, the available treatment options and the possible outcome and prognosis. In chronic conditions like drug abuse, HIV/AIDS or malignancy, and in emergency situations, patients are fearful of what the future holds for them, despite taking adequate treatment.
**Fear of death, dying and grief:** The patient who is seriously ill with a life-threatening ailment may be fearful that he will not respond to treatment and will die soon. It is essential for a counselor to provide emotional support to the patient as well as the family members, to help them face the difficult circumstances. Family members may have feelings of grief and phobias of loss of a loved one. All these need to be handled by the counselor in a very sensitive manner. Death of a family member, especially an offspring, spouse or parent, can cause severe mental stress and grief, and the counselor needs to handle the delicate situation with immense care.
**Denial:** This is a common feeling in the patient as well as those close to him. Often the patient is unable to accept the fact that he suffers from a serious illness. Denial is a natural response and needs to be confronted only if it causes harm. Sometimes denial can affect the counselors themselves, and they need to be constantly aware of this and be able to confront and overcome it.
**Shame and guilt:** The patient suffers from guilt because he has neglected his health, or because he suffers from stigma-bearing diseases like HIV/AIDS or alcohol or drug addiction and failed to take the necessary precautions or medication to avoid the grave situation. The family members also suffer from guilt at having someone close to them suffering from a grave disease.
**Powerlessness, helplessness and loss of control:** The patient feels helpless, as he is totally under the control of the physician and has no control over his own body, his life and his decisions. The family too feels a sense of powerlessness, as they are unable to do anything to alleviate the suffering of their loved one.
**Frustration:** This sets in in cases of chronic illness, where patients need to visit the hospital or meet the physician repeatedly at regular intervals. Sometimes the medications do not show the desired results, or the results appear very slowly. Under these conditions, the patient may develop extreme negativity, which could lead to depression, low self-esteem and feelings of extreme inadequacy. It could lead to self-injurious behavior and suicidal tendencies, which need to be prevented by early recognition and adequate counseling.
**Anger, rage and hostility:** Patients often feel anger and extreme rage when diagnosed with a severe or terminal illness. They develop feelings of hostility and inadequacy and may refuse to accept the available treatment options. It is essential to empathize with these patients and explain the benefits of accepting and continuing the recommended treatment options.
**Essential features of a counseling meeting**
The meeting between the counselor and counselee has a specific purpose and goal, and ends as soon as is therapeutically possible. It is understood that while the counselor has more expertise and is responsible for making the meeting go well, it is the counselee who is more important.
The interactions are structured to make efficient use of the time available; no time is lost in unnecessary talk. The relationship is one of interpersonal influence, in which the counselor seeks to promote change in the client through his skills and persuasive tactics. Both the counselor and the client must come to an agreement as to the causes and etiology of the presenting complaints and what must be done in order to make things better. The most effective relationships are characterized by agreement on goals, consensus on methods, open communication and a collaborative partnership. The relationship is multidimensional and dynamic, and changes over time. What is important in the early stages is less important during the working stages, when an interactional pattern is developed to accomplish therapeutic tasks. Likewise, when counseling is ending, the relationship could return to an egalitarian pattern, much different from the initial stage.
**The counseling meeting**
The meeting between the counselor and the counselee should take place at a predetermined time and place. The duration of the meeting may also be predetermined. Both parties should be made aware that whatever is discussed will be absolutely confidential, and both may freely ask and clarify any doubts arising during the session. At the outset, the counselor must enquire about the language with which the counselee is familiar and comfortable, and every attempt must be made to use this language during the entire meeting. During the meeting, the counselee is encouraged to speak out and clear whatever apprehensions, doubts and thoughts have been disturbing his mind. In acute medical conditions or in emergency situations, the counselor must give the client a complete and concise evaluation of the patient's condition, its seriousness and the possible outcomes, along with the likely cost of the entire treatment. In chronic disabling diseases like HIV/AIDS or malignancies, the client must be motivated to accept and follow the physician's recommendations in terms of medication usage. The adverse effects of the medications or radiotherapy, the likely outcomes of therapy and the subsequent quality of life must all be discussed. In long-drawn treatments, it is essential to discuss the likely costs and economic burden to the family, as well as the follow-up treatment, medications, investigations and reviews that may be needed. Avoiding medical jargon and using language appropriate to each parent's level of understanding is important. For example, early use of the term cerebral palsy to explain motor dysfunction of infancy or spastic paresis desensitizes parents to it and may open avenues of explanation regarding neurologic dysfunction and therapeutic intervention. At the end of every session, it is essential to document all that transpired during the meeting, including what was conveyed by the counselor, the queries raised by the client, the clarifications offered and the possible solutions available. In cases of medical counseling, it is also essential to document the final decision taken by the client as regards the therapy options, with specific mention of the financial expenses that could be incurred. It is pertinent to mention that in cases of medical counseling, it may be mandatory to obtain the client's signature at the end of this documentation.
**Counseling approaches**
*Client-centered approach:* The client-centered approach is a relatively simple method, in which the client feels reassured that he is deeply understood and his feelings are accepted. The counselor adopts a stance of active listening and communicates with the client using a posture of empathic understanding, intently attending to the client's verbal and non-verbal messages and interpreting their surface and underlying meanings. The counselor allows the client to reflect on his feelings. The client's situation and his feelings are viewed objectively, and this provides him an opportunity for emotional catharsis, by releasing the pent-up tensions and pressure. The client is encouraged to move from a more superficial plane to explore deeper concerns and significant problems.
*Existential counseling:* The main goal in existential counseling is to make the client understand and find personal meaning in his actions, his life and his suffering. The emphasis here is on helping to make the client more aware of himself. It is basically an attitude towards living, and emphasizes understanding of, and insight into, the human condition. This approach has been described as more abstract, and it may be difficult to apply in everyday living, as well as for those with severe cognitive and emotional disturbances.
*Psychoanalytic counseling:* Psychoanalytic counseling is the traditional Freudian method, which deals with different layers of awareness. Through the sessions of counseling, it is possible to peel away the outer layers and reveal the unconscious thoughts, which are then allowed to surface. Often the client uses various defense mechanisms, like projection, sublimation and fixation, in order to avoid accepting the undesirable or taking responsibility for the irrational, thus resulting in pain, stress and undue suffering. The psychoanalytic method is time-consuming and could take several years to complete, and hence is not very useful in situations where a large number of people require the services of a counselor, or in acute emergency situations.
*Gestalt counseling:* Gestalt counseling is a method that focuses on the "what" and "how" of behavior and on the central role of "unfinished business". The client is encouraged to experience the present, and this facilitates greater self-awareness and understanding. The technique involves making the client answer all questions with complete honesty and sincerity, thus converting the guilt into resentment. By encouraging him to explore and express the internal resentment, the person can become unstuck and work through his unfinished business. However, this method has a high potential for abuse, as it encourages a "do your own thing" attitude, which may create a sense of irresponsibility.
*Adlerian counseling:* Adlerian counseling is a remarkable theory that combines a pragmatic approach with some amount of psychoanalysis. The client is made to understand that reality is as we perceive it and not absolute; hence one need not be afraid of making mistakes. One must learn to do one's best and accept the outcome, without any feelings of inadequacy.
The counselor is free to choose any method that he is familiar and comfortable with. Sometimes it may be necessary to use a combination of methods in order to achieve the desired results.
The Counseling Process
The counseling process consists of six components that proceed in a sequential manner:
* Diagnosing problems
* Setting appropriate goals
* Specifying objectives
* Generating and deciding among alternatives
* Preparing action plans
* Implementing and evaluating plans.
These steps follow a circular rather than a linear path, as the sixth step may lead back to previous steps if there is no satisfactory progress in the results.
**Diagnosing problems:** This is the first step that must be undertaken before any counseling can begin. Through a series of questioning techniques, statements or communications, it is essential to gather as much information as possible regarding the current problem, its background and the financial situation of the client. As further rapport is built up, the client may reveal sensitive information that was not available at the beginning of the counseling relationship.
**Setting appropriate goals:** Once the problem at hand is understood, it is essential to set up goals. The goals are of two types:
- Process goals
- Outcome goals
Process goals are set by the counselor to build a productive relationship that can support the resolution of the client's problems. Outcome goals are the responsibility of the client, and include setting targets for getting out of problems and moving towards achievement of results. These goals should be specific, achievable and measurable, so that the client is encouraged to reach them.
**Specifying objectives:** It is not always possible for the client to achieve his goals directly, especially those that are long term. Hence it is essential to set interim objectives, which shape the client's behavior on the way to the final objective or goal. These objectives must also be precise, achievable and measurable, and they act as milestones to show the client his progress towards his goals.
**Generating and deciding among alternatives:** Once the objectives are set up, the next step is to decide how to accomplish the goals. First, ask clients if they have considered any methods to achieve their goals; by allowing them to do this, the counselor encourages them to regain control of their lives. Once the client has done this, the counselor may offer alternative suggestions, ask pertinent questions and mention the consequences of each suggestion. Allow the client to decide which alternative he would like to adopt, as this is the only method by which he will learn to make life decisions and evaluate the results of his decisions.
**Preparing action plans:** By creating a written action plan, the client can be helped to understand the real cause of the problem and identify the remedial alternatives that are available to overcome it. The counselor can assist the client in planning action steps that will help to realize the selected remedial action. The counselor must urge the client to be committed to carrying out the action plans and achieving the desired goals.
**Implementing and evaluating plans:** Both positive and negative feedback come into play at this stage. Constructive feedback will help to reinforce the client's actions that have resulted in achieving the desired goals.
By tactfully suggesting methods that can be followed to achieve the goals that have not yet been fulfilled, it is possible to minimize the negative feedback and keep the client continuously motivated to follow the action plan until all the desired goals are achieved. Counseling the patient and their caregivers is an integral part of medical management. If provided in a scientific manner and with compassion, it will be effective. It is important that the counselor knows the client's personality, the client's situation, and the options that can be offered and explained to the client.
**Further Reading**
1. Annamalai University Press. Principles of guidance and counseling. pp. 1-48.
2. Roseman D. Medical counseling. Available at http://www.medicalcounseling.net/
3. Batki SL, Selwyn PA. Substance abuse treatment for persons with HIV/AIDS. Treatment improvement protocol 37. Available at http://www.ncbi.nlm.nih.gov/books/NBK64923/pdf/TOC.pdf
4. Sherman MP. Follow up of the NICU patient. Available at http://emedicine.medscape.com/article/1833812-overview#a30
5. British Association for Counselling and Psychotherapy. Ethical principles of counselling and psychotherapy. Available at http://www.bacp.co.uk/ethical_framework/ethics.php
---
**MEDICAL COUNSELING IN SPECIAL SITUATIONS**
*Sumitha Nayak, Ranjan Kumar Pejaver*
**Introduction**
Counseling is an art as well as a science, and in the medical sphere it is of utmost importance. Though the principles of counseling are the same across the board, some specialties have unique situations which warrant more precise counseling techniques. Some of the salient ones are described below.
**Counseling in Pediatrics**
Pediatrics is the branch of medicine that deals with the medical care of infants, children and adolescents, in the age range of 0-18 years. As such, every medical practitioner who handles children should possess some counseling skills, as the major challenge is that not only does the child have to be treated, but the parents and the family too need considerable counseling.
**Concepts in Pediatric counseling:** Children deal with stress factors similar to those adults face, but their responses vary because of their immaturity. Children too may be faced with, and need to deal with, crime, addiction, disease, death and divorce in the family. Children also face conflicts at home over issues related to medical problems like bed wetting, asthmatic attacks, chronic debilitating illnesses like malignancies, congenital anomalies, dermatitis, etc. Apart from this, issues related to academics, homework, etc. may produce immense stress at home. This may also result in conflicts with teachers, leading to behavioral changes. Peer pressure is another potent cause of conflict and disturbance, both at home and at school. Older children and adolescents attempt to establish their own identity, which can lead to immense conflict and aggression and, if unchecked, can result in behavioral disturbances.
**Behavioral disturbances**
Children can manifest any of the following disturbances, which would point towards an underlying problem and a need for counseling.
These include:
- Aggressive or violent behavior
- Using abusive language
- Inability or difficulty in following rules
- Lack of friends, inability to interact and maintain social contact
- Extreme timidity or withdrawal from social interactions and activities
- Over-dependent behavior
- Irritable, anxious, extremely fearful behavior
- Uncontrollable crying, hysteria
- Sleep disturbances, nightmares
The responses of each child are different, not only from those of adults, but also from those of other children. This is related to:
- the age of the child at exposure
- the intelligence and capacity to accept changed situations
- the emotional stability of the child
- the method of reacting to stressful situations.
Under stress, the child develops various feelings and emotions which are related to immaturity and lack of understanding of the situation. These include feelings of:
- Insecurity
- Stress
- Panic
- Fear
- Anxiety
- Separation
Children are disturbed when they witness others crying. They are also afraid of new situations and environments, like hospitals with large machines, unknown persons and equipment. Some children detest exposure to other child patients, and most children are afraid of injections. Counseling by the primary caregivers can help to provide a safe home environment, which could go a long way in preventing disease, injury and accidents in children. Written consent for counseling sessions, surgery or any other interventions in children is given by the parents until the child attains adulthood. In case the parents are unavailable, the next of kin or guardian must sign the consent form.
**Methods of pediatric counseling:** The conventional methods used in counseling can be used in pediatric counseling. These include behavior and cognitive therapy, besides the psychoanalytic approach. Apart from this, play therapy is an extremely useful method for counseling children. Play therapy is a technique whereby the child's natural means of expression, play, is used as a therapeutic method to assist in coping with emotional stress or trauma. Children are provided therapeutic toys to enable them to express what they are unable to convey in words. The child is allowed to choose play items that have been placed on the table. By playing with the selected materials, and with guidance from the counselor, the child plays out his feelings and brings the hidden emotions to the surface.
**Vulnerable infants**
Infants, especially newborns who suffer from serious illness, prematurity or congenital malformations, or who have been placed in the NICU, are a special group, as the parents and family require extremely delicate handling as well as counseling. The commonest queries on the parents' minds are "will my baby survive?" and "will my baby be normal?" The parents experience extreme psychological trauma, fear of the unknown and feelings of guilt. Counseling the parents of high-risk infants is challenging, as the staff need to be extremely compassionate while delivering information. It is ideal to plan a session soon after the infant is admitted to the NICU, as this is the time to ascertain the parents' expectations regarding the outcome and follow-up care of the infant. It is best to avoid medical jargon, use language appropriate to the parents' level of understanding, and explain in simple terms even complicated issues like cerebral palsy, neuro-motor disabilities and likely outcomes.
While planning discharge and follow-up, the counselor must stress the uncertainty of outcomes, especially in ELBW (extremely low birth weight) infants, and must also discuss the possibility that dysfunction may appear later. A stable and consistent home environment always improves outcomes, while disruption of the family unit only serves to potentiate unfavorable outcomes in the infant.
**Counseling in Obstetric patients**
Prenatal genetic counseling plays an important role in identifying families that may be at risk of having a baby with a birth defect or a genetic syndrome. Genetic counseling may be offered prior to conception in order to discuss issues related to maternal age, family history concerns, history of miscarriage and other reasons. Patients may be referred for genetic counseling for a variety of reasons, which include:
- First trimester screening
- Abnormal maternal serum screening
- Advanced maternal age
- Abnormal ultrasound examination
- Recurrent miscarriage
- Family history of a birth defect or genetic disease
Obesity in pregnancy is associated with higher risks of complications. These women require preconception counseling, during which it is mandatory to provide specific information regarding maternal and fetal risks. Obesity is associated with increased risks of premature delivery and antepartum stillbirth, as well as a higher risk of congenital malformations, including neural tube defects, which are harder to detect by ultrasonography. Intrapartum complications include emergency cesarean delivery, challenges with anesthesia, intraoperative respiratory events, excessive blood loss, wound infection, endometritis, etc. These women must be counseled about all these complications and must be encouraged to undertake a weight reduction program, nutrition counseling and a recommended exercise regimen.
**Fetal loss and stillbirth**
Perinatal loss is a unique bereavement for parents and families, as it is unlike mourning on the death of a loved one. Miscarriage, stillbirth, fetal anomalies and therapeutic abortions for genetic or congenital anomalies create a loss of self-esteem in the parents, apart from the actual loss of the wished-for child. There is a sense of loss of confidence in the capacity to produce a healthy child, and a sense of despair and confusion appears in a family that was anticipating a joyous event. This grief is usually a long-term process that extends much beyond the discharge from hospital. The process of counseling during the prenatal, intrapartum and postpartum phases, along with continuation of support after discharge, promotes healthy grieving and averts a pathological process. The surviving siblings also need to be cared for, as they may be vulnerable to the effects of unresolved or unacknowledged grief. The counselor needs to empathize with the family and assist them in giving up some of the feelings that they have invested in a person who no longer exists. In case of medical termination of pregnancy (MTP) due to medical reasons like congenital anomalies, genetic defects or maternal causes like severe anemia or ABO incompatibility, which threaten the mother or fetus, the parents must be counseled regarding the risks involved in continuing such a pregnancy. Under such conditions, after all the risks and possible outcomes are explained, it is essential to take a written consent from the parents for performing the termination of pregnancy. This must be in a language that is known and easily understood by both parents.
**Cesarean section delivery and Instrumental delivery**
In cases where the possibility of a normal vaginal delivery has been ruled out, the obstetrician must explain in detail the reasons why it would not be possible. The risks to mother and fetus of attempting vaginal delivery are explained to the husband as well as the close family members. The procedure is explained, and the expected outcome and the possible complications must be told to the husband. A written consent to perform the cesarean section or application of forceps is taken by the obstetrician. At the same time, the anesthetist must, after explaining to the husband and the family members, take a written consent for the administration of anesthesia, whether epidural or general. The risks involved and the possibility of any side effects of the anesthetic used must also be explained to the husband and family members.
**Gynaecological surgery**
Postmenopausal women may develop manifestations of pre-existing lesions that warrant hysterectomy; sometimes this may be needed even during the earlier years of life. Since this involves permanent removal of internal organs, the woman must be counseled by her gynecologist regarding the implications of the procedure for her future life. There may be a need to administer continuous hormone therapy after surgery; the indications for this and the possible side effects must be explained in detail. While the consent for surgery must be taken from the patient and the nearest kin, the procedure, the risks involved and the postoperative course must also be completely explained to the patient and the family members.
**Counseling in Critical Care patients**
Patients undergoing critical care treatment in the ICU setting develop feelings of psychological distress, with increasing morbidity. These symptoms are related to the total dependency on others, fear and lack of awareness of the disease process and its outcome, as well as worry regarding the financial capability to clear the hospital bills.
**Need for counseling in ICU patients**
For most ICU patients, the experience is synonymous with immobility and total dependence on high-technology equipment. Most patients are overwhelmed by what they see around them and may be reluctant even to move, despite being able to do so. This further complicates the underlying morbidity and worsens the patient's dependence on others. Those who have been immobilized for long durations have low endurance, generalized weakness, low tolerance to sitting, etc., all of which need to be reversed at the earliest. In the ICU, family visits are restricted to a few minutes, with no privacy at all. The ICU is termed a "low stimulus environment", as it results in sensory deprivation, exposure to meaningless or unpatterned stimuli, social isolation and immobilization, all of which may lead to generalized disorientation of time and thought and occasionally result in delirium. In case the ICU patient's condition deteriorates while on treatment, the family members must be immediately informed regarding the change in status. They must be counseled on the available treatment modalities, and they should be allowed to decide whether aggressive methods should be adopted for treatment or not. In case they do not agree to the suggested therapy, it is essential to document this and take the signature of the responsible family member.
In case there is any delay in instituting therapy, this too must be documented, and the family members should be informed, with signed documentation of the counseling done.
**ICU stress disorders**
Not only during their stay in the ICU, but also after discharge from it, several patients persistently suffer stress and psychological disorders. These include anxiety, depression and post-traumatic stress disorder (PTSD), with a frequency of 40-60% of cases. The quality of life has been found to be worst in those who had undergone mechanical ventilation or had suffered severe trauma or sepsis. The symptoms of PTSD vary based on the cause of ICU admission. In those who had sustained trauma, there may be re-experiencing of the trauma and avoidance of stimuli that act as reminders of the pain. Nightmares, vivid recollections during waking hours and intrusive memories can recur. Hyperarousal with sleep disturbances, irritability and difficulty in concentration may further aggravate the psychological disturbance. All these disorders may persist for a long while even after discharge from the ICU.
**Counseling outcomes**
Intra-ICU counseling of critical care patients can go a long way in reducing morbidity and improving patient outcome, in terms of recovery and rehabilitation. Counseling that provides motivation to live and to participate in life-saving medical regimens will improve outcome. Interventions to restore activities of daily living, along with encouragement to participate in these activities, will restore a sense of daily routine and personal independence. Relaxation techniques, together with an individualized activity program using meaningful tasks, will help to promote cognitive and motor recovery. Together with counseling of the patient, it is essential to counsel the family on a regular basis, with updates on the clinical evolution; the requirement for various investigations must be explained in detail, along with the costs and their impact on treatment. The counselor must understand the family's expectations for the ICU patient and empathize with their feelings of fear, guilt or anxiety. In cases where death is imminent, it is important to discuss the change in the treatment goal. The family is counseled regarding the futility of further aggressive treatment, and the goal of treatment can be changed to "comfort care", where the patient is kept comfortable, with no aggressive or heroic interventions.
**Counseling in surgical care**
Patients are advised surgery for various reasons; the surgery may be elective or an emergency. Usually in case of emergency surgery, the patient and the family members have neither much time nor the desire to contemplate or question the advice given by the medical practitioner. However, in cases of elective surgery, the patient and his family members may want complete details and counseling regarding the need for the procedure, details of the surgery to be performed, the possible risks and likely outcomes, along with postoperative care and return to normalcy. The patient and the family usually have a fear of the unknown: the reason for and severity of the disease process, the surgical procedure and its possible risks. There is also a fear of death in case of surgical mishaps or complications.
**Need for counseling**
Surgical procedures are seen as something that can have a life-changing impact on the patient.
It is essential for the treating doctor to spend time explaining all the facts related to the surgical procedure in a language that is easily understood by the patient and the family members; the surgeon must don the role of a counselor. He must explain the need for the surgery as well as the surgical process, in terms of what would be done during the procedure and the possible complications. A brief must be given regarding the immediate postoperative period, followed by the lifestyle changes, diet modifications or medications that would be needed, and the duration of all of these must also be explained clearly. If possible, the anesthesiologist should also be a part of the counseling team. In case the patient has any co-morbid medical problems for which he is referred to a physician, it is necessary for the physician to explain the surgical risks that may arise from the associated morbidity. While the counseling is done in the patient's own language, it is essential to document it and affix the patient's signature, to ensure that he has well understood all that was conveyed during the session.
**Further Reading**
1. Thomas SG. Pediatric counseling. Available at http://www.slideshare.net/StephinGeorgeThomas/pediatric-counseling
2. Claudius IA. The utility of safe counseling in a pediatric emergency department. Pediatrics 2005;115:e423-e427.
3. Sherman MP. Follow up of the NICU patient. Available at http://emedicine.medscape.com/article/1833812-overview#a30
4. Counseling Corner Inc. Play therapy. Available at http://www.counselingcorner.net/play/
5. University of Rochester Medical Center. About genetic counseling. Available at http://www.urmc.rochester.edu/ob-gyn/maternal-fetal-medicine/genetics/aboutgeneticcounseling.aspx
6. Obesity in pregnancy. Committee Opinion No. 549. American College of Obstetricians and Gynecologists. Obstet Gynecol 2013;121:213-7.
7. Weiss L, Frischer L, Richman J. Parental adjustment to intrapartum and delivery room loss. The role of a hospital-based support program. Clin Perinatol 1989;16(4):1009.
8. Affleck AT, Lieberman S, Polon J, Rohrkemper K. Providing occupational therapy in an intensive care unit. Am J Occup Ther 1986;40(5):323-32.
9. Lier HØ, Biringer E, Stubhaug B, Tangen T. The impact of preoperative counseling on postoperative treatment adherence in bariatric surgery patients: A randomized controlled trial. Patient Education and Counseling 2012;87:336-342.
Introduction
Registered medical practitioners play an important role in the health care delivery system, as they provide about 60% of health care. They are often compelled to focus on providing curative services, but as 'social physicians' it is important for them to pay attention to preventive and promotive care and to the national health programmes, and to address these areas, which largely benefit the poor and needy. Since medical practitioners play such a large role in the implementation of national programmes, it is pertinent for them to know the guidelines of the national health programmes for the prevention and control of the communicable and non-communicable diseases prevailing in the community. The diagnosis, treatment and follow-up of these diseases are mainly based on epidemiological studies and clinical trials. Hence it is important for the medical practitioner to follow the diagnosis and treatment guidelines given under each national programme.
List of National Health Programmes
1. National Vector Borne Disease Control Programme
a. National Anti-Malaria Control Programme
b. Elimination of Lymphatic Filariasis
c. Kala-Azar
d. Japanese Encephalitis
e. Dengue Fever/Dengue Haemorrhagic Fever
f. Chikungunya Fever
2. National Leprosy Eradication Programme
3. Revised National Tuberculosis Control Programme
4. National AIDS Control Programme
5. National Programme for Control of Blindness
6. Universal Immunization Programme
7. National Rural Health Mission
8. Reproductive and Child Health Programme
9. National Programme for Prevention and Control of Cancer, Diabetes, Cardiovascular Diseases and Stroke
10. Integrated Disease Surveillance Project
11. National Mental Health Programme
12. National Guineaworm Eradication Programme
13. Yaws Eradication Programme
14. National Programme for Control and Treatment of Occupational Diseases
15. Nutritional Programmes
a. Vitamin A Prophylaxis Programme
b. Prophylaxis against Nutritional Anaemia
c. Iodine Deficiency Disorders Programme
d. Special Nutrition Programme
e. Balwadi Nutrition Programme
f. ICDS Programme
g. Mid-Day Meal Programme
16. National Family Welfare Programme
17. National Water Supply and Sanitation Programme
18. Minimum Needs Programme
19. 20 Point Programme
NATIONAL LEPROSY ERADICATION PROGRAMME
Leprosy is widely prevalent in India. A total of 0.92 lakh cases were on record as on 1st April 2013, giving a prevalence rate (PR) of 0.73 per 10,000 population. Leprosy has to be considered in the differential diagnosis in any patient with hypopigmented, anaesthetic, anhidrotic or alopecic patches. Leprosy (Hansen's disease) is a chronic infectious disease caused by M. leprae. It mainly affects nerves; it also affects the skin, muscles, eyes, bones, testes and internal organs. It is clinically characterized by one or more of the following features:
a. Hypo- or hyperpigmented patches.
b. Partial or total loss of cutaneous sensation in the affected area - heat, cold, pain and light touch (the earliest sensation to be affected is light touch).
c. Thickening of peripheral nerves, as demonstrated by definite thickening with weakness of the corresponding muscles of the hands, feet or eyes, leading to disabilities or deformities.
d. Demonstration of lepra bacilli in the skin lesions and nasal smear by slit-skin smear. (Fig 1)
In the field, for the purpose of treatment, leprosy is classified based on the number of patches as:
i. Paucibacillary (PB) - 1 to 5 skin patches
ii. Multibacillary (MB) - more than 5 skin patches and/or nerve involvement
1. For adults
i. Multibacillary leprosy: Duration of treatment is 12 months.
| Rifampicin 600mg | Once Monthly | Under Supervision |
|------------------|--------------|-------------------|
| Dapsone 100mg | Daily | Self administered |
| Clofazimine 50mg | Daily | Self administered |
| Clofazimine 300mg | Once Monthly | Under Supervision |
ii. Paucibacillary leprosy: Duration of treatment is 6 months.
| Rifampicin 600mg | Once Monthly | Under Supervision |
|------------------|--------------|-------------------|
| Dapsone 100mg | Daily | Self administered |
2. For children aged 10-14 years
i. Multibacillary leprosy: Duration of treatment is 12 months.
| Rifampicin 450mg | Once Monthly | Under Supervision |
|------------------|--------------|-------------------|
| Dapsone 50mg | Daily | Self administered |
| Clofazimine 50mg | Every other day | Self administered |
| Clofazimine 150mg | Once Monthly | Under Supervision |
ii. Paucibacillary leprosy: Duration of treatment is 6 months.
| Rifampicin 450mg | Once Monthly | Under Supervision |
|------------------|--------------|-------------------|
| Dapsone 50mg | Daily | Self administered |
3. Children below 10 years should receive appropriately reduced dosages of the above drugs.
**Advice to patient**
- Leprosy is curable provided the drugs are consumed regularly, adequately and uninterruptedly. Most deformities are due to negligence and could have been prevented if the patient had been diagnosed early and had taken the treatment regularly as per the schedule.
- Repeated examination of contacts
- Self-care regarding prevention of disabilities
- Ensure provision of MCR/protective footwear for needy persons
- Social support
**Other activities of medical practitioners include the following:**
- Treat cases with ulcers and refer complicated ulcers
- Diagnose leprosy reactions type 1 and 2, neuritis and quiet nerve paralysis; treat them with the prednisolone regime, or refer them if not manageable
- Screen and refer willing cases for reconstructive surgery
- Ensure timely RFT after completion of MDT
- Ensure provision of MCR/protective footwear for needy persons
- Ensure follow-up of cases referred back from the referral centre
- Ensure adequate self-care training is given to all patients with grade 1 and 2 disabilities.
Eradication of leprosy is not possible only by treatment of individual cases in clinics or nursing homes. It requires good knowledge of the epidemiology of leprosy, its mode of transmission, early diagnosis, education, the importance of complete treatment under supervision, etc. The family physician should coordinate with governmental agencies, such as the District Leprosy Officer, for the elimination of leprosy.
NATIONAL PROGRAMME FOR CONTROL OF BLINDNESS
A wide range of eye conditions (acute conjunctivitis, ophthalmia neonatorum, trachoma, superficial foreign bodies, and xerophthalmia) can be treated or prevented at the grass-roots level by locally trained primary health workers, who are the first to make contact with the community. They are provided with essential drugs, such as topical tetracycline and vitamin A capsules, to manage these diseases. Vitamin A concentrate solution and iron and folic acid can be procured from the nearby PHC or maternity homes and dispensed to children and pregnant women. Utilization certificates and beneficiary details are to be submitted to the health officers. One of the important causes of blindness is cataract. Family physicians, during their physical examination of patients, can detect cataract early and refer the patient to a suitable eye hospital (based on the patient's socio-economic condition) for treatment, so that patients are treated at an early stage. Similarly, they can advise suitably regarding care of the eyes, foods rich in vitamin A, etc.
REVISED NATIONAL TUBERCULOSIS CONTROL PROGRAMME (RNTCP)
Although India is only the second-most populous country in the world, it has more new tuberculosis (TB) cases annually than any other country. A pulmonary TB suspect is defined as:
- An individual having cough of 2 weeks or more
- Contacts of smear-positive TB patients having cough of any duration
- Suspected/confirmed extra-pulmonary TB having cough of any duration
- HIV-positive patient having cough of any duration
Persons having cough of 2 weeks or more, with or without other symptoms, are referred to as pulmonary TB suspects. They should have 2 sputum samples examined for AFB. Sputum smear microscopy is the primary tool for diagnosing TB, as it is more specific and has less inter- and intra-reader variability than chest X-ray. A patient with extra-pulmonary TB may have general symptoms like weight loss, fever with evening rise, and night sweats. Other symptoms depend on the organ affected; examples are swelling of a lymph node in TB lymphadenitis, pain and swelling of a joint in TB arthritis, and neck stiffness and disorientation in a case of TB meningitis. Patients with extra-pulmonary TB who also have cough of any duration should have sputum samples examined.
If the smear result is positive, the patient is classified as pulmonary TB, and his/her treatment regimen will be that of a case of smear-positive pulmonary TB.
**Diagnostic algorithm for pediatric pulmonary tuberculosis:** A pediatric pulmonary TB suspect has fever and/or cough for 2 weeks, loss of weight or no weight gain, and/or a history of contact with a suspected or diagnosed case of active TB. If expectoration is present, examine 2 sputum smears. If 1 or 2 smears are positive, the child is managed as smear-positive pulmonary TB. If both are negative, give antibiotics for 10-14 days; if the cough persists, repeat the 2 sputum examinations. If 1 or 2 of the repeat smears are positive, manage as smear-positive pulmonary TB; if both are again negative, give another course of antibiotics (10-14 days), and if the cough still persists, refer to a pediatrician. If there is no expectoration, refer directly to a pediatrician.
**Diagnostic algorithm for pulmonary TB:** For cough of 2 weeks or more, examine 2 sputum smears. If 1 or 2 smears are positive, manage as smear-positive pulmonary TB. If both are negative, give antibiotics for 10-14 days; if the cough persists, repeat the 2 sputum examinations. If 1 or 2 of the repeat smears are positive, manage as smear-positive pulmonary TB; if both are again negative, give antibiotics (10-14 days), and if the cough still persists, refer to a pulmonologist.
| Category of treatment | Type of patient | Regimen | Sputum test at end of intensive phase | If negative | If positive |
|---|---|---|---|---|---|
| Category I (New cases) - Red box | New sputum smear-positive; new sputum smear-negative; new extra-pulmonary; new others | 2(HRZE)3 + 4(HR)3 | 2 months | Start continuation phase; test sputum again at 4 and 6 months | Continue intensive phase for one more month; complete the treatment in 7 months |
| Category II (Previously treated) - Blue box | Sputum smear-positive relapse; sputum smear-positive failure; sputum smear-positive treatment after default | 2(HRZES)3 + 1(HRZE)3 + 5(HRE)3 | 3 months | Start continuation phase; test sputum again at 5 months and on completion | Continue intensive phase for one more month and test sputum again at 4 months; if still positive, send sputum for culture and drug sensitivity testing, as it might be a case of MDR-TB |
E: Ethambutol, H: Isoniazid, MDR: Multi-drug resistant, R: Rifampicin, S: Streptomycin, Z: Pyrazinamide. The number before the letters refers to the number of months of treatment; the subscript after the letters refers to the number of doses per week. The dosage strengths are as follows: Isoniazid (H) 600 mg, Rifampicin (R) 450 mg, Pyrazinamide (Z) 1500 mg, Ethambutol (E) 1200 mg, Streptomycin (S) 750 mg.
- Patients who weigh 60 kg or more receive an additional 150 mg of rifampicin.
- Patients who are more than 50 years old receive streptomycin 500 mg.
- Patients who weigh less than 30 kg receive drugs as per the paediatric weight-band boxes, according to body weight.
In rare and exceptional cases, patients who are sputum smear-negative or who have extra-pulmonary disease can have recurrence or non-response. The diagnosis in all such cases should always be made by a Medical Officer (MO) and should be supported by culture or histological evidence of current, active TB. In these cases the patient should be typed as 'Others' and given the treatment regimen for previously treated cases. The following are the daily doses (mg per kg of body weight per day): Rifampicin 10-12 mg/kg (max 600 mg/day), Isoniazid 10 mg/kg (max 300 mg/day), Ethambutol 20-25 mg/kg (max 1500 mg/day), Pyrazinamide 30-35 mg/kg (max 2000 mg/day) and Streptomycin 15 mg/kg (max 1 g/day).
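These daily doses reduce to simple capped arithmetic: multiply the mg/kg figure by the body weight and truncate at the stated daily maximum. The sketch below illustrates that calculation in Python; it is only an illustration of the figures quoted above (the function name and table layout are ours, not part of any official RNTCP tool), and actual prescribing should follow the weight-band boxes and programme guidance.

```python
# Illustrative only: capped weight-based daily dosing as described in the text.
# drug: (low mg/kg, high mg/kg, maximum mg/day) -- values quoted above.
DAILY_DOSES = {
    "Rifampicin":   (10, 12, 600),
    "Isoniazid":    (10, 10, 300),
    "Ethambutol":   (20, 25, 1500),
    "Pyrazinamide": (30, 35, 2000),
    "Streptomycin": (15, 15, 1000),
}

def daily_dose_range(weight_kg: float) -> dict:
    """Return the (low, high) daily dose in mg for each drug, capped at its maximum."""
    return {
        drug: (min(low * weight_kg, cap), min(high * weight_kg, cap))
        for drug, (low, high, cap) in DAILY_DOSES.items()
    }

# Example: a 20 kg child gets 200-240 mg/day of rifampicin (well below the
# 600 mg cap), whereas a 70 kg adult's computed 700-840 mg is capped at 600 mg.
print(daily_dose_range(20))
```

For a 20 kg child, for instance, this yields rifampicin 200-240 mg/day and pyrazinamide 600-700 mg/day; the maxima only come into play at higher body weights.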
**Follow-up of paediatric TB cases**

For the monitoring of treatment, follow-up sputum examinations are to be performed with the same frequency in children as in adults. Clinical or symptomatic improvement is to be assessed at the end of the intensive phase and at the end of treatment. Improvement should be judged by absence of fever or cough, weight gain, etc. Radiological improvement is to be assessed by a chest X-ray examination in all smear-negative pulmonary TB cases at the end of treatment. Radiological changes may persist, may not correlate with clinical improvement, and hence should not cause concern.

Medical practitioners and their assistants can become DOTS providers. The facilities for diagnosis and the anti-tubercular drugs are available free of cost at all primary health centres, TB units and DOTS (directly observed treatment, short course) centres.

**Advice to the patient**

1. The patient should be advised to take the drugs regularly, adequately and without interruption, as per the schedule.
2. Sputum should be collected in a lidded container holding a disinfectant such as phenol, and should be disposed of safely to prevent the spread of infection.
3. The patient should be instructed to cover his mouth and nose while coughing, sneezing and talking, to prevent the spread of infection to others.
4. He should keep children away from him.
5. Ensure regular follow-ups by the patient.
6. Ensure BCG vaccination is administered to all children in the family.

Please note. The total number of tuberculosis cases in India is estimated to be 4 per 1000 population. In Bangalore city, with an approximate population of 8 million, the estimated number of tuberculosis cases will therefore be about 32,000. These cases should be diagnosed by sputum examination and treated as per RNTCP guidelines.

### NATIONAL AIDS CONTROL PROGRAMME

| State/Union Territory | Antenatal clinic HIV prevalence 2010-11 (%) | STD clinic HIV prevalence 2007 (most recent data) (%) | IDU HIV prevalence (%) | MSM HIV prevalence (%) | Female sex worker HIV prevalence 2010-11 (%) |
|---|---|---|---|---|---|
| India | 0.40 | 3.6 | 2.67 | 7.14 | 4.43 |
| Karnataka | 0.69 | 8.40 | 0.00 | 5.36 | 5.10 |

### WHO case definition for AIDS surveillance

For the purposes of AIDS surveillance, an adult or adolescent (>12 years of age) is considered to have AIDS if at least 2 of the following major signs are present in combination with at least 1 of the minor signs listed below, and if these signs are not known to be due to a condition unrelated to HIV infection.

**Major signs**

- weight loss > 10% of body weight
- chronic diarrhoea for more than 1 month
- prolonged fever for more than 1 month (intermittent or constant)

**Minor signs**

- persistent cough for more than 1 month
- generalized pruritic dermatitis
- history of herpes zoster
- oropharyngeal candidiasis
- chronic progressive or disseminated herpes simplex infection
- generalized lymphadenopathy

The presence of either generalized Kaposi sarcoma or cryptococcal meningitis is sufficient for the diagnosis of AIDS for surveillance purposes. For patients with tuberculosis, persistent cough for more than 1 month should not be considered as a minor sign.

Diagnosis is made by ELISA testing using 2 different kits, with confirmation by Western blot. As these cases require counselling and treatment, they are referred to Integrated Counselling and Testing Centres and Anti-Retroviral Therapy centres, where counselling and drugs are available free of cost.
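The surveillance definition above is essentially a counting rule, which the sketch below makes explicit. It is a minimal Python illustration: the sign strings and function name are invented, and the requirement that signs not be explained by a condition unrelated to HIV is left to clinical judgement.

```python
def meets_aids_surveillance_definition(major_signs, minor_signs,
                                       has_tb=False,
                                       generalized_kaposi=False,
                                       cryptococcal_meningitis=False):
    """WHO surveillance rule for adults/adolescents (>12 years):
    >=2 major signs plus >=1 minor sign; generalized Kaposi sarcoma or
    cryptococcal meningitis alone is sufficient."""
    if generalized_kaposi or cryptococcal_meningitis:
        return True
    minors = set(minor_signs)
    # In TB patients, persistent cough >1 month is not counted as a minor sign.
    if has_tb:
        minors.discard("persistent cough > 1 month")
    return len(set(major_signs)) >= 2 and len(minors) >= 1

print(meets_aids_surveillance_definition(
    {"weight loss > 10%", "chronic diarrhoea > 1 month"},
    {"history of herpes zoster"}))  # -> True
```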
NACO ART CENTRES

- Bowring & Lady Curzon Hospitals, Bangalore
- Mysore Medical College, Mysore
- KIMS, Hubli
- VIMS, Bellary
- District hospital, Davangere
- District hospital, Mangalore
- District hospital, Gulbarga
- District hospital, Belgaum
- District hospital, Bijapur
- District hospital, Kolar
- District hospital, Raichur

Please note. There is an increase in the prevalence of HIV/AIDS, hence the need for more information about HIV/AIDS among family physicians. Cases shall be diagnosed with the help of the major and minor signs and confirmed by laboratory tests. Family physicians should know where to refer HIV-positive cases for free laboratory diagnosis, treatment and counselling (facilities are available in all Government hospitals).

| Vaccine | Age | Dose | Route | Site |
|---|---|---|---|---|
| DPT | 16-24 months | 0.5 ml | Intra-muscular | Antero-lateral side of mid-thigh |
| OPV booster | 16-24 months | 2 drops | Oral | Oral |
| Japanese Encephalitis** | 16-24 months with DPT/OPV | 0.5 ml | Sub-cutaneous | Left upper arm |
| Vitamin A*** (2nd to 9th dose) | 16 months with DPT/OPV booster; then one dose every 6 months up to the age of 5 years | 2 ml (2 lakh IU) | Oral | Oral |
| DT booster | 5-6 years | 0.5 ml | Intra-muscular | Upper arm |
| TT | 10 years & 16 years | 0.5 ml | Intra-muscular | Upper arm |

BCG: Bacillus Calmette-Guérin; D: Diphtheria; OPV: oral polio vaccine; P: Pertussis; T: Tetanus; TT: Tetanus toxoid.

* Give TT-2 or booster doses before 36 weeks of pregnancy; however, give these even if more than 36 weeks have passed. Give TT to a woman in labour if she has not previously received TT.
** SA 14-14-2 vaccine, in select endemic districts after the campaign.
*** The 2nd to 9th doses of vitamin A can be administered to children 1-5 years old during biannual rounds, in collaboration with ICDS.

Please note. A pentavalent vaccine is now available to prevent five diseases: diphtheria, pertussis, tetanus, Haemophilus influenzae type b and hepatitis B. The newer immunization schedule provides vaccination contacts (DPT/pentavalent, OPV, IPV as applicable) at birth, 6 weeks, 10 weeks, 14 weeks, 9 months, 16-18 months and 5-6 years; it is applicable only to infants coming for their first dose at 6 weeks of age, and DPT is administered for the booster dose. Vaccines can be procured from the nearby PHC or maternity homes and administered to children and pregnant women. Utilization certificates and beneficiary details are to be submitted to the health officers.

### NATIONAL PROGRAMME FOR PREVENTION AND CONTROL OF CANCER, DIABETES, CARDIOVASCULAR DISEASES AND STROKE

Medical practitioners help in the diagnosis of non-communicable diseases by screening. Screening of people can be done by the medical officers by blood pressure recording and blood sugar and lipid profile assessment. Diagnosed cases of non-communicable disease can be referred to CHCs or higher centres.
Health education and health promotion for behavioural and lifestyle changes can be carried out through interpersonal communication, posters, banners, etc. at their clinics.

**Hypertension**

Classification of blood pressure for adults

| Blood Pressure Classification | SBP mm Hg | DBP mm Hg |
|---|---|---|
| Normal | < 120 | and < 80 |
| Pre-hypertension | 120-139 | or 80-89 |
| Stage 1 Hypertension | 140-159 | or 90-99 |
| Stage 2 Hypertension | ≥ 160 | or ≥ 100 |

SBP, systolic blood pressure; DBP, diastolic blood pressure.

**Treatment**

Lifestyle modifications to manage hypertension*†

| Modification | Recommendation | Approximate SBP Reduction (Range) |
|---|---|---|
| Weight reduction | Maintain normal body weight (body mass index 18.5-24.9 kg/m²). | 5-20 mm Hg per 10 kg weight loss |
| Adopt DASH eating plan | Consume a diet rich in fruits, vegetables, and low-fat dairy products with a reduced content of saturated and total fat. | 8-14 mm Hg |
| Dietary sodium reduction | Reduce dietary sodium intake to no more than 100 mmol per day (2.4 g sodium or 6 g sodium chloride). | 2-8 mm Hg |
| Physical activity | Engage in regular aerobic physical activity such as brisk walking (at least 30 min per day, most days of the week). | 4-9 mm Hg |
| Moderation of alcohol consumption | Limit consumption to no more than 2 drinks (1 oz or 30 mL ethanol; e.g., 24 oz beer, 10 oz wine, or 3 oz 80-proof whiskey) per day in most men, and no more than 1 drink per day in women and lighter-weight persons. | 2-4 mm Hg |

DASH, Dietary Approaches to Stop Hypertension.
* For overall cardiovascular risk reduction, stop smoking.
† The effects of implementing these modifications are dose and time dependent, and could be greater for some individuals.

**Treatment algorithm**

Begin with lifestyle modifications. If not at goal blood pressure (<140/90 mm Hg; <130/80 mm Hg for patients with diabetes or chronic kidney disease), proceed to initial drug choices.

Without compelling indications:

- Stage 1 hypertension (SBP 140-159 or DBP 90-99 mm Hg): thiazide-type diuretics for most; may consider ACEI, ARB, BB, CCB, or a combination.
- Stage 2 hypertension (SBP ≥ 160 or DBP ≥ 100 mm Hg): two-drug combination for most (usually a thiazide-type diuretic plus ACEI, ARB, BB, or CCB).

With compelling indications:

- Drug(s) for the compelling indications (see Table 8), with other antihypertensive drugs (diuretics, ACEI, ARB, BB, CCB) as needed.

If still not at goal blood pressure, optimize dosages or add additional drugs until goal blood pressure is achieved; consider consultation with a hypertension specialist.

DBP, diastolic blood pressure; SBP, systolic blood pressure. Drug abbreviations: ACEI, angiotensin-converting enzyme inhibitor; ARB, angiotensin receptor blocker; BB, beta-blocker; CCB, calcium channel blocker.
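Because the classification above is a pure threshold rule, it can be written down directly; by the usual convention, the higher of the SBP and DBP categories determines the classification. The Python sketch below is illustrative only, with invented names.

```python
def classify_bp(sbp, dbp):
    """Classify adult blood pressure (mm Hg) per the table above; the
    higher of the two component categories wins."""
    def category(value, cuts):
        # cuts are the lower bounds of pre-hypertension, stage 1, stage 2
        return sum(value >= c for c in cuts)
    labels = ["Normal", "Pre-hypertension",
              "Stage 1 Hypertension", "Stage 2 Hypertension"]
    return labels[max(category(sbp, (120, 140, 160)),
                      category(dbp, (80, 90, 100)))]

print(classify_bp(150, 85))  # -> Stage 1 Hypertension
```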
**Diabetes mellitus**

Current criteria for the diagnosis of diabetes:

- A1C ≥ 6.5%. The test should be performed in a laboratory using a method that is NGSP-certified and standardized to the Diabetes Control and Complications Trial (DCCT) assay; or
- Fasting plasma glucose (FPG) ≥ 126 mg/dL (7.0 mmol/L). Fasting is defined as no caloric intake for at least 8 h; or
- 2-h plasma glucose ≥ 200 mg/dL (11.1 mmol/L) during an oral glucose tolerance test (OGTT). The test should be performed as described by the World Health Organization, using a glucose load containing the equivalent of 75 g anhydrous glucose dissolved in water; or
- In a patient with classic symptoms of hyperglycemia or hyperglycemic crisis, a random plasma glucose ≥ 200 mg/dL (11.1 mmol/L).

**Screening for and diagnosis of GDM**

Perform a 75-g OGTT, with plasma glucose measurement fasting and at 1 and 2 h, at 24-28 weeks of gestation in women not previously diagnosed with overt diabetes. The OGTT should be performed in the morning after an overnight fast of at least 8 h. The diagnosis of GDM is made when any of the following plasma glucose values is met or exceeded:

- Fasting: 92 mg/dL (5.1 mmol/L)
- 1 h: 180 mg/dL (10.0 mmol/L)
- 2 h: 153 mg/dL (8.5 mmol/L)

**Treatment**

- Insulin therapy for type 1 diabetes.
- Pharmacological therapy for hyperglycemia in type 2 diabetes:
  - Metformin, if not contraindicated and if tolerated, is the preferred initial pharmacological agent for type 2 diabetes.
  - In newly diagnosed type 2 diabetic patients with markedly symptomatic and/or elevated blood glucose levels or A1C, consider insulin therapy, with or without additional agents, from the outset.
  - If non-insulin monotherapy at the maximal tolerated dose does not achieve or maintain the A1C target over 3-6 months, add a second oral agent, a glucagon-like peptide-1 (GLP-1) receptor agonist, or insulin.
  - A patient-centered approach should be used to guide the choice of pharmacological agents. Considerations include efficacy, cost, potential side effects, effects on weight, comorbidities, hypoglycemia risk, and patient preferences.
  - Due to the progressive nature of type 2 diabetes, insulin therapy is eventually indicated for many patients with type 2 diabetes.

**Physical activity**

- Adults with diabetes should be advised to perform at least 150 min/week of moderate-intensity aerobic physical activity (50-70% of maximum heart rate), spread over at least 3 days/week with no more than 2 consecutive days without exercise.
- In the absence of contraindications, adults with type 2 diabetes should be encouraged to perform resistance training at least twice per week.
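The diagnostic cut-offs above translate directly into threshold checks. The sketch below is illustrative (names invented; glucose values in mg/dL) and omits the confirmatory repeat testing expected in routine practice.

```python
def diabetes_criteria_met(a1c=None, fpg=None, ogtt_2h=None,
                          random_glucose=None, classic_symptoms=False):
    """True if any one of the diagnostic criteria listed above is met."""
    return ((a1c is not None and a1c >= 6.5)
            or (fpg is not None and fpg >= 126)
            or (ogtt_2h is not None and ogtt_2h >= 200)
            or (classic_symptoms
                and random_glucose is not None and random_glucose >= 200))

def gdm_from_75g_ogtt(fasting, one_hour, two_hour):
    """GDM is diagnosed when ANY of the 75-g OGTT values meets or
    exceeds its threshold."""
    return fasting >= 92 or one_hour >= 180 or two_hour >= 153

print(diabetes_criteria_met(fpg=131))    # -> True
print(gdm_from_75g_ogtt(90, 185, 140))   # -> True (on the 1-h value)
```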
**Cancer**

**Warning signs**

- Unusual bleeding/discharge
  - Blood in urine or stools
  - Discharge from any part of the body, for example the nipples or penis
- A sore which does not heal
  - does not seem to be getting better over time
  - is getting bigger
  - is getting more painful
  - is starting to bleed
- Change in bowel or bladder habits
  - Changes in the colour, consistency, size, or shape of stools (diarrhoea, constipation)
  - Blood present in urine or stool
- Lump in the breast or other part of the body
  - Any lump found in the breast when doing a self-examination
  - Any lump in the scrotum when doing a self-examination
  - Other lumps found on the body
- Nagging cough
  - Change in voice/hoarseness
  - Cough that does not go away
  - Sputum with blood
- Obvious change in moles (use the ABCD rule)
  - Asymmetry: does the mole look the same in all parts, or are there differences?
  - Border: are the borders sharp or ragged?
  - Colour: what colours are seen in the mole?
  - Diameter: is the mole bigger than a pencil eraser (6 mm)?
- Difficulty in swallowing
  - Feeling of pressure in the throat or chest which makes swallowing uncomfortable
  - Feeling full without food or with a small amount of food

Medical practitioners can provide health education regarding these warning signs, and also play an important role in early diagnosis through investigations such as the Pap smear for cervical cancer and self-examination for breast cancer, as well as screening for oral and other tobacco-related cancers. For treatment, they should refer patients to cancer institutes.

NATIONAL RURAL HEALTH MISSION

The programmes to be integrated are the existing programmes of health and family welfare, including RCH II; the national vector borne disease control programmes against malaria, filaria, kala-azar, dengue fever, dengue haemorrhagic fever (DHF) and Japanese encephalitis; the national leprosy eradication programme; the revised national tuberculosis control programme; the national programme for control of blindness; the iodine deficiency disorders control programme; and the integrated disease surveillance project.

REPRODUCTIVE AND CHILD HEALTH PROGRAMME

- All mothers should be registered as soon as the pregnancy is confirmed.
- All pregnant mothers should be advised to come for a minimum of 3 antenatal check-ups and 2 doses of tetanus toxoid.
- All pregnant mothers should be advised to take a prophylactic dose of iron and folic acid tablets (100 mg of elemental iron & 500 mcg of folic acid) daily for 100 days. In case of mild to moderate anaemia, a therapeutic dose in the form of two tablets daily (200 tablets) for 100 days has to be advised during the second and third trimesters of pregnancy.
- Advise the mother to take an adequate, balanced diet.
- High-risk pregnancies should be detected early and promptly referred to FRUs such as maternity homes, district hospitals, community health centres and hospitals attached to medical colleges, so that deliveries are conducted in institutions.
- Eligible couples should be advised to delay the first pregnancy and to maintain proper birth spacing by using contraceptive measures.
- Couples with completed families should be advised permanent sterilization, and medical termination of pregnancy for unwanted pregnancies.

For prevention and control of RTI/STD, patients should be advised regular check-ups by the gynaecologist, given advice regarding genital hygiene such as the use of sanitary pads, and offered early diagnosis and treatment according to the syndromic approach.

Family physicians should practise early diagnosis of dehydration and rehydration based on the diarrhoeal diseases control programme guidelines. They should be involved in acute respiratory infection control, for which they should be trained to recognize and treat pneumonia based on respiratory rate and other guidelines.

For prevention and control of vitamin A deficiency, massive doses of vitamin A are given under the programme to all children under 5 years of age. The first dose (1 lakh units) is given at nine months of age along with measles vaccination. The second dose (2 lakh units) is given along with the DPT/OPV booster doses. Subsequent doses (2 lakh units each) are given at six-month intervals. In all cases of severe malnutrition, one additional dose of vitamin A should be given.

Infants from the age of 6 months up to 5 years should be given iron supplementation in a liquid formulation of 20 mg elemental iron and 100 micrograms folic acid for 100 days in a year. Children aged 6-10 years should be given 30 mg elemental iron and 250 micrograms folic acid for 100 days in a year. Vitamin A can be procured from the nearby PHC or maternity homes and administered to children. Utilization certificates and beneficiary details are to be submitted to the health officers.
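The supplementation rules above are simple age-banded lookups. The following minimal Python sketch shows the arithmetic; the dose-number convention and function names are assumptions for illustration, and programme guidelines govern actual practice.

```python
def vitamin_a_dose_iu(dose_number):
    """IU for prophylactic vitamin A round 1-9: 1 lakh IU at 9 months
    (with measles vaccine), then 2 lakh IU every 6 months up to 5 years."""
    if not 1 <= dose_number <= 9:
        raise ValueError("the programme covers doses 1-9")
    return 100_000 if dose_number == 1 else 200_000

def ifa_daily_supplement(age_years):
    """(elemental iron mg, folic acid µg) daily for 100 days/year, per the
    age bands above; None outside those bands."""
    if 0.5 <= age_years < 6:
        return (20, 100)   # liquid formulation
    if 6 <= age_years <= 10:
        return (30, 250)
    return None
```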
INTEGRATED MANAGEMENT OF NEONATAL AND CHILDHOOD ILLNESS (IMNCI)

RATIONALE FOR AN INTEGRATED, EVIDENCE-BASED, SYNDROMIC APPROACH TO CASE MANAGEMENT

Many well-known prevention and treatment strategies have already proven effective for saving young lives. While each of these interventions has been successful on its own, accumulating evidence suggests that an integrated approach to managing sick children is needed to achieve better outcomes. Child health programmes need to move beyond single diseases and address the overall health and well-being of the child. Because many children present with overlapping signs and symptoms of several diseases, a single diagnosis can be difficult, and may not be feasible or appropriate. This is especially true for first-level health facilities, where examinations involve few instruments, negligible laboratory tests, and no X-ray. During the mid-1990s, WHO, in collaboration with UNICEF and many other agencies, institutions and individuals, responded to this challenge by developing a strategy known as the Integrated Management of Childhood Illness (IMCI).

**Key features of IMCI**

- IMCI is an integrated approach to child health that focuses on the well-being of the whole child. It aims to reduce death, illness and disability, and to promote improved growth and development among children under 5 years of age.
- The IMCI strategy promotes the accurate identification of childhood illnesses in outpatient settings.
- It ensures appropriate combined treatment of all illnesses.
- It strengthens the counselling of caretakers.
- It speeds up the referral of severely ill children.
- It promotes appropriate care-seeking behaviours, improved nutrition and preventive care, and the correct implementation of prescribed care.

**IMNCI in India**

- Incorporation of neonatal care, as it now constitutes two-thirds of infant mortality; inclusion of the 0-7 day age group.
- Incorporation of national guidelines on malaria, anaemia, vitamin A supplementation and the immunization schedule.
- Training begins with the sick young infant up to 2 months; the proportion of training time devoted to the sick young infant and the sick child is almost equal. Training is skill-based.
- Home visits for young infants.

**Schedule**

- All new-borns: 3 visits (within 24 hours of birth, on day 3-4 and on day 7-10).
- New-borns with low birth weight: 3 more visits, on days 14, 21 and 28.
- Provision of home-based new-born care to (1) promote exclusive breastfeeding, (2) prevent hypothermia and (3) improve illness recognition and timely care seeking.

**Components of the IMNCI strategy**

i. Improvements in the case-management skills of health staff through the provision of locally adapted guidelines on Integrated Management of Neonatal and Childhood Illness and activities to promote their use.
ii. Improvements in the overall health system required for effective management of neonatal and childhood illness.
iii. Improvements in family and community health care practices.

**IMNCI components and intervention areas**

| Improve health worker skills | Improve health systems | Improve family & community practices |
|---|---|---|
| ➔ Case management standards & guidelines | ➔ District planning and management | ➔ Appropriate care-seeking |
| ➔ Training of facility-based public health care providers | ➔ Availability of IMCI drugs | ➔ Nutrition |
| ➔ IMNCI roles for private providers | ➔ Quality improvement and supervision at health facilities | ➔ Home case management & adherence to recommended treatment |
| ➔ Maintenance of competence among trained health workers | ➔ Referral pathways and services | ➔ Community involvement in health services planning & monitoring |
| | ➔ Health information system | |

**Principles of integrated care**

- All sick young infants up to 2 months of age must be assessed for "possible bacterial infection/jaundice". They must then be routinely assessed for the major symptom "diarrhoea".
- All sick children aged 2 months up to 5 years must be examined for "general danger signs", which indicate the need for immediate referral or admission to a hospital. They must then be routinely assessed for the major symptoms: cough or difficult breathing, diarrhoea, fever and ear problems.
- All sick young infants and children 2 months up to 5 years must also be routinely assessed for nutritional and immunization status, feeding problems, and other potential problems.
- Only a limited number of carefully selected clinical signs are used, based on evidence of their sensitivity and specificity to detect disease. A combination of individual signs leads to a child's classification(s) rather than a diagnosis.
- Classifications indicate the severity of the conditions. They call for specific actions based on whether the child:
  - needs urgent hospital referral or admission (classified and colour-coded pink),
  - needs specific medical treatment or advice (classified and colour-coded yellow), or
  - can be managed at home (classified and colour-coded green).
- A limited number of essential drugs is used, and active participation of caretakers in the treatment of infants and children is encouraged.

**Elements of the case management process**

- Assess the child by checking for danger signs.
- Classify the child's illness using the colour-coded triage system.
- Identify specific treatments.
- Treat: give instructions on oral drugs, feeding and fluids.
- Counsel the mother about breastfeeding, about her own health, and on following further instructions on child care.

IMCI case management is carried out at the first-level health facility, at the referral level and at home.

**Strengths of IMNCI**

- Evidence-based management decisions
- Feasible to incorporate into both pre-service and in-service training
- Hands-on clinical training for 50% of training time
- Focus on communication and counselling skills
- Locally adapted recommendations for infant and young child feeding
- Cost-effective
- Lowers the burden on hospitals
- A model for improving health care

NATIONAL VECTOR BORNE DISEASE CONTROL PROGRAMME

Vector-borne diseases are major public health problems globally, including in India. They are complex in nature, and their presence depends on numerous biological, social, economic and ecological factors.

**National Anti-malaria Control Programme**

Patients with fever associated with chills and rigors should be examined for the malarial parasite by:

1. Peripheral blood smear
2. The QBC method
3. Rapid diagnostic tests

The most widely accepted is the peripheral blood smear.
**Treatment (where the microscopy result is available within 24 hours)**

Primaquine is contraindicated in infants, pregnant women and individuals with G6PD deficiency. The 14-day regimen of primaquine should be given under supervision.

**Treatment of uncomplicated P. falciparum cases**

1. Artemisinin-based combination therapy (ACT-SP)*: artesunate 4 mg/kg body weight daily for 3 days, plus sulphadoxine (25 mg/kg body weight) and pyrimethamine (1.25 mg/kg body weight) on the first day, with single-dose primaquine on the second day. An age-wise dosage chart gives the number of tablets per age band (less than 1 year, 1-4 years, 5-8 years, 9-14 years, 15 years or more, and pregnancy).

**Treatment of vivax malaria**

Diagnosis of vivax malaria may be made by use of an RDT (bivalent) or by microscopic examination of the blood smear. On confirmation, the following treatment is to be given; an age-wise dosage chart in CQ (250 mg) and PQ (2.5 mg) tablets applies:

1. Chloroquine: 25 mg/kg body weight divided over three days, i.e. 10 mg/kg on day 1, 10 mg/kg on day 2 and 5 mg/kg on day 3.
2. Primaquine*: 0.25 mg/kg body weight daily for 14 days.

Mixed infections with P. vivax and P. falciparum should be treated as P. falciparum.
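The weight-based arithmetic of this regimen is sketched below in Python for illustration only; names are invented, doses are in mg, and the programme's age-band tablet charts, not this sketch, govern dispensing.

```python
def vivax_treatment_schedule(weight_kg, g6pd_deficient=False,
                             pregnant=False, infant=False):
    """Chloroquine 25 mg/kg split 10/10/5 mg/kg over days 1-3, plus
    primaquine 0.25 mg/kg daily for 14 days unless contraindicated."""
    schedule = {f"day {d}": {"chloroquine_mg": mg_per_kg * weight_kg}
                for d, mg_per_kg in ((1, 10), (2, 10), (3, 5))}
    if not (g6pd_deficient or pregnant or infant):
        for d in range(1, 15):
            schedule.setdefault(f"day {d}", {})["primaquine_mg"] = 0.25 * weight_kg
    return schedule

# Example: 40 kg patient -> day 1: CQ 400 mg + PQ 10 mg, ..., day 14: PQ 10 mg
print(vivax_treatment_schedule(40))
```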
**Treatment of severe malaria cases**

Severe malaria is an emergency, and treatment should be given as per the severity and associated complications, which can best be decided by the treating physician. Before admitting or referring patients, the attending doctor or health worker, whoever is able to do it, should do an RDT and take a blood smear, give a parenteral dose of an artemisinin derivative or quinine in suspected cerebral malaria cases, and send the case sheet, details of treatment history and the blood slide with the patient. Parenteral artemisinin derivatives or quinine should be used irrespective of the chloroquine resistance status of the area, with one of the following options:

### Chemotherapy of severe and complicated malaria

| Initial parenteral treatment for at least 48 hours: CHOOSE ONE of the following four options | Follow-up treatment, when the patient can take oral medication following parenteral treatment |
|---|---|
| **Quinine:** 20 mg quinine salt/kg body weight on admission (IV infusion or divided IM injection) followed by a maintenance dose of 10 mg/kg 8-hourly; the infusion rate should not exceed 5 mg/kg per hour. The loading dose of 20 mg/kg should not be given if the patient has already received quinine. | Quinine 10 mg/kg three times a day, with doxycycline 100 mg once a day (or clindamycin in pregnant women and children under 8 years of age), to complete 7 days of treatment. |
| **Artesunate:** 2.4 mg/kg i.v. or i.m. given on admission (time = 0), then at 12 h and 24 h, then once a day. or **Artemether:** 3.2 mg/kg bw i.m. given on admission, then 1.6 mg/kg per day. or **Arteether:** 150 mg daily i.m. for 3 days in adults only (not recommended for children). | Full oral course of area-specific ACT. In North-Eastern states: age-specific ACT-AL for 3 days + single-dose PQ on the second day. In other states: ACT-SP for 3 days + single-dose PQ on the second day. |

**Note:** Parenteral treatment in severe malaria cases should be given for a minimum of 24 hours once started (irrespective of the patient's ability to tolerate oral medication earlier than 24 hours).

### Elimination of Lymphatic Filariasis

Diagnosis is by peripheral blood smear examination for microfilariae among those who have painful swelling of the lower limbs with fever. The medical officer can play an important role in mass drug administration (MDA) by advising diethylcarbamazine (DEC) 100 mg:

- 2-5 years: 1 tablet
- 6-14 years: 2 tablets
- 15 years and above: 3 tablets

plus 1 tablet of albendazole 400 mg. Pregnant women, children less than 2 years old and people with severe illness are exempted from the programme. Individual cases can be treated with diethylcarbamazine (DEC) 6 mg/kg for 12 days. All should be advised to take personal protective measures against mosquito bites, such as insecticide-treated nets and mosquito repellents such as diethyl or dimethyl benzamide.

### Japanese Encephalitis

**How is JE diagnosed?**

Clinical: JE cases present with signs and symptoms similar to other forms of viral encephalitis and cannot be distinguished clinically; laboratory tests are needed for confirmation. However, JE can be suspected as the cause of encephalitis in a febrile illness of variable severity associated with neurological symptoms ranging from headache to meningitis or encephalitis. Symptoms can include headache, fever, meningeal signs, stupor, disorientation, coma, tremors, paralysis (generalized), hypertonia and loss of coordination.

Laboratory: Several laboratory tests are available for JE virus detection, including:

- **Antibody detection:** Haemagglutination Inhibition test (HI), Complement Fixation test (CF), Enzyme-Linked Immunosorbent Assay (ELISA) for IgG (paired) and IgM (MAC) antibodies, etc.
- **Antigen detection:** RPHA, IFA, immunoperoxidase, etc.
- **Genome detection:** RT-PCR.
- **Isolation:** tissue culture, infant mice, etc.

In view of the limitations associated with the various tests, IgM ELISA is the method of choice, provided samples are collected 3-5 days after the infection. Cases are managed symptomatically: clinical management of JE is supportive, and in the acute phase it is directed at maintaining fluid and electrolyte balance and controlling convulsions, if present. Maintenance of the airway is crucial.

- In 2006 the Government of India initiated JE vaccination as a component of the Universal Immunization Programme.
- A single dose of live attenuated JE vaccine is given subcutaneously to children between 1 and 15 years of age.
- 11 endemic districts of 4 states were included (UP, Assam, West Bengal, Karnataka).
- In Karnataka, the districts of Bellary, Kolar, Raichur, Koppal, Mandya, Dharwad and Bijapur are included.
**Dengue Fever**

**Signs & symptoms of dengue fever**

- Abrupt onset of high fever
- Severe frontal headache
- Pain behind the eyes which worsens with eye movement
- Muscle and joint pains
- Loss of sense of taste and appetite
- Measles-like rash over the chest and upper limbs
- Nausea and vomiting

**Signs & symptoms of dengue haemorrhagic fever and shock syndrome**

- Symptoms similar to dengue fever
- Severe, continuous stomach pain
- Skin becomes pale, cold or clammy
- Bleeding from the nose, mouth and gums, and skin rashes
- Frequent vomiting with or without blood
- Sleepiness and restlessness
- Patient feels thirsty and the mouth becomes dry
- Rapid, weak pulse
- Difficulty in breathing

**TREATMENT OF DENGUE & DHF**

**WHAT TO DO:**

- Cases of dengue fever/dengue haemorrhagic fever (DF/DHF) should be observed every hour.
- Serial platelet and haematocrit determinations are essential for early diagnosis of DHF: look for a drop in platelets and a rise in haematocrit.
- Timely intravenous therapy with an isotonic crystalloid solution can prevent shock and/or lessen its severity.
- If the patient's condition becomes worse despite giving 20 ml/kg/hr for one hour, replace the crystalloid solution with a colloid solution such as Dextran or plasma. As soon as improvement occurs, switch back to crystalloid.
- If improvement occurs, reduce the rate from 20 ml to 10 ml, then to 6 ml, and finally to 3 ml/kg/hr.
- If the haematocrit falls, give a blood transfusion of 10 ml/kg and then give crystalloid IV fluids at the rate of 10 ml/kg/hr.
- In case of severe bleeding, give a fresh blood transfusion of about 20 ml/kg over two hours. Then give crystalloid at 10 ml/kg/hr for a short time (30-60 minutes) and later reduce the rate.
- In case of shock, give oxygen.
- For correction of acidosis (sign: deep breathing), use sodium bicarbonate.

**WHAT NOT TO DO:**

- Do not give aspirin or Brufen for the treatment of fever.
- Avoid giving intravenous therapy before there is evidence of haemorrhage and bleeding.
- Avoid giving a blood transfusion unless indicated by a reduction in haematocrit or severe bleeding.
- Avoid giving steroids; they do not show any benefit.
- Do not use antibiotics.
- Do not change the infusion rate rapidly, i.e., avoid rapidly increasing or rapidly slowing the speed of fluids.
- Insertion of a nasogastric tube to detect concealed bleeding or to stop bleeding (by cold lavage) is not recommended, since it is hazardous.
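The fluid-rate taper above amounts to stepping down a fixed list of crystalloid rates as the patient improves. The sketch below (illustrative only; names invented) encodes just that stepping logic, not the full clinical protocol.

```python
CRYSTALLOID_RATES_ML_PER_KG_HR = [20, 10, 6, 3]

def next_fluid_step(current_rate, improving):
    """Step down the rate on improvement; a patient not improving after an
    hour at 20 ml/kg/hr is the cue to switch to colloid."""
    rates = CRYSTALLOID_RATES_ML_PER_KG_HR
    if not improving:
        return "switch to colloid (Dextran or plasma)" if current_rate == 20 else current_rate
    i = rates.index(current_rate)          # current_rate must be one of the listed rates
    return rates[min(i + 1, len(rates) - 1)]

print(next_fluid_step(20, improving=True))   # -> 10
```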
**Kala-azar**

**What are the signs & symptoms of kala-azar?**

- Recurrent fever, intermittent or remittent, often with a double rise
- Loss of appetite, pallor and weight loss with progressive emaciation
- Weakness
- Splenomegaly: the spleen enlarges rapidly to massive enlargement, usually soft and non-tender
- Liver: enlargement, not to the extent of the spleen; soft, with a smooth surface and sharp edge
- Lymphadenopathy: not very common in India
- Skin: dry, thin and scaly, and hair may be lost. Light-coloured persons show greyish discolouration of the skin of the hands, feet, abdomen and face, which gives the Indian name kala-azar, meaning "black fever"
- Anaemia: develops rapidly

Anaemia with emaciation and gross splenomegaly produces a typical appearance of these patients (Fig. 2 shows a distended abdomen due to massive splenomegaly).

**How is kala-azar diagnosed?**

- Clinical: a case of fever of more than 2 weeks' duration not responding to antimalarials and antibiotics. Clinical laboratory findings may include anaemia, progressive leucopenia, thrombocytopenia and hypergammaglobulinemia.

**What is the treatment of kala-azar?**

Kala-azar drugs available in India:

- Sodium stibogluconate (indigenous manufacture, registered for use & sale)
- Pentamidine isethionate (imported, registered for use)
- Amphotericin B (indigenous manufacture, registered for use and sale)
- Liposomal amphotericin B (indigenous manufacture & import, registered for use and sale)
- Miltefosine (imported, registered for use & sale)

The drug policy under the Kala-azar Elimination Programme follows the recommendations of the Expert Committee (2000); this drug policy is under review.

**Chikungunya**

Chikungunya usually starts suddenly with fever, chills, headache, nausea, vomiting, joint pain, and rash. In Swahili, chikungunya means "that which contorts or bends up"; this refers to the contorted (or stooped) posture of patients afflicted with the severe joint pain (arthritis) which is the most common feature of the disease. Frequently, the infection causes no symptoms, especially in children. While recovery from chikungunya is the expected outcome, convalescence can be prolonged, and persistent joint pain may require analgesics (pain medication) and long-term anti-inflammatory therapy. Infection appears to confer lasting immunity.

Chikungunya is diagnosed by blood tests (ELISA). Since the clinical appearances of chikungunya and dengue are similar, laboratory confirmation is important, especially in areas where dengue is present. Such facilities are at present available at the National Institute of Virology (NIV), Pune, and the National Institute of Communicable Diseases (NICD), Delhi.

There is no specific treatment for chikungunya. Supportive therapy that helps ease symptoms, such as administration of non-steroidal anti-inflammatory drugs and plenty of rest, may be beneficial. Infected persons should be isolated from mosquitoes as much as possible in order to avoid transmission of infection to other people. Eliminating mosquito breeding sites is another key prevention measure. To prevent mosquito bites:

- Use mosquito repellents on skin and clothing.
- When indoors, stay in well-screened areas. Use bed nets if sleeping in areas that are not screened or air-conditioned.
- When working outdoors during the daytime, wear long-sleeved shirts and long pants to avoid mosquito bites.
AN ORDINANCE OF THE CITY OF PANAMA CITY BEACH, FLORIDA, AMENDING THE CITY'S LAND DEVELOPMENT CODE; AMENDING THE REQUIREMENTS FOR TRADITIONAL OVERLAY DISTRICTS TO PERMIT THEM ON PARCELS OF 3 ACRES OR MORE IN RESIDENTIAL DISTRICTS; REPEALING ALL ORDINANCES OR PARTS OF ORDINANCES IN CONFLICT; PROVIDING FOR CODIFICATION AND PROVIDING AN IMMEDIATELY EFFECTIVE DATE.

NOW THEREFORE, BE IT ORDAINED BY THE CITY COUNCIL OF THE CITY OF PANAMA CITY BEACH:

SECTION 1. From and after the effective date of this ordinance, Section 7.02.02 of the Land Development Code of the City of Panama City Beach, related to Traditional Neighborhood Overlay Districts, is amended to read as follows (new text bold and underlined, deleted text struck through):

7.02.02 Traditional Neighborhood Overlay District

A. **District Intent:** The general intent of the Traditional Neighborhood Overlay District (TNOD) is to provide a flexible, alternative district, within the Residential and CH zoning districts, to encourage imaginative and innovative housing types and design for the unified Development of tracts of land, within the overall density and Use guidelines established herein and in the Comprehensive Plan. This overlay district is characterized by a mixture of functionally integrated housing types and non-Residential Uses as specified in this section.

B. **Mixture of Housing Types and Uses Permitted:** A Traditional Neighborhood Overlay District shall be comprised of at least three (3) acres if located in a Residential zoning category, and five (5) acres if located in a CH zone. Properties in this district are required to be developed with at least three (3) distinct types of housing units, each of which shall comprise at least ten (10) percent of the total land area dedicated to Platted Lots. Examples of distinct types or styles of housing units include Single Family cottages and bungalows, rowhouses, apartment Buildings, multi-Story Single Family Townhomes, Multi-family Dwellings and *Single Family Dwellings*. Acreage dedicated to *Streets*, stormwater, parks, etc. shall not be utilized in the calculation of the ten (10) percent *Lot* minimum. Permitted *Uses* shall be limited to those of the underlying *CH* zoning district. All of the housing types do not have to be developed at the same time, nor is one housing type a prerequisite to another housing type. For the purpose of this section, "properties" refers to the overall parent *Parcel* of land that is assigned the Traditional Neighborhood Overlay district, and not individual *Lots* within the parent *Parcel* of land. Whenever property designated for a Traditional Neighborhood shall not be subject to an approved Master Plan as hereinafter provided, or upon invalidation of such a Master Plan, the property shall be subject to all land *Development* regulations applicable to the underlying *CH* zoning district generally, as amended from time to time. For the purpose of this section, the Planning Board may recommend to the City Council, for approval and inclusion in section 7.02.02D, regulations uniformly applicable to *Manufactured Homes* requiring such foundations, building materials, *Roof* slopes and skirting as will ensure structural and aesthetic compatibility with site-built homes.
*In CH zoning districts,* *Non-residential Uses* shall be permitted, but not encouraged, in a Traditional Neighborhood Overlay District, provided that the applicant can demonstrate that such *Uses* are not only compatible with *Residential Use* but also affirmatively encourage *Residential Use*, such as live-in shops or offices.

**C. Density/Intensity**

1. *Residential Land Use* shall not exceed the gross density of the underlying *CH* zoning district.
2. The following intensity standards shall also apply:
(a) **Impervious coverage ratio:** Maximum of seventy (70) percent of *Lot* area. *Up to 100% impervious coverage of Lot area may be permitted if the impervious coverage for the overall development tract does not exceed seventy (70) percent.*
(b) **Floor Area Ratio** (non-residential *Use* only): Maximum permitted by the underlying *CH* zoning district regulation.
(c) **Building Height:** Maximum permitted by the underlying *CH* zoning district regulation.
(d) **Open Space:** Minimum of thirty (30) percent of *Lot* area.
(e) Nothing in this section shall be utilized as a basis to exceed the maximum densities or intensities mandated by the City's Comprehensive Plan.

**D. Development Standards and Procedures for Approval:** Upon approval by the Planning Board as provided in this subsection and approval of a Plat by the City Council in accordance with the LDC, the Traditional Neighborhood Overlay District is intended to permit such variation in Lot size, shape, width, depth, roadway standards and Building Setbacks as will not be inconsistent with the Comprehensive Plan and the density/intensity standards specified in this subsection, and as will ensure compatibility with adjoining Development and adjoining Land Uses. Innovative Development standards and principles are encouraged. The following Lot and Building standards shall apply:

(a) Minimum Lot Area: 1,250 square feet
(b) Minimum Lot Width at Front Setback: 25 feet
(c) Minimum Front Yard: 5 feet for roads internal to the Development. A minimum Setback of 25 feet is required adjacent to public roads that abut properties external to the Development.
(d) Minimum Side Yard:
- Interior (to the Development): 0 feet
- Exterior (adjacent to Parcels exterior to the Development): One Story: 5 feet; Two Stories: 7½ feet; Three Stories: 10 feet; Four Stories and Over: 10 feet plus 4 inches per each foot of Building Height over 40 feet
(e) Minimum Side Yard, Street: 5 feet for roads internal to the Development; 15 feet adjacent to public roads that abut properties external to the Development.
(f) Minimum Rear Yard:
- Interior (to the Development): 0 feet
- Exterior (adjacent to Parcels exterior to the Development): 10 feet plus 4 inches per each foot of Building Height over 40 feet
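For illustration only (this sketch is not part of the ordinance, and the names are invented), the height-dependent exterior setbacks in items (d) and (f) reduce to the following arithmetic.

```python
def exterior_setback_ft(base_ft, building_height_ft):
    """Base setback plus 4 inches for each foot of building height over
    40 feet, expressed in feet."""
    extra_inches = 4 * max(0, building_height_ft - 40)
    return base_ft + extra_inches / 12

# Rear yard, 55-foot building: 10 + (4 * 15) / 12 = 15.0 feet
print(exterior_setback_ft(10, 55))
```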
**E. Master Plan:** A Master Plan shall be submitted by all owners of the property to be subjected to the Master Plan (collectively the "applicant") to the Building and Planning Department for review by the Planning Board. The Master Plan shall include, but not be limited to, all of the following:

1. A statement of objectives describing the general purpose and character of the proposed Development, including type of structures, Uses, Lot sizes and Setbacks.
2. A vicinity map showing the location of the proposed Development.
3. A boundary survey and legal description of the property.
4. A detailed perimeter buffering and landscaping plan.
5. Locations and sizes of Land Uses, including a plan graphically depicting the location, height, density, intensity and massing of all Buildings. The plan shall additionally depict the location of all parking areas, Access points, points of connectivity to surrounding neighborhoods, and similar areas that will be utilized for any purpose other than landscaping.
6. Location, type and density of housing types.
7. Detail of proposed roadway standards.
8. Type of zoning districts and existing Uses abutting the proposed Traditional Neighborhood Overlay district boundaries.
9. A detailed, written list and complete explanation of how the proposed Traditional Neighborhood is consistent with the requirements of this section.
10. The timeline for Development of the Traditional Neighborhood, including Development phases if applicable, and setting forth benchmarks for monitoring the progress of construction of each phase, which benchmarks shall include, wherever applicable, land clearing, soil stabilization, construction of each landscaping element, of horizontal infrastructure (roads, utilities, drainage, et cetera) and of vertical infrastructure and improvements. The Final Development Plan shall be submitted within one (1) year of Master Plan approval. The timeline must show that construction of the horizontal improvements will be commenced and substantially completed within one (1) year and two (2) years, respectively, after approval of the Final Development Plan; provided that in the event the Traditional Neighborhood is divided into phases, the timeline must show that construction of Phase I horizontal improvements will be commenced and substantially completed within one (1) year and two (2) years, respectively, after approval of the Final Development Plan, and that the horizontal infrastructure for all remaining phases will be substantially completed within four (4) years after approval of the first Final Development Plan. In addition, the timeline must provide that ninety (90) percent of the land area of the Traditional Neighborhood, excluding horizontal infrastructure, will be built out to its intended, final Use within ten (10) years.
11. Other applicable information as required on the Application for Master Plan Approval.

**F. Master Plan is Conceptual:** This section shall not be construed so as to require detailed engineering or *Site Plan* drawings as a prerequisite to approval by the Planning Board. An applicant may provide a concept plan showing the general types and locations of proposed *Development*, *Open Space*, conservation areas, etc. (a bubble plan); however, detailed drawings and information consistent with the approved Master Plan will be required prior to issuance of a *Local Development Order* for any phase(s) of *Development*. In the event that the Master Plan contains no provision for a particular matter that is regulated in the underlying CH district, then the *Local Development Order* shall be consistent with both the approved Master Plan and all regulations applicable within the underlying CH district generally.

**G. Master Plan Approval Not by Right:** A property owner has no legal right to approval of a Master Plan. Rather, the *City* shall approve a Master Plan only when it has determined that the applicant has demonstrated, to the satisfaction of the *City*, that the Master Plan provides a sufficient *Development* plan that provides a mixture of housing types, is compatible with adjacent properties, is consistent with this section and applicable local, state and federal regulations, and is consistent with the Comprehensive Plan.
**H. Conditions of Approval:** In order to approve a Master Plan or any revision thereto, the Planning Board shall first determine, in a public hearing after notice, that the following conditions (among others it deems appropriate) are met by the applicant:

1. That the *Development* is planned as one complex *Land Use* rather than as an aggregation of individual and unrelated *Buildings* and *Uses*.
2. That the applicant has met the intent of this section by allocating sufficient acreage for *Development* of at least three housing types as listed in section 7.02.02B.

**I. Progress Report to Planning Board:** Upon Master Plan approval, the applicant shall submit a Progress Report to the Planning Board no later than the dates stated in the Master Plan. The Progress Report shall give a summary of the *Development* of the Traditional Neighborhood to date, including the number of *Dwelling Units*, protection of natural resources, unanticipated events that have taken place, and other benchmarks that measure progress in completing the approved Master Plan.

**J. Revisions to an Approved TNOD Master Plan:** Revisions to an approved *TNOD* Master Plan shall be made in accordance with section 10.15.00 of this *LDC*.

**K. Final Development Plan:** Either concurrently with or within one (1) year following zoning and Master Plan approval, all the owners of all or a portion of the property subject to the Master Plan shall submit one or more Final Development Plans covering all or part of the approved Master Plan. In the event that all the owners of the property subject to the Master Plan are not required to submit a Final Development Plan for a portion of the approved Master Plan, the remaining owners must at least consent in writing to that Final Development Plan. The Final Development Plan shall be reviewed by the Building and Planning Department for consistency with the approved Master Plan. A Local Development Order may be issued if the Department finds the Final Development Plan consistent with the Master Plan.

1. The Final Development Plan shall include all of the following:
(a) Boundary survey and legal description of the property.
(b) A vicinity map showing the location of the proposed Development.
(c) The location of all proposed Building sites, including the height of structures and Setbacks, indicating the distance from property lines, proposed and existing Streets, other Buildings and other man-made or natural features which would be affected by Building Encroachment.
(d) A table showing the acreage for each Land Use category, housing types and the average Residential density.
(e) Lot sizes.
(f) Common Open Spaces that are Useable and operated by the developer or dedicated to a homeowner association or similar group. Common Open Space may contain such Recreational structures and improvements as are desirable and appropriate for the common benefit and enjoyment of residents of the Traditional Neighborhood.
(g) All Streets, thoroughfares, Access ways and pedestrian interconnections shall be designed to effectively relate to the major thoroughfares and maintain the capacity of existing and future roadways. Consistency with this requirement shall be determined by the Engineering Department.
(h) Development adjacent to existing Residential areas or areas zoned for Residential Use shall be designed to reduce intrusive impact upon the existing Residential Uses.
(i) Development shall be clustered away from environmentally sensitive features onto less environmentally sensitive features.
Gross densities shall be calculated on the overall site.
(j) A utility service plan including sanitary sewer, storm drainage and potable water.
(k) A statement indicating the type of legal instruments that will be created to provide for management of common areas.
(l) If the project is to be phased, the boundaries of each phase shall be indicated.

2. Construction and Development of the Traditional Neighborhood shall be completed in strict compliance with the timeline set forth in the Master Plan. The Planning Board may, upon good cause shown at a regular or special meeting, extend the period for beginning and completing construction of any benchmark, provided that the aggregate of all such extensions shall not exceed a period of one (1) year. Further extensions of time to complete a benchmark shall require an amendment to the Master Plan to amend the timeline.

3. Unified Ownership: A property must be under single ownership or under unified control at the time the Traditional Neighborhood Overlay district is assigned, the Master Plan is approved and the Local Development Order is approved.

4. Interpretations: Any interpretation by the City staff in the review of the Final Development Plan may be appealed to the Planning Board.

(Ord. No. 925, §1, 2-24-05) (Ord. #1254, 11/14/13)

SECTION 2. All ordinances or parts of ordinances in conflict herewith are repealed to the extent of such conflict.

SECTION 3. The appropriate officers and agents of the City are authorized and directed to codify, include and publish in electronic format the provisions of this Ordinance within the Panama City Beach Land Development Code, and unless a contrary ordinance is adopted within ninety (90) days following such publication, the codification of this Ordinance shall become the final and official record of the matters herein ordained. Section numbers may be assigned and changed whenever necessary or convenient.

SECTION 4. This Ordinance shall take effect immediately upon passage.

PASSED, APPROVED AND ADOPTED at the regular meeting of the City Council of the City of Panama City Beach, Florida, this 13th day of December, 2018.

ATTEST:

CITY CLERK

EXAMINED AND APPROVED by me this 13th day of December, 2018.

Published in the Panama City News Herald on the 26th day of November, 2018.

Posted on pcbgov.com on the 14th day of December, 2018.
Locality sensitive hashing: a comparison of hash function types and querying mechanisms

Loïc Paulevé\textsuperscript{a,*}, Hervé Jégou\textsuperscript{b}, Laurent Amsaleg\textsuperscript{c}

\textsuperscript{a} ENS Cachan, Antenne de Bretagne, Campus de Ker Lann, Avenue R. Schuman, 35170 Bruz, France
\textsuperscript{b} INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France
\textsuperscript{c} CNRS/IRISA, Campus de Beaulieu, 35042 Rennes Cedex, France

Published in Pattern Recognition Letters, Elsevier, 2010, 31 (11), pp. 1348-1358. doi:10.1016/j.patrec.2010.04.004

Abstract

It is well known that high-dimensional nearest-neighbor retrieval is very expensive. Dramatic performance gains are obtained using approximate search schemes, such as the popular Locality-Sensitive Hashing (LSH). Several extensions have been proposed to address the limitations of this algorithm, in particular by choosing more appropriate hash functions that better partition the vector space. All the proposed extensions, however, rely on a \textit{structured} quantizer for hashing, which fits real data sets poorly and limits their performance in practice. In this paper, we compare several families of space hashing functions in a real setup, namely when searching for high-dimensional SIFT descriptors. The comparison of random projections, lattice quantizers, k-means and hierarchical k-means reveals that an \textit{unstructured} quantizer significantly improves the accuracy of LSH, as it closely fits the data in the feature space. We then compare two querying mechanisms introduced in the literature with the one originally proposed in LSH, and discuss their respective merits and limitations.

1. Introduction

Nearest neighbor search is inherently expensive due to the \textit{curse of dimensionality} (Böhm et al., 2001; Beyer et al., 1999). This operation is required by many pattern recognition applications. In image retrieval or object recognition, the numerous descriptors of an image have to be matched with those of a descriptor dataset (direct matching) or a codebook (in bag-of-features approaches). Approximate nearest-neighbor (ANN) algorithms are an interesting way of dramatically improving the search speed, and are often a necessity. Several \textit{ad hoc} approaches have been proposed for vector quantization (see Gray and Neuhoff, 1998, for references), when finding the exact nearest neighbor is not mandatory as long as the reconstruction error is limited. More specific ANN approaches performing content-based image retrieval using local descriptors have been proposed (Lowe, 2004; Leijse et al., 2006). Overall, one of the most popular ANN algorithms is the Euclidean Locality-Sensitive Hashing (E2LSH) (Datar et al., 2004; Shakhnarovich et al., 2006). LSH has been successfully used in several multimedia applications (Ke et al., 2004; Shakhnarovich et al., 2006; Matei et al., 2006).

Space hashing functions are the core element of LSH. Several types of hash functions have recently been proposed to improve the performance of E2LSH, including the Leech lattices (Shakhnarovich et al., 2006) and E8 lattices (Jégou et al., 2008a), which offer strong quantization properties, and spherical hash functions (Terasawa and Tanaka, 2007) for unit-norm vectors.
These structured partitions of the vector space are regular, i.e., the regions are of equal size regardless of the density of the vectors. Their advantage over the original algorithm is that they exploit the vectorial structure of the Euclidean space, offering a partitioning of the space which is better suited to the Euclidean metric than the separable partitioning implicitly used in E2LSH.

One of the key problems when using such a regular partitioning is the lack of adaptation to real data. In particular, the underlying vector distribution is not taken into account. Besides, simple algorithms such as a k-means quantizer have been shown to provide excellent vector search performance in the context of image and video search, in particular when used in the so-called \textit{Video-Google} framework of Sivic and Zisserman (2003), where the descriptors are represented by their quantized indexes. This framework was shown to be equivalent to an approximate nearest-neighbor search combined with a voting scheme in Jégou et al. (2008b). However, to our knowledge this k-means based partitioning has never been evaluated in the LSH framework, where multiple hash functions reduce the probability that a vector is missed, similar to what is done when using randomized trees (Muja and Lowe, 2009).

The first contribution of this paper is to analyze the individual performance of different types of hash functions on real data. For this purpose, we introduce appropriate performance metrics, as the one usually proposed for LSH, namely the "$\varepsilon$-sensitivity", does not properly reflect the objective function used in practice. For the data, we focus on the established SIFT descriptor (Lowe, 2004), which is now the standard for local image description. Typical applications of this descriptor are image retrieval (Lowe, 2004; Nistér and Stewénius, 2006; Jégou et al., 2008b), stitching (Brown and Lowe, 2007) and object classification (Zhang et al., 2007). Our analysis reveals that the k-means quantizer, i.e., an unstructured quantizer learned on a vector training set, behaves significantly better than the hash functions used in the literature, at the cost of increased pre-processing. Note that several approximate versions of k-means have been proposed to improve the efficiency of pre-processing the query; we analyze the performance of one of them, the popular hierarchical k-means (Nistér and Stewénius, 2006), in this context.

Second, inspired by state-of-the-art methods that have been shown to increase the quality of the results returned by the original LSH scheme, we propose two variants of the k-means approach offering different trade-offs in terms of memory usage, efficiency and accuracy. The relevance of these variants is shown to depend on the database size. The first one, multi-probe LSH (Lv et al., 2007), decreases the query preparation cost. It is therefore of interest for datasets of limited size. The second variant, query-adaptive LSH (Jégou et al., 2008a), improves the expected quality of the returned vectors at the cost of increased pre-processing. It is of particular interest when the number of vectors is huge. In that case, the query preparation cost is negligible compared to that of post-processing the vectors returned by the indexing structure.

This paper is organized as follows.
Section 2 briefly describes the LSH algorithm, the evaluation criteria used to measure the performance of ANN algorithms in terms of efficiency, accuracy and memory usage, and presents the dataset used in all performance experiments. An evaluation of individual hash functions is proposed in Section 3. We finally present the full k-means LSH algorithm in Section 4, together with the two variants for the querying mechanism.

2. Background

This section first briefly presents the background material for LSH that is required to understand the remainder of this paper. We then detail the metrics we use to discuss the performance of all approaches. We finally present the dataset derived from real data that is used to perform the experiments.

2.1. Locality sensitive hashing

Indexing $d$-dimensional descriptors with the Euclidean version E2LSH of LSH (Shakhnarovich et al., 2006) proceeds as follows. The $n$ vectors of the dataset to index are first projected onto a set of $m$ directions characterized by the $d$-dimensional vectors $(a_i)_{1 \leq i \leq m}$ of norm 1. Each direction $a_i$ is randomly drawn from an isotropic random generator. The $n$ descriptors are projected using $m$ hash functions, one per $a_i$. The projections of the descriptor $x$ are defined as
$$h_i(x) = \left\lfloor \frac{\langle x | a_i \rangle - b_i}{w} \right\rfloor, \tag{1}$$
where $w$ is the quantization step chosen according to the data (see Shakhnarovich et al., 2006). The offset $b_i$ is uniformly generated in the interval $[0, w)$ and the inner product $\langle x | a_i \rangle$ is the projected value of the vector onto the direction $a_i$. The $m$ hash functions define a set $\mathcal{H} = \{h_i\}_{1 \leq i \leq m}$ of scalar hashing functions. To improve the hashing discriminative power, a second level of hash functions, based on $\mathcal{H}$, is defined. This level is formed by a family of $l$ functions constructed by concatenating several functions from $\mathcal{H}$. Hence, each function $g_j$ of this family is defined as
$$g_j = (h_{j,1}, \ldots, h_{j,d^*}), \tag{2}$$
where the functions $h_{j,i}$ are randomly chosen from $\mathcal{H}$. Note that this hash function can be seen as a quantizer defined on a subspace of dimension $d^*$. At this point, a vector $x$ is indexed by a set of $l$ vectors of integers $g_j(x) = (h_{j,1}(x), \cdots, h_{j,d^*}(x)) \in \mathbb{Z}^{d^*}$. The next step stores the vector identifier within the cell associated with this vector value $g_j(x)$. Note that additional steps aimed at avoiding the collision in hash-buckets of distant vectors are performed, but they can be ignored in the remainder of this paper; see (Shakhnarovich et al., 2006) for details. At run time, the query vector $q$ is also projected onto each random line, producing a set of $l$ integer vectors $\{g_1(q), \cdots, g_l(q)\}$. From that set, $l$ buckets are determined. The identifiers of all the vectors from the database lying in these buckets make the result short-list. The nearest neighbor of $q$ is found by performing an exact search ($L_2$) within this short-list, though one might prefer using another criterion to adapt to a specific problem, as done in (Casey and Slaney, 2007). For large datasets, this last step is the bottleneck in the algorithm. However, in practice, even the first steps of E2LSH can be costly, depending on the parameter settings, in particular on $l$.
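To make the two-level construction concrete, the following minimal Python sketch (assuming numpy; the helper names `e2lsh_params`, `scalar_hashes` and `g_hash` are ours, not part of any E2LSH release) implements Equations (1) and (2):

```python
import numpy as np

def e2lsh_params(d, m, w, rng):
    """Draw m random directions of norm 1 and m offsets b_i in [0, w)."""
    A = rng.standard_normal((m, d))                # isotropic random draw
    A /= np.linalg.norm(A, axis=1, keepdims=True)  # normalize each direction
    b = rng.uniform(0.0, w, size=m)
    return A, b

def scalar_hashes(x, A, b, w):
    """All first-level hashes h_i(x) = floor((<x|a_i> - b_i) / w), Eq. (1)."""
    return np.floor((A @ x - b) / w).astype(int)

def g_hash(x, A, b, w, idx):
    """One second-level hash g_j(x): the concatenation of the d* scalar
    hashes selected by the index list idx, Eq. (2); the tuple is used as
    the bucket key of x."""
    return tuple(scalar_hashes(x, A, b, w)[idx])

rng = np.random.default_rng(0)
A, b = e2lsh_params(d=128, m=20, w=10.0, rng=rng)
idx = rng.choice(20, size=8, replace=False)        # one g_j with d* = 8
key = g_hash(rng.standard_normal(128), A, b, w=10.0, idx=idx)
```

In a complete index, $l$ such tuples of component indexes are drawn, one per function $g_j$, and each vector identifier is stored in the $l$ corresponding hash-buckets.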
2.2. Performance Metrics

LSH and its derivatives (Gionis et al., 1999; Terasawa and Tanaka, 2007; Andoni and Indyk, 2006) are usually evaluated using the so-called “$\varepsilon$-sensitivity”, which gives an intrinsic theoretical performance of the hash functions. However, for real data and applications, this measure does not reflect the objectives which are of interest in practice: how costly will the query be, and how likely is the true nearest neighbor of the query point to be in the result? Hereafter, we address these practical concerns through the use of the following metrics:

**Accuracy: recall.** Measuring the quality of the results returned by ANN approaches is central. In this paper, we measure the impact of various hashing policies and querying mechanisms on the accuracy of the result. To have a simple, reproducible and objective baseline, we solely measure the accuracy of the result by checking whether the nearest neighbor of each query point is in the short-list or not. This measure then corresponds to the probability that the nearest neighbor is found if an exhaustive distance calculation is performed on the elements of the short-list. From an application point of view, it corresponds for instance to the case where we want to assign a SIFT descriptor to a visual word, as done in (Sivic and Zisserman, 2003). The recall of the nearest neighbor retrieval process is measured, on average, by aggregating this observation (0 or 1) over a large number of queries. Note that we could instead have performed a $k$-nn retrieval for each query. Sticking to the nearest neighbor avoids choosing an arbitrary value for $k$.

**Search complexity.** Finding out how costly an ANN approach is has major practical consequences for real-world applications. Therefore, it is key to measure the overall complexity of the retrieval process in terms of resource consumption. The LSH algorithm, regardless of the hashing options, has two major phases, each having a different cost:

- **Phase 1: Query preparation cost.** With LSH, some processing of the query vector $q$ must take place before the search can probe the index. The query descriptor must be hashed into a series of $l$ values. These values identify the hash-buckets that the algorithm will subsequently analyze in detail. Depending on the hash function type, the cost of hashing operations may vary significantly. It is a function of the number of inner products and/or comparisons with database vectors to perform in order to eventually get the $l$ values. That *query preparation cost* is denoted by $qpc$ and is given in Section 3 for several hash function types.

- **Phase 2: Short-list processing cost: selectivity.** Depending on the density of the space near $q$, the number of vectors found in the $l$ hash-buckets may vary significantly. For a given $q$, we can observe the *selectivity* of the query, denoted by $sel$.

**Definition:** The selectivity $sel$ is the fraction of the data collection that is returned in the short-list, on average, by the algorithm. In other words, multiplying the selectivity by the number of indexed vectors gives the expected number of elements returned as potential nearest neighbors by the algorithm. The number of memory cells to read and the cost of processing the short-list are both linear functions of the short-list length, hence of $sel$. In the standard LSH algorithm, it is possible to estimate this selectivity from the probability mass function of hash values, as discussed later in Subsection 3.5.
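In our experiments, both metrics are estimated empirically over the query set. A minimal sketch of this measurement (in Python; `search` stands for any of the querying mechanisms discussed later and is a hypothetical callable returning a short-list of candidate identifiers):

```python
def evaluate(search, queries, true_nn, n_database):
    """Average nearest-neighbor recall and selectivity over a query batch."""
    hits, returned = 0, 0
    for q, nn in zip(queries, true_nn):
        short_list = search(q)             # candidate identifiers for q
        hits += int(nn in short_list)      # 0/1: is the true NN retrieved?
        returned += len(short_list)
    recall = hits / len(queries)
    sel = returned / (len(queries) * n_database)  # fraction of the collection
    return recall, sel
```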
If exhaustive distance calculation is performed on the short-list returned by LSH, the overall cost for retrieving the ANN of a query vector is expressed as
$$ocost = sel \times n \times d + qpc. \tag{3}$$
An interesting measure is the acceleration factor $ac$ over exhaustive search, which is given by
$$ac = \frac{n \times d}{ocost} = \frac{1}{sel + \frac{qpc}{n \times d}}. \tag{4}$$
For very large vector collections, the selectivity term is likely to dominate the query preparation cost in this equation, as hash-buckets tend to contain many vectors. For instance, with $n = 10^6$, $d = 128$, $sel = 10^{-3}$ and a negligible $qpc$, the acceleration factor is about 1000. This is the rationale for using the selectivity as the main measurement.

**Memory usage.** The complexity of the search also includes the usage of main memory. In this paper, we assume that the complete LSH data structure fits in main memory. Depending on the strategy for hashing the vectors, more or less main memory is needed. As the memory occupation has a direct impact on the scalability of search systems, it is worth noticing that in LSH, this memory usage is proportional to the number of hash functions considered and to the number of database vectors:
$$\text{memory usage} = O(l \times n). \tag{5}$$
The number of hash functions used in LSH will hence be used as the main measurement of the memory usage.

### 2.3. Dataset

Our vector dataset is extracted from the publicly available INRIA Holidays dataset\footnote{http://lear.inrialpes.fr/people/jegou/data.php}, which is composed of high-definition real holiday photos. There are many series of images with a large variety of scene types (natural, man-made, water and fire effects, etc.). Each series contains somewhat visually similar images, differing however due to various rotations, viewpoint and illumination changes of the same scenes taken by the photographers. These images have been described using the SIFT descriptor (Lowe, 2004), for which retrieving the nearest neighbors is a very computationally-demanding task. SIFT descriptors have been obtained on these images using the affine co-variant features extractor of Mikolajczyk and Schmid (2004). When using the standard parameters of the literature, the dimensionality of SIFT descriptors is $d = 128$. The descriptor collection used in this paper is a subsample of 1 million descriptors randomly picked from the descriptors of the image dataset. We also randomly picked 10,000 descriptors used as queries, and another set of 1 million vectors extracted from a distinct image dataset (downloaded from Flickr) for the methods requiring a learning stage. Finally, we ran exact searches using the Euclidean distance to get the true nearest neighbor of each of these query descriptors. This ground-truth is the one against which ANN searches are compared.

### 3. Hash function evaluation

This section first discusses the key design principles behind four types of hash functions and the key parameters that matter for evaluating their performance in the context of LSH. We first recall the original design, where hash functions are based on random projections. Following recent literature (Andoni and Indyk, 2006; Jégou et al., 2008a), we then describe high-dimensional lattices used for spatial hashing. These two types of hash functions belong to the family of *structured* quantizers, and therefore do not capture the peculiarities of the data collection's distribution in space. To contrast with these approaches, we then discuss the salient features of a k-means *unstructured* quantizer for hashing, as well as one of its popular tree-based variants.
Overall, choosing a hash function in LSH amounts to modifying the definition of $g$ introduced in Section 2.1. Having laid out these design methods, we then evaluate how each type of hash function performs on the real data collection introduced above. The performance is evaluated for a single hash function. This accurately reflects the intrinsic properties of each hash function, while avoiding the introduction of the parameter $l$ (the number of distinct hash functions).

### 3.1. Random projection based

The foundations of the hash functions used in the original E2LSH approach have been presented in Section 2.1. Overall, the eventual quantization of the data space is the result of a product of unbounded scalar quantizers. The key parameters influencing the performance of each E2LSH hash function are:

- the quantization step $w$;
- the number $d^*$ of components used in the second-level hash functions $g_j$.

As the parameters $b_i$ and $m$ are provided to improve the diversity between different hash functions, they are arbitrarily fixed, as we only evaluate the performance of a single hash function. The values chosen for these parameters do not noticeably impact the selectivity, though large values of $m$ linearly impact the query preparation cost. This cost remains low for this structured hashing method.

### 3.2. Hashing with lattices

Lattices have been extensively studied in mathematics and physics. They were also shown to be of high interest in quantization (Gray and Neuhoff, 1998; Conway and Sloane, 1982b). For a uniform distribution, they give better performance than scalar quantizers (Gray and Neuhoff, 1998). Moreover, finding the nearest lattice point of a vector can be performed with an algebraic method (Agrell et al., 2002). This is referred to as *decoding*, due to its application in compression. A lattice is a discrete subset of $\mathbb{R}^{d'}$ defined by a set of vectors of the form
$$\{x = u_1 a_1 + \cdots + u_d a_d \,|\, u_1, \cdots, u_d \in \mathbb{Z}\} \tag{6}$$
where $a_1, \cdots, a_d$ are linearly independent vectors of $\mathbb{R}^{d'}$, $d' \geq d$. Hence, denoting by $A = [a_1 \cdots a_d]$ the matrix whose columns are the vectors $a_j$, the lattice is the set of vectors spanned by $Au$ when $u \in \mathbb{Z}^d$. With this notation, a point of a lattice is uniquely identified by the integer vector $u$. Lattices offer a regular infinite structure. The Voronoi region around each lattice point has identical shape and volume (denoted by $\mathcal{V}$) and is called the *fundamental region*. By using lattice-based hashing, we aim at exploiting their spatial consistency: any two points decoded to the same lattice point are separated by a bounded distance, which depends only on the lattice definition. Moreover, the maximum distance between points inside a single lattice cell tends to be smaller for some particular lattices. In the rest of this paper, we refer to this phenomenon as the *vectorial gain*. Vectorial gain is strongly related to the *density* of lattices. The density of a lattice is the ratio between the volume $\mathcal{V}$ of the fundamental region and the volume of its inscribed sphere. Basically, considering Euclidean lattices, the closer to 1 the density, the closer to a sphere the fundamental region, and the greater the vectorial gain. Figure 1 illustrates the vectorial gain for two 2-d lattices having fundamental regions of identical volume.
In other terms, if $L_2(x,y)$ is the Euclidean distance between $x$ and $y$, and $\mathcal{V}_a$ (respectively $\mathcal{V}_b$) is the closed domain of vectors belonging to the region depicted in Figure 1(a) (respectively Figure 1(b)), then:
$$\max_{x_a \in \mathcal{V}_a, y_a \in \mathcal{V}_a} L_2(x_a, y_a) \gg \max_{x_b \in \mathcal{V}_b, y_b \in \mathcal{V}_b} L_2(x_b, y_b) \tag{7}$$
where $\int_{x_a \in \mathcal{V}_a} dx_a = \int_{x_b \in \mathcal{V}_b} dx_b$ (i.e., for identical volumes). In this paper, we focus on some particular lattices for which fast decoding algorithms are known. These algorithms take advantage of the simplicity of the lattice definition. We briefly introduce the lattices $D_d$, $D^+_d$ and $A_d$. More details can be found in Conway et al. (1987, chap. 4).

- **Lattice** $D_d$ is the subset of vectors of $\mathbb{Z}^d$ having an even sum of the components:
$$D_d = \{(x_1, \cdots, x_d) \in \mathbb{Z}^d : \sum_{i=1}^{d} x_i \text{ even}\}, \quad d \geq 3. \tag{8}$$

- **Lattice** $D^+_d$ is the union of the lattice $D_d$ with the lattice $D_d$ translated by adding $\frac{1}{2}$ to each coordinate of the lattice points. That translation is denoted by $\frac{1}{2} + D_d$:
$$D^+_d = D_d \cup \left(\tfrac{1}{2} + D_d\right). \tag{9}$$
When $d = 8$, this lattice is also known as $E_8$, which offers the best quantization performance for uniform 8-dimensional vectors.

- **Lattice** $A_d$ is the subset of vectors of $\mathbb{Z}^{d+1}$ lying on the $d$-dimensional hyper-plane where the sum of the components is null:
$$A_d = \{(x_0, x_1, \cdots, x_d) \in \mathbb{Z}^{d+1} : \sum_{i=0}^{d} x_i = 0\}. \tag{10}$$
A vector $q$ belonging to $\mathbb{R}^d$ can be mapped to its $(d+1)$-dimensional coordinates by multiplying it on the right by the $d \times (d+1)$ matrix:
$$\begin{pmatrix} -1 & 1 & 0 & \cdots & 0 & 0 \\ 0 & -1 & 1 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & -1 & 1 \end{pmatrix}. \tag{11}$$

For these lattices, finding the nearest lattice point of a given query vector is done in a number of steps that is linear in its dimension (Conway and Sloane, 1982a). The main parameters of a lattice hash function are:

- the scale parameter $w$, which is similar to the quantization step for random projections;
- the number $d^*$ of components used.

Hashing the data collection using a lattice requires first to randomly pick $d^*$ components among the original $d$ dimensions—the natural axes are preserved. Then, given $w$, the appropriate lattice point is assigned to each database vector. The index therefore groups all vectors with the same lattice point identifier into a single bucket.

**Remark:** The Leech lattice used in Shakhnarovich et al. (2006) has not been considered here for two reasons. First, it is defined for $d^* = 24$ only, failing to provide any flexibility when optimizing the choice of $d^*$ for performance. Second, its decoding requires significantly more operations compared to the other lattices: 3595 operations per lattice point (Vardy and Be'ery, 1993).\footnote{Note, however, that this number is small compared to what is needed for unstructured quantizers.}
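As an illustration of how cheap lattice decoding is, the following sketch decodes $D_d$ following the classical rounding procedure of Conway and Sloane (1982a): round every coordinate to the nearest integer and, if the parity constraint is violated, re-round the worst-rounded coordinate the other way. The helper names are ours:

```python
import numpy as np

def decode_D(x):
    """Nearest point of the lattice D_d to x: componentwise rounding,
    with one coordinate re-rounded if the sum of the components is odd."""
    f = np.rint(x)                           # nearest integer vector
    if int(f.sum()) % 2 != 0:                # parity constraint violated
        err = x - f                          # rounding errors
        k = int(np.argmax(np.abs(err)))      # worst-rounded coordinate
        f[k] += 1.0 if err[k] > 0 else -1.0  # round it the other way
    return f.astype(int)

def lattice_hash(x, dims, w):
    """Bucket key: decode the d* selected components of x, scaled by w."""
    return tuple(decode_D(np.asarray(x)[dims] / w))
```

Decoding $D^+_d$ amounts to decoding both $D_d$ and its translated copy $\frac{1}{2} + D_d$ and keeping the closer of the two points.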
### 3.3. k-means vector quantizer

Up to now, we have only considered structured quantizers, which do not take into account the underlying statistics of the data, except through the choice of the parameters $w$ and $d^*$. To address this problem, we propose to use an unstructured quantizer learned on a representative set of the vectors to index. Formally, an unstructured quantizer $g$ is defined as a function
$$\begin{align*} \mathbb{R}^d & \rightarrow \{1, \ldots, k\} \\ x & \mapsto g(x) = \arg \min_{i=1..k} L_2(x, c(i)) \end{align*} \tag{12}$$
mapping an input vector $x$ to a cell index $g(x)$. The integer $k$ is the number of possible values of $g(x)$. The vectors $c(i)$, $1 \leq i \leq k$, are called centroids and suffice to define the quantizer. To construct a good unstructured quantizer, a natural choice is the popular $k$-means clustering algorithm. In that case, $k$ corresponds to the number of clusters. This algorithm minimizes\footnote{This minimization is only guaranteed to find a local minimum.} the overall distortion of reconstructing a given vector of the learning set using its nearest centroid from the codebook, hence exploiting the underlying distribution of the vectors. Doing so, the full potential of vector quantization is obtained, since the quantizer is able to exploit the vectorial gain. Note that, by contrast to the structured quantizers, there is no random selection of the vector components. Hence, the hashing dimension $d^*$ is equal to the vector dimension $d$, as the quantizer is learned directly on the vector space. However, learning a $k$-means quantizer may take a long time when $k$ is large. In practice, bounding the number of iterations speeds up the learning stage without significantly impacting the results. In the following, we have set the maximum number of iterations to 20 for SIFT descriptors, as higher values provide comparable results.

### 3.4. Hierarchical k-means

Approximate variants of the $k$-means quantizer and the corresponding centroid assignment have been proposed (Nistér and Stewénius, 2006; Philbin et al., 2007) to reduce both the learning stage and the query preparation costs. We evaluate the hierarchical $k$-means (HKM) of (Nistér and Stewénius, 2006), which is one of the most popular approaches. The method consists in computing a $k$-means with a relatively small $k$, and recursively computing a $k$-means for each internal node until a pre-defined tree height is reached. This produces a balanced tree structure, where each internal node is connected to a fixed number of centroids. The search is performed top-down by recursively finding the nearest centroid until a leaf is reached; see the sketch below. The method uses two parameters:

- the height $h_t$ of the tree;
- the branching factor $b_t$.

The total number of centroids (leaves) is then obtained as $(b_t)^{h_t}$.

**Remark:** The method used in (Philbin et al., 2007) relies on randomized trees. This method was improved in (Muja and Lowe, 2009) by automatic tuning of the parameters, and was shown to outperform HKM, leading to results comparable to those of a standard $k$-means. Therefore, the results we give here for the $k$-means are a good approximation of the selectivity/recall trade-off that the package of (Muja and Lowe, 2009) would provide, with a lower query preparation cost, however.
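A minimal sketch of both assignment rules (Equation (12) and the HKM descent), assuming numpy and centroids learned beforehand on the training set; the dictionary layout of `tree` is an illustrative choice, not prescribed by (Nistér and Stewénius, 2006):

```python
import numpy as np

def kmeans_assign(x, centroids):
    """g(x) = argmin_i L2(x, c(i)) of Eq. (12); centroids has shape (k, d)."""
    return int(((centroids - x) ** 2).sum(axis=1).argmin())

def hkm_assign(x, tree):
    """Top-down HKM descent: 'tree' maps an internal-node path (a tuple of
    child choices) to its b_t child centroids; leaves have no entry.
    Returns the length-h_t path identifying the leaf cell."""
    path = ()
    while path in tree:
        path = path + (kmeans_assign(x, tree[path]),)
    return path
```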
### 3.5. Experiments and discussion

Figure 2 gives the evaluation of the different types of hash function introduced in this section. For both random projections and lattices, the two parameters $w$ and $d^*$ are optimized. Figure 2 only presents the optimal ones, which are obtained as follows. Given a couple of parameters $w$ and $d^*$, we compute the nearest neighbor recall at a given selectivity. This process is repeated for a set of varying couples of parameters, resulting in a set of tuples associating a selectivity with a nearest neighbor recall. Points plotted on the curves belong to the upper envelope of the convex hull of these values. Therefore, a point on the figure corresponds to an optimal parameter setting, the one that gives the best performance obtained for a given selectivity. For the $k$-means hash function, only one parameter has to be fixed: the number of centroids $k$, which gives the trade-off between recall and selectivity. This simpler parametrization is an advantage in practice. HKM is parametrized by two quantities: the branching factor $b_t$ and the height $h_t$ of the k-means tree. We evaluate the extremal cases, i.e.:

- a fixed height ($h_t = 2, 3$) with a varying branching factor;
- a binary tree ($b_t = 2$) with a varying tree height.

### 3.5.1. Vectorial gain

Figure 2 clearly shows that the lattice quantizers provide significantly better results than random projections, due to the vectorial gain. These results confirm that the random projections used in E2LSH are unable to exploit the spatial consistency. Note that this phenomenon was underlined in (Andoni and Indyk, 2006; Jégou et al., 2008a). However, by contrast to these works, the lattices we are evaluating are more flexible, as they are defined for any value of $d^*$. In particular, the lattice $E_8$ used in (Jégou et al., 2008a) is a special case of the $D^+$ lattice. Figure 2 also shows that the various types of lattice perform differently. We observe an improvement of the nearest neighbor recall with lattices $D$ and $D^+$ compared to random projections, whereas lattice $A$ gives similar performance. The density of $D^+$ is known to be twice the density of $D$. In high dimensions, the density of $A$ is small compared to that of $D$. Overall, density clearly affects the performance of lattices. However, density is not the only crucial parameter. The shape of the fundamental region and its orientation may also be influential, depending on the distribution of the dataset. Before discussing the performance of the unstructured quantizers evaluated in this paper and shown in Figure 2, it is necessary to put some emphasis on the behavior of quantization mechanisms with respect to the distribution of the data and the resulting cardinality of the Voronoi cells.

### 3.5.2. Structured vs unstructured quantizers

Hashing with lattices intrinsically defines Voronoi cells that all have the same size, that of the fundamental region. This is not relevant for many types of high-dimensional data, as some regions of the space are quite populated, while most are void. This is illustrated by Figure 3, which shows how well the k-means is able to fit the data distribution. Figure 3 depicts the Voronoi diagrams associated with the different hash functions introduced in this section, considering two standard distributions. The dimensions $d = d^* = 2$ are chosen for the sake of presentation. As mentioned above, by construction the structured quantizers (see Figures 3(a) and 3(b)) lead to Voronoi cells of equal sizes. This property is not desirable in the LSH context, because the number of retrieved points is too high in dense regions and too small in regions with low vector density. Considering the k-means quantizer in Figure 3(c), we first observe that for a uniform distribution, the shape of the cells is close to that of the $A_2$ lattice, which is optimal for this distribution. But k-means is better for other distributions, as the variable volume of the cells adapts to the data distribution, as illustrated for a Gaussian distribution in Figure 3(d).
The cell size clearly depends on the vector density. Another observation is that k-means exploits the prior on the bounds of the data, which is not the case of the $A_2$ lattice, whose optimality is satisfied only in the unrealistic setup of unbounded uniform vectors. As a result, for structured quantizers, the cell population is very unbalanced, as shown by Figure 4. This phenomenon penalizes the selectivity of the LSH algorithm. In contrast to these quantizers, the k-means hash function exploits both the vectorial gain and the empirical probability density function provided by the learning set. Because the Voronoi cells are quite balanced, the variance of the number of vectors returned for a query is small compared to that of structured quantizers. Turning back to Figure 2, one can clearly observe the better performance of the k-means hash function design in terms of the trade-off between recall and selectivity. For the sake of fairness, the codebook (i.e., the centroids) has been learned on a distinct set: k-means being an unsupervised learning algorithm, learning the quantizer on the set of indexed data would overestimate the quality of the algorithm for a new set of vectors. The improvement obtained by using this hash function construction method is very significant: the selectivity is about two orders of magnitude smaller for the same recall. Although HKM is also learned to fit the data, it is inferior to k-means, due to its poorer quantization quality. The higher the branching factor is, the closer the results are to those of k-means. The two extremal cases depicted in Fig. 2, i.e., 1) a fixed tree height of 2 with a varying branching factor and 2) a binary tree ($b_t = 2$) with a varying height, delimit the region in which all other settings lie. As expected, the ranking of HKM settings in terms of selectivity/recall is the inverse of their ranking in terms of the query preparation cost. Therefore, considering Equation 3, the trade-off between $b_t$ and $h_t$ appears to be a function of the vector dataset size.

### 3.5.3. Query preparation cost

Table 1 shows the complexity of the query preparation cost $qpc$ associated with the different hash functions we have introduced. Note that this table reflects the typical complexity in terms of the number of operations. It could clearly be refined by considering the respective costs of these operations on the architecture on which the hashing is performed. Lattices are the most efficient quantizers, even compared with random projections. Using the k-means hash function is slower than using random projections for typical parameters. HKM is a good compromise, as it offers a relatively low query preparation cost while adapting to the data.

| hash function | query preparation cost |
|----------------------------|-----------------------------|
| random projection (E2LSH) | $m \times d + d^* \times l$ |
| lattice $D_d$ | $d^* \times l$ |
| lattice $D^+_d$ | $d^* \times l$ |
| lattice $A_d$ | $d^* \times l$ |
| k-means | $k \times d \times l$ |
| HKM | $b_t \times h_t \times d \times l$ |

Table 1: Query preparation cost associated with the different hash functions.

### 4. Querying mechanisms

In this section, we detail how the k-means approach is used to build a complete LSH system, and analyze the corresponding search results. The resulting algorithm is referred to as KLSH in the following. We then build upon KLSH by proposing and evaluating more sophisticated strategies, somewhat similar in spirit to those recently introduced in the literature, namely multi-probing and query-adaptive querying.
#### 4.1. KLSH

Indexing $d$-dimensional descriptors with KLSH proceeds as follows. First, it is necessary to generate $l$ different k-means clusterings using the same learning set of vectors. This diversity is obtained by varying the initialization\footnote{This is done by modifying the seed when randomly selecting the initial centroids from the learning set.} of the k-means. Note that it is very unlikely that these different k-means runs give the same solution for $k$ high enough, as the algorithm only converges to a local minimum. Once these $l$ codebooks are generated, each one being represented by its centroids $\{c_{j,1}, \ldots, c_{j,k}\}$, all the vectors to index are read sequentially. A vector to index is assigned to the nearest centroid found in one codebook. All codebooks are used in turn for doing the $l$ assignments for this vector before moving to the next vector to index. Note that this mechanism replaces the standard E2LSH hash functions $\mathcal{H}$ and $g_j$ from Section 2. At search time, the nearest centroid for each of the $l$ k-means codebooks is found for the query descriptor. The database vectors assigned to these same centroids are then concatenated into the short-list, as depicted by Algorithm 1. From this point, the standard LSH algorithm takes over for processing the short-list.

Algorithm 1 – KLSH, search procedure
Input: query vector $q$
Output: short-list $sl$
$sl = \emptyset$
for $j = 1$ to $l$ do
  // find the nearest centroid of $q$ from codebook $j$:
  $i^* = \arg \min_{i=1,\ldots,k} L_2(q, c_{j,i})$
  $sl = sl \cup \{x \in \text{cluster}(c_{j,i^*})\}$
end for

Figure 5: Performance of LSH with $k$-means hash functions for a varying number $l$ of hash functions.

The results for KLSH are displayed in Figure 5. One can see that using a limited number of hash functions is sufficient to achieve high recall. A higher number of centroids leads to the best trade-off between search quality and selectivity. However, as indicated in Section 2.2, the selectivity measures the asymptotic behavior for large datasets, for which the cost of the query preparation stage is negligible compared to that of treating the set of vectors returned by the algorithm. For small datasets, the selectivity does not solely reflect the “practical” behavior of the algorithm, as it does not take into account $qpc$. For KLSH, the overall cost is:
$$ocost = sel \times n \times d + k \times l \times d. \tag{13}$$
The acceleration factor therefore becomes:
$$ac = \frac{1}{sel + \frac{k \times l}{n}}. \tag{14}$$
Figure 6 shows the acceleration factor obtained for a dataset of one million vectors, assuming that a full distance calculation is performed on the short-list. This factor accurately represents the true gain of using the ANN algorithm when the vectors are stored in main memory. Unlike what is observed for asymptotically large datasets, for which the selectivity is dominant, one can observe that there is an optimal value of the quantizer size, obtained for $k = 512$. It offers the best trade-off between the query preparation cost and the post-processing of the vectors. Note that this optimum depends on the database size: the larger the database, the larger the number of centroids should be. As a final comment, in order to reduce the query preparation cost for small databases, an approximate k-means quantizer could advantageously replace the standard k-means, as done in (Philbin et al., 2007). Such quantizers assign vectors to cell indexes in logarithmic time with respect to the number of cells $k$, against linear time for standard k-means. This significantly reduces the query preparation cost, which is especially useful for small datasets.
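A minimal sketch of KLSH, assuming numpy and $l$ codebooks `centroids[j]` of shape $(k, d)$ learned with different seeds; the batch distance computation is written naively for clarity and would be chunked over the database in a real implementation:

```python
import numpy as np
from collections import defaultdict

def klsh_index(X, centroids):
    """Build l hash tables: bucket i of table j holds the identifiers of
    the database vectors whose nearest centroid in codebook j is c_{j,i}."""
    tables = [defaultdict(list) for _ in centroids]
    for j, C in enumerate(centroids):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=-1)  # (n, k)
        for vec_id, i in enumerate(d2.argmin(axis=1)):
            tables[j][int(i)].append(vec_id)
    return tables

def klsh_search(q, centroids, tables):
    """Algorithm 1: concatenate the l buckets of q into the short-list."""
    sl = set()
    for j, C in enumerate(centroids):
        i_star = int(((C - q) ** 2).sum(axis=1).argmin())
        sl.update(tables[j].get(i_star, []))
    return sl
```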
#### 4.2. Multi-probe KLSH

Various strategies have been proposed in the literature to increase the quality of the results returned by the original LSH approach. One series of mechanisms extending LSH uses a so-called multi-probe approach. In this case, at query time, several buckets per hash function are retrieved, instead of one (see Lv et al. (2007) and Joly and Buisson (2008)). Probing the index multiple times increases the scope of the search, which, in turn, increases both recall and precision. Originally designed for structured quantizers, this multi-probe approach can equally be applied to our unstructured scheme, with the hope of also improving precision and recall. For the $k$-means hash functions, multi-probing can be achieved as follows. Having fixed the number $m_p$ of buckets that we want to retrieve, for each of the $l$ hash functions we select the $m_p$ closest centroids of the unstructured quantizer $g_j = \{c_{j,1}, \ldots, c_{j,k}\}$. Algorithm 2 briefly presents the procedure; see also the sketch below. The vectors associated with the selected $m_p$ buckets are then returned for the $l$ hash functions. Note that choosing $m_p = 1$ is equivalent to using the basic KLSH approach. The total number of buckets retrieved is $l \times m_p$. Therefore, for a fixed number of buckets, the number of hash functions is reduced by a factor $m_p$. The memory usage and the query preparation cost are thus divided by this factor.

Figure 7: Multi-probe KLSH for a single hash function ($l = 1$) and varying numbers of visited cells $m_p$.

Figure 7 shows the results obtained when using $l = 1$ and varying values of $m_p$, i.e., for a single hash function. The results are reasonably good, especially considering the very low memory usage associated with this variant. However, comparing Figures 5 and 7, the recall is lower for the same selectivity. This is not surprising, as in KLSH the vectors which are returned are localized in the same cell, whereas the multi-probe variant returns some vectors that are not assigned to the same centroid. For small datasets, for which the query preparation cost is not negligible, this multi-probe variant is of interest. This is the case for our one million vectors dataset: Figure 8 shows the better performance of the multi-probe algorithm compared to the standard querying mechanism (compare to Figure 6). This acceleration factor compares favorably against state-of-the-art methods of the literature. In a similar experimental setup (a dataset of 1 million SIFT descriptors), (Muja and Lowe, 2009) reports, for a recall of 0.90, an acceleration factor lower than 100, comparable to our results but for a higher memory usage: the multi-probe KLSH structure only uses 4 bytes per descriptor for $l = 1$.
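A minimal sketch of this variant, reusing the `tables` and `centroids` of the KLSH sketch above (the function name is ours):

```python
import numpy as np

def multiprobe_search(q, centroids, tables, m_p):
    """Visit the m_p nearest centroids of q in each codebook;
    m_p = 1 reduces to the plain KLSH search."""
    sl = set()
    for j, C in enumerate(centroids):
        d2 = ((C - q) ** 2).sum(axis=1)
        for i in np.argsort(d2)[:m_p]:       # the m_p closest centroids
            sl.update(tables[j].get(int(i), []))
    return sl
```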
#### 4.3. Query-adaptive KLSH

While multi-probing is one direction for improving the quality of the original structured LSH scheme, other directions exist, like the query-adaptive LSH of Jégou et al. (2008a). In a nutshell, this method adapts its behavior because it picks, from a large pool of existing random hash functions, the ones that are the most likely to return the nearest neighbors, on a per-query basis. As it enhances result quality, this principle can be applied to our unstructured approach. Here, instead of using a single k-means per hash function, it is possible to maintain a pool of independent k-means. At query time, the best k-means can be selected for each hash function, increasing the likelihood of finding good neighbors. Before developing the query-adaptive KLSH, we must describe the original query-adaptive LSH to facilitate the understanding of the remainder. Query-adaptive LSH as described in Jégou et al. (2008a) proceeds as follows (this is also summarized in Algorithm 3):

- The method defines a pool of $l$ hash functions, with $l$ larger than in standard LSH.
- For a given query vector, a relevance criterion $\lambda_j$ is computed for each hash function $g_j$. This criterion is used to identify the hash functions that are most likely to return the nearest neighbor(s).
- Only the buckets associated with the $p$ most relevant hash functions are visited, with\footnote{For $p = l$, the algorithm is equivalent to KLSH.} $p \leq l$.

The relevance criterion proposed in (Jégou et al., 2008a) corresponds, for the $E_8$ lattice, to the distance between the query point and the center of the Voronoi cell. We use the same criterion for our KLSH variant. For the query vector $q$, $\lambda$ is defined as
$$\lambda(g_j) = \min_{i=1,\ldots,k} L_2(q, c_{j,i}). \tag{15}$$
It turns out that this criterion is a by-product of finding the nearest centroid. Therefore, for a fixed number $l$ of hash functions, the pre-processing cost is the same as in the regular querying method of KLSH. These values are then used to select the $p$ best hash functions as
$$p\text{-}\arg \min_{j=1,\ldots,l} \lambda(g_j). \tag{16}$$
The selection process is illustrated by the toy example of Figure 9, which depicts a structure comprising $l = 4$ k-means hash functions. Intuitively, one can see that the location of a descriptor $x$ in its cell has a strong impact on the probability that its nearest neighbor is hashed into the same bucket. In this example, only the second clustering ($j = 2$) puts the query vector and its nearest neighbor in the same cell.

Figure 9: Toy example: hash function selection process in query-adaptive KLSH. The length of the segment between the query vector (circled) and its nearest centroid corresponds to the relevance criterion $\lambda_j$ ($j = 1..4$). Here, for $p = 1$, the second hash function ($j = 2$) is used and returns the correct nearest neighbor (squared).

Figure 10: Query-adaptive KLSH: performance when using a single hash function among a pool of $l$ hash functions, $l = 1, 2, 3, 5, 10, 20, 25, 50, 100$. For a given number $k$ of clusters, the selectivity is very stable and close to $1/k$: 0.0085 for $k = 128$, 0.0021 for $k = 512$, 0.00055 for $k = 2048$ and 0.00014 for $k = 8192$.

In order for the query-adaptive KLSH to have interesting properties, one should use a large number $l$ of hash functions. This yields two limitations for this variant:

- the memory required to store the hash tables is increased;
- the query preparation cost is higher, which means that this variant is interesting only for very large datasets, for which the dominant cost is the processing of the vectors returned by the algorithm.

The selection of the best hash functions is not time consuming, since the relevance criterion is obtained as a by-product of the vector quantization for the different hash functions. However, this variant is of interest only if we use more hash functions than in regular LSH, hence in practice its query preparation cost is higher. For a reasonable number of hash functions and a large dataset, the bottleneck of this query-adaptive variant is the last step of the “exact” LSH algorithm.
This is true only when the dominant cost is that of the search for the exact nearest neighbors within the short-list obtained by parsing the buckets. This is not the case in our experiments on one million vectors, in which the acceleration factor obtained for this variant is not as good as those of KLSH and multi-probe KLSH. Figure 10 gives the selectivity obtained, using only one voting hash function ($p = 1$), for varying sizes $l$ of the set of hash functions. Unsurprisingly, the larger $l$, the better the results. However, most of the gain is attained by using a limited number of hash functions. For this dataset, choosing $l = 10$ seems a reasonable choice. Now, using several voting hash functions, i.e., for $p > 1$, one can observe in Figure 11 that the query-adaptive mechanism significantly outperforms KLSH in terms of the trade-off between selectivity and recall. In this experiment the size of the pool is fixed to $l = 100$. However, on our “small” dataset of one million descriptors, this variant is not interesting for this large number of hash functions: the cost of the query preparation stage ($k \times l \times d$) is too high with respect to the post-processing stage of calculating the distances on the short-list. This seemingly contradictory result stems from the indicator: the selectivity is an interesting complexity measurement for large datasets only, for which the query preparation cost is negligible compared to that of processing the vectors returned by the algorithm.

Algorithm 3 – Query-adaptive KLSH, search procedure
Input: query vector $q$
Output: short-list $sl$
$sl = \emptyset$
// select the $p$ hash functions minimizing $\lambda(g_j)$:
$(j_1, \ldots, j_p) = p\text{-}\arg \min_{j=1,\ldots,l} \lambda(g_j)$
for $j \in (j_1, \ldots, j_p)$ do
  // find the nearest centroid of $q$ from codebook $j$:
  $i^* = \arg \min_{i=1,\ldots,k} L_2(q, c_{j,i})$
  $sl = sl \cup \{x \in \text{cluster}(c_{j,i^*})\}$
end for
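A minimal sketch of Algorithm 3, again reusing the `tables` and `centroids` of the KLSH sketch; note that $\lambda(g_j)$ comes for free while locating the nearest centroid:

```python
import numpy as np

def query_adaptive_search(q, centroids, tables, p):
    """Rank the l codebooks by lambda(g_j) (Eq. 15) and probe the best p.
    Squared distances are used: the ranking is the same as for L2."""
    crit, nearest = [], []
    for C in centroids:
        d2 = ((C - q) ** 2).sum(axis=1)
        i_star = int(d2.argmin())
        nearest.append(i_star)
        crit.append(d2[i_star])        # relevance criterion, a by-product
    sl = set()
    for j in np.argsort(crit)[:p]:     # the p most relevant codebooks
        sl.update(tables[j].get(nearest[j], []))
    return sl
```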
#### 4.4. Discussion

4.4.1. Off-line processing and parametrization

One of the drawbacks of k-means hash functions over structured quantizers is the cost of creating the quantizer. This is especially true for the query-adaptive variant, where the number $l$ of hash functions may be high. However, in many applications, this is not a critical point, as this clustering is performed off-line. Moreover, this is somewhat balanced by the fact that KLSH admits a simpler optimization procedure to find the optimal parametrization, as we only have to optimize the parameter $k$, against two parameters $w$ and $d^*$ for the structured quantizers. Finally, approximate k-means reduces the cost of both the learning stage and the query preparation. It is therefore not surprising that emerging state-of-the-art ANN methods, e.g., (Muja and Lowe, 2009), rely on such partitioning methods.

4.4.2. Which querying mechanism should we use?

It appears that each of the querying mechanisms proposed in this section may be the best, depending on the context: dataset size, vector properties and resource constraints. The least memory-demanding method is the multi-probe version. For very large datasets and with no memory constraint, query-adaptive KLSH gives the highest recall for a fixed selectivity. If, for the fine verification, the vectors are read from a low-efficiency storage device, e.g., a mechanical hard drive, then the query-adaptive version is also the best, as in that case the bottleneck is reading the vectors from the disk. As a general observation, multi-probe and query-adaptive KLSH offer opposite properties in terms of selectivity, memory usage and query preparation cost. The “regular” KLSH is in between, offering a trade-off between these parameters. Overall, the three methods are interesting, but for different operating points.

### 5. Conclusion

In this paper, we have focused on the design of the hash functions and the querying mechanisms used in conjunction with the popular LSH algorithm. First, confirming some results of the literature in a real application scenario, we have shown that using lattices as stronger quantizers significantly improves the results compared to the random projections used in the Euclidean version of LSH. Second, we have underlined the limitations of structured quantizers, and shown that using unstructured quantizers as hash functions offers better performance, because they are able to take into account the distribution of the data. The results obtained by k-means LSH are appealing: very high recall is obtained by using only a limited number of hash functions. The speed-up over exhaustive distance calculation is typically greater than 100 on a one million vector dataset for a reasonable recall. Finally, we have adapted and evaluated two recent variants from the literature, namely multi-probe LSH and query-adaptive LSH, which offer different trade-offs in terms of memory usage, complexity and recall.

Acknowledgements

The authors would like to thank the Quaero project for its financial support.

References

Agrell, E., Eriksson, T., Vardy, A., Zeger, K., 2002. Closest point search in lattices. IEEE Trans. on Information Theory 48 (8), 2201–2214.
Andoni, A., Indyk, P., 2006. Near-optimal hashing algorithms for the near neighbor problem in high dimensions. In: Proceedings of the Symposium on the Foundations of Computer Science. pp. 459–468.
Beyer, K., Goldstein, J., Ramakrishnan, R., Shaft, U., August 1999. When is “nearest neighbor” meaningful? In: Intl. Conf. on Database Theory. pp. 217–235.
Böhm, C., Berchtold, S., Keim, D., October 2001. Searching in high-dimensional spaces: Index structures for improving the performance of multimedia databases. ACM Computing Surveys 33 (3), 322–373.
Brown, M., Lowe, D. G., 2007. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision 74 (1), 59–73.
Casey, M., Slaney, M., April 2007. Fast recognition of remixed music audio. In: International Conference on Acoustics, Speech, and Signal Processing. Vol. 4. pp. 1425–1428.
Conway, J., Sloane, N., 1982a. Fast quantizing and decoding algorithms for lattice quantizers and codes. IEEE Trans. on Information Theory 28 (2), 227–232.
Conway, J., Sloane, N., 1982b. Voronoi regions of lattices, second moments of polytopes, and quantization. IEEE Trans. on Information Theory 28 (2), 211–226.
Conway, J., Sloane, N., Bannai, E., 1987. Sphere-packings, lattices, and groups. Springer-Verlag, New York, NY, USA.
Datar, M., Immorlica, N., Indyk, P., Mirrokni, V., 2004. Locality-sensitive hashing scheme based on p-stable distributions. In: Proceedings of the Symposium on Computational Geometry. pp. 253–262.
Gionis, A., Indyk, P., Motwani, R., 1999. Similarity search in high dimensions via hashing. In: Proceedings of the International Conference on Very Large DataBases. pp. 518–529.
Gray, R. M., Neuhoff, D. L., Oct. 1998. Quantization. IEEE Trans. on Information Theory 44, 2325–2384.
Jégou, H., Amsaleg, L., Schmid, C., Gros, P., 2008a.
Query-adaptive locality sensitive hashing. In: International Conference on Acoustics, Speech, and Signal Processing.
Jégou, H., Douze, M., Schmid, C., October 2008b. Hamming embedding and weak geometric consistency for large scale image search. In: European Conference on Computer Vision.
Joly, A., Buisson, O., 2008. A posteriori multi-probe locality sensitive hashing. In: ACM Conf. on Multimedia. pp. 209–218.
Ke, Y., Sukthankar, R., Huston, L., 2004. Efficient near-duplicate detection and sub-image retrieval. In: ACM Conf. on Multimedia. pp. 869–876.
Lejsek, H., Ásmundsson, F., Jónsson, B., Amsaleg, L., 2006. Scalability of local image descriptors: a comparative study. In: ACM Conf. on Multimedia. pp. 589–598.
Lowe, D., 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60 (2), 91–110.
Lv, Q., Josephson, W., Wang, Z., Charikar, M., Li, K., 2007. Multi-probe LSH: Efficient indexing for high-dimensional similarity search. In: Proceedings of the International Conference on Very Large DataBases. pp. 950–961.
Matei, B., Shan, Y., Sawhney, H., Tan, Y., Kumar, R., Huber, D., Hebert, M., July 2006. Rapid object indexing using locality sensitive hashing and joint 3D-signature space estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (7), 1111–1126.
Mikolajczyk, K., Schmid, C., 2004. Scale and affine invariant interest point detectors. International Journal of Computer Vision 60 (1), 63–86.
Muja, M., Lowe, D. G., 2009. Fast approximate nearest neighbors with automatic algorithm configuration. In: International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications.
Nistér, D., Stewénius, H., 2006. Scalable recognition with a vocabulary tree. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 2161–2168.
Philbin, J., Chum, O., Isard, M., Sivic, J., Zisserman, A., 2007. Object retrieval with large vocabularies and fast spatial matching. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Shakhnarovich, G., Darrell, T., Indyk, P., March 2006. Nearest-Neighbor Methods in Learning and Vision: Theory and Practice. MIT Press, Ch. 3.
Sivic, J., Zisserman, A., 2003. Video Google: A text retrieval approach to object matching in videos. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 1470–1477.
Terasawa, K., Tanaka, Y., 2007. Spherical LSH for approximate nearest neighbor search on unit hypersphere. Springer, pp. 27–38.
Vardy, A., Be'ery, Y., July 1993. Maximum likelihood decoding of the Leech lattice. IEEE Trans. on Information Theory 39 (4), 1435–1444.
Zhang, J., Marszalek, M., Lazebnik, S., Schmid, C., June 2007. Local features and kernels for classification of texture and object categories: A comprehensive study. International Journal of Computer Vision 73, 213–238.
| Morning | 8:30 | 8:45 | 9:00 | 9:15 | 9:30 | 9:45 | 10:00 | 10:15 | 10:30 | 10:45 | 11:00 | 11:15 | 11:30 | 11:45 | 12:00 | 12:15 | 12:30 | 12:45 | 1:00 | 1:15 | 1:30 | |---------|------|------|------|------|------|------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-----|-----|-----| | FUNNEL VISION | Mickey's Christmas Carol (G) | Lilo & Stitch (PG) | Port Shopping Talk | Toy Story (G) | Ant-Man and The Wasp (PG-13) | | BUENA VISTA THEATRE | Family Movie Fun Time: The Incredibles 2 (PG) | Pluto (DK4 Mid (S)) | Goofy (Lobby Atrium (P)) | Pluto (Promenade Lounge) | Goofy (Lobby Atrium (P)) | Spider-Man (DK4 Mid (S)) | Minnie (DK4 Mid (S)) | Minnie (DK4 Mid (S)) | Daisy (DK4 Mid (S)) | Minnie (DK4 Mid (S)) | Minnie (DK4 Mid (S)) | Hook & Smee (Gazebo DK 9) | | CHARACTERS | Captain America (DK4 Mid (S)) | Black Widow (DK4 Mid (S)) | Spider-Man (DK4 Mid (S)) | Daisy (DK4 Mid (S)) | | FUN FOR ALL AGES | Good Morning Disney Wonder Channel 22-1 Repeated 6:05 am - 10:05 pm | Coloring Time Promenade Lounge | Animation: Mickey Mouse Promenade Lounge | Animation: Pluto Promenade Lounge | Bingo Pre Sales | $5,000 Mega Jackpot BINGO Azure | Crafts: 3D Mickey & Minnie Crafts Promenade Lounge | Toddler Time Promenade Lounge | Cruisin' for Trivia Promenade Lounge | Crafts: Origami Creations Promenade Lounge | Cruisin' Solo Lunch 1820 Society: Lunch Pub Quiz D Lounge | Magic Workshop with Jon Armstrong Azure | | ADULTS | Friends of Bill W. Cadillac Lounge | Informal Card Play Crown & Fin Pub | Art of The Theme Show Tour Palo | Disney Can Cook Anyone Can Cook D Lounge - Wild Mushroom Risotto | Movie Quotes Trivia Crown & Fin Pub | NCAA Bowl Games Crown & Fin Pub (Subject to Satellite Availability) | | VIBE 14-17 YEARS OLD | Youth Activities Open House | Youth Activities Open House | Gotcha Registration | Ice Breakers | Star Wars™ Jedi Challenges | | EDGE 11-14 YEARS OLD | Youth Activities Open House | Star Wars™ Jedi Challenges | Ice Breakers | Make a Show Disney Style | | OCEANEER LAB 3-12 YEARS OLD | Gases in Action | Trivia Time | Piston Cup Challenge | Game Challenge | Lunch | | OCEANEER CLUB 3-12 YEARS OLD | Pre-School Fun (3-5 Years Old) | Join the Lion Guard | Elephant Soccer | Parachute Games | Lunch in Disney's Oceaneer Lab | | Afternoon | 1:45 | 2:00 | 2:15 | 2:30 | 2:45 | 3:00 | 3:15 | 3:30 | 3:45 | 4:00 | 4:15 | 4:30 | 4:45 | 5:00 | 5:15 | 5:30 | 5:45 | 6:00 | 6:15 | 6:30 | 6:45 | | FUNNEL VISION | Finding Nemo (G) | Wreck-It Ralph (PG) | Inside Out (PG) | | | BUENA VISTA THEATRE | Solo: A Star Wars Story (PG-13) Duration: 2 Hours 15 Minutes | Mickey (DK4 Mid (S)) | Minnie (DK4 Mid (S)) | Mickey (DK4 Mid (S)) | Pluto (DK4 Mid (S)) | | CHARACTERS | Stitch (Gazebo DK 9) | Daisy (DK5 Mid (S)) | Donald (DK5 Mid (S)) | Goofy (DK4 Mid (S)) | | FUN FOR ALL AGES | Disney Tunes Trivia Promenade Lounge | Diamond & Gemstone Seminar Promenade Lounge | Game Show: The Wheel D Lounge | Bingo Pre Sales | Diamond Jackpot BINGO Azure | Crafts: 3D DCL Ship Crafts Promenade Lounge | Toddler Time Promenade Lounge | Captain's Welcome Reception Lobby Atrium | Red Carpet Arrivals | | | ADULTS | Animation Class Azure | TV Tunes Trivia Azure | Art of The Theme Show Tour Palo | Disney Can Cook Anyone Can Cook D Lounge - Wild Mushroom Risotto | Craft Corner Crown & Fin Pub | NCAA Bowl Games Crown & Fin Pub (Subject to Satellite Availability) | | VIBE 14-17 YEARS OLD | Smoothie Time | Pirates Flash Mob Rehearsal | Shuffleboard | Youth Activities Open House | | EDGE 11-14
YEARS OLD | Game Challenge | Brains and Brawns | Pirates Flash Mob Rehearsal | Brain Teasers | | OCEANEER LAB 3-12 YEARS OLD | Open House 4 Square Competition | Open House Create a Postcard | Open House Animation Antics | Open House | Craft Corner | Dinner | | OCEANEER CLUB 3-12 YEARS OLD | Captain America's Super Hero 101 | Craft Corner | Sofia's Magical Storytelling | Open House Miles' Cosmic Explorers | Open House Let's Build a Fort | Open House | | Evening | 7:00 | 7:15 | 7:30 | 7:45 | 8:00 | 8:15 | 8:30 | 8:45 | 9:00 | 9:15 | 9:30 | 9:45 | 10:00 | 10:15 | 10:30 | 10:45 | 11:00 | 11:15 | 11:30 | 11:45 | 12:00 | | FUNNEL VISION | The Incredibles (PG) | Maleficent (PG) | Captain America: Civil War (PG-13) | | | BUENA VISTA THEATRE | Christopher Robin (PG) Duration: 1 Hour 44 Minutes | Minnie (DK4 Mid (S)) | Mickey (DK4 Mid (S)) | Avengers: Infinity War (PG-13) | | CHARACTERS | Minnie (DK4 Mid (S)) | Mickey (DK4 Mid (S)) | | FUN FOR ALL AGES | Captain's Welcome Reception Lobby Atrium | Close Up Magic Azure | Red Carpet Arrivals | Crazy Heart with The Belle Adventure Promenade Lounge | Country Songs Promenade Lounge | | ADULTS | Piano Men Tribute with Aaron Lotzow Cadillac Lounge | Broadway Music with Josh Freilich Cadillac Lounge | Game Show: The Feud Azure | The Musical Comedy of MARCUS MONROE Azure | Silent DJ Party Azure | Cruise Staff DJ Azure | | VIBE 14-17 YEARS OLD | Make a Show Disney Style | Brain Teasers | Marshmallow Olympics | Heroes and Villains | | EDGE 11-14 YEARS OLD | Youth Activities Open House | Marshmallow Olympics | Heroes and Villains | Karaoke in D Lounge | | OCEANEER LAB 3-12 YEARS OLD | Wii Challenge | 4th Pigs Pasta Palace | Marshmallow Olympics | Oceaneer Rangers | | OCEANEER CLUB 3-12 YEARS OLD | Open House Adventures with Anna MARVEL Super Hero Academy Avengers Assemble | Toy Story Boot Camp | | | | | SATURDAY, DECEMBER 15, 2018 - DAY AT SEA THE GOLDEN MICKEYS WALT DISNEY THEATRE Deck 4, Forward 6:15 pm & 8:30 pm As a courtesy to all Guests, we kindly advise that the saving of seats is not permitted in the Walt Disney Theatre. Assistance is available at the entrance to the Walt Disney Theatre 30 minutes prior to show time.
Character Appearances PLUTO Lobby Atrium (P) 9:00 am & 10:00 am DK4 Mid (S) - 5:30 pm CAPTAIN AMERICA DK4 Mid (S) - 9:00 am GOOFY Lobby Atrium (P) 9:30 am & 10:30 am DK4 Mid (S) - 5:00 pm BLACK WIDOW DK4 Mid (S) - 9:45 am SPIDER-MAN DK4 Mid (S) - 10:30 am MINNIE MOUSE DK4 Mid (S) 11:30 am, 12:30 pm, 4:00 pm, 7:30 pm & 9:45 pm DAISY DUCK DK4 Mid (S) - 12:00 pm DK5 Mid (S) - 3:45 pm CAPTAIN HOOK & MR SMEE Gazebo DK 9 - 1:15 pm STITCH Gazebo DK 9 - 1:45 pm MICKEY MOUSE DK4 Mid (S) 3:30 pm, 4:30 pm, 8:00 pm & 10:15 pm DONALD DUCK DK5 Mid (S) - 4:15 pm GENERAL INFORMATION BUENA VISTA THEATRE DECK 5, AFT SHOWTIMES CONNECT@SEA DESK DECK 3, AFT 9:00 am - 12:00 pm 1:00 pm - 4:00 pm 7:00 pm - 10:00 pm DISNEY VACATION CLUB DESK DECK 4, MID 8:30 am - 10:30 am 5:00 pm - 10:00 pm DISNEY VACATION PLANNING DESK DECK 4, MID 9:00 am - 12:00 pm 3:00 pm - 9:00 pm FUNNEL VISION DECK 9, MID SHOWTIMES GUEST SERVICES DECK 3, MID 24 HOURS HAIR BRAIDING DECK 9, MID 9:00 am - 12:00 pm 2:00 pm - 6:00 pm MEDICAL HEALTH CENTER DECK 1, FWD 9:30 am - 11:00 am 4:30 pm - 7:00 pm PORT SHOPPING DESK DECK 3, MID 7:00 pm - 8:30 pm PRELUDES DECK 4, FWD SHOWTIMES SENSES SPA AND SALON DECK 9, FWD 8:00 am - 10:00 pm SHUTTERS PHOTO GALLERY DECK 4, AFT 9:00 am - 1:00 pm 8:30 pm - 10:30 pm WALT DISNEY THEATRE DECK 4, FWD SHOWTIMES SHOPPING ART GALLERY DECK 4, MID BIBBIDI BOBBIDI BOUTIQUE DECK 10, FWD 9:00 am - 9:00 pm MICKEY’S MAINSAIL DECK 4, FWD 9:30 am - 11:00 pm QUACKS DECK 9, MID SEA TREASURES DECK 3, FWD 2:30 pm - 10:00 pm WHITE CAPS DECK 4, FWD 9:30 am - 11:00 pm Verandah Safety Please do not leave any personal items on your verandah. They may be knocked over or present a fire hazard if left unattended. Do not lean or hang over the railings or the ship’s side. Guests should not sit, lean or climb on the railings or the ship’s side. Do not open the verandah and stateroom doors simultaneously, as this may create a wind effect and cause the door to swing unexpectedly. Environmental Message With Disney’s commitment to the environment, please remember to recycle and respect the ship’s deck. Sunscreen/Insect Repellent Advisory Protect against mosquito bites and related illnesses by applying insect repellent on top of sunscreen when going ashore. Cold and Flu Reminder Wash hands frequently, particularly before meals. Contact the Health Center at 7-1927 should you or members of your party experience symptoms of illness. Smoking For the comfort of our guests, the following areas are designated as smoking areas: • Deck 9, Mid, Port Side • Deck 4, Starboard Side from 6:00 pm to 6:00 am only (all of Deck 4 is non-smoking from 6:00 am to 6:00 pm) Smoking is prohibited inside all guest staterooms and on stateroom verandahs. Guests found smoking in their staterooms or on their verandahs will be charged a $250 stateroom recovery fee. ENTERTAINMENT - LOUNGES - BARS Adults must be 21 and older to consume alcoholic beverages.
AZURE (18+ after 9:00 pm), Deck 3, Fwd: 9:00 pm - 1:30 am
CADILLAC LOUNGE (18+), Deck 3, Fwd: 6:00 pm - 12:00 am
COVE CAFE (18+), Deck 9, Mid: 7:00 am - 12:00 am
CROWN & FIN PUB (18+ after 9:00 pm), Deck 3, Fwd: 12:00 pm - 12:00 am
D LOUNGE, Deck 4, Mid: Showtimes
PINOCCHIO'S PIZZERIA, Deck 9, Mid: 9:30 am - 12:00 am
PRELUDES, Deck 4, Fwd: Showtimes
PROMENADE LOUNGE, Deck 3, Aft: 8:00 am - 12:00 am
SIGNALS (18+), Deck 9, Fwd: 10:00 am - 10:00 pm
SULLEY'S SIPS, Deck 9, Mid: 7:30 pm - 5:30 pm

Public Health Advisory: Consuming raw or undercooked meats, poultry, seafood, shellfish, or eggs may increase your risk of foodborne illness, especially if you have certain medical conditions.

SPECIALS OF THE DAY
Senses Spa & Salon - Rejuvenation Spa Consultations: Ready to enhance your natural beauty? Book your free Facial Rejuvenation consultation with our Rejuvenation Doctor at Senses Spa, Deck 9, Forward.
EFFY Jewelry - Colors of Wonder Trunk Show & Diamond Raffle, 12:30 pm, White Caps: Prepare to be wowed as we unveil EFFY's playfully sophisticated Watercolors Collection, featuring stunning, hand-set precious gemstones! Enter to win diamond earrings! Must be 18+.
Shutters - Say Cheese! Formal Portraits: 5:15 pm - 6:00 pm, 7:15 pm - 8:30 pm and 9:30 pm - 10:30 pm. Families of 8 Guests and above can only be accommodated on the stairs.

IMPORTANT NUMBERS
Fire/Security: 7-3001
Medical Emergency: 7-3000
Health Center: 7-1927
Scalable Cluster Computing with MOSIX for LINUX

Amnon Barak* Oren La'adan Amnon Shiloh
Institute of Computer Science, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
http://www.mosix.cs.huji.ac.il

ABSTRACT
Mosix is a software tool for supporting cluster computing. It consists of kernel-level, adaptive resource sharing algorithms that are geared for high performance, overhead-free scalability and ease-of-use of a scalable computing cluster. The core of the Mosix technology is the capability of multiple workstations and servers (nodes) to work cooperatively as if part of a single system. The algorithms of Mosix are designed to respond to variations in the resource usage among the nodes by migrating processes from one node to another, preemptively and transparently, for load-balancing and to prevent memory depletion at any node. Mosix is scalable and it attempts to improve the overall performance by dynamic distribution and redistribution of the workload and the resources among the nodes of a computing cluster of any size. Mosix conveniently supports a multi-user time-sharing environment for the execution of both sequential and parallel tasks. So far, Mosix has been developed seven times, for different versions of Unix and BSD, and most recently for Linux. This paper describes this seventh version of Mosix, for Linux.

1 Introduction
This paper describes the Mosix technology for Cluster Computing (CC). Mosix [4, 5] is a set of adaptive resource sharing algorithms that are geared for performance scalability in a CC of any size, where the only shared component is the network. The core of the Mosix technology is the capability of multiple nodes (workstations and servers, including SMP's) to work cooperatively as if part of a single system. In order to understand what Mosix does, let us compare a shared-memory multiprocessor (SMP) and a CC. In an SMP system, several processors share the memory. The main advantages are increased processing volume and fast communication between the processes (via the shared memory). SMP's can handle many simultaneously running processes, with efficient resource allocation and sharing. Any time a process is started, finished, or changes its computational profile, the system adapts instantaneously to the resulting execution environment. The user is not involved and in most cases does not even know about such activities.

Unlike SMP's, Computing Clusters (CC) are made of collections of share-nothing workstations and (even SMP) servers (nodes), with different speeds and memory sizes, possibly from different generations. Most often, CC's are geared for multi-user, time-sharing environments. In CC systems the user is responsible for allocating the processes to the nodes and for managing the cluster resources. In many CC systems, even though all the nodes run the same operating system, cooperation between the nodes is rather limited because most of the operating system's services are locally confined to each node. The main software packages for process allocation in CC's are PVM [8] and MPI [9]. LSF [7] and Extreme Linux [10] provide similar services. These packages provide an execution environment that requires an adaptation of the application and the user's awareness. They include tools for initial (fixed) assignment of processes to nodes, which sometimes use load considerations, while ignoring the availability of other resources, e.g., free memory and I/O overheads.
These packages run at the user level, just like ordinary applications, and thus are incapable of responding to fluctuations of the load or other resources, or of redistributing the workload adaptively. In practice, the resource allocation problem is much more complex because there are many (different) kinds of resources, e.g., CPU, memory, I/O, Inter-Process Communication (IPC), etc., where each resource is used in a different manner and in most cases its usage is unpredictable. Further complexity results from the fact that different users do not coordinate their activities. Thus, even if one knows how to optimize the allocation of resources to processes, the activities of other users are most likely to interfere with this optimization.

For the user, SMP systems guarantee efficient, balanced use of the resources among all the running processes, regardless of the resource requirements. SMP's are easy to use because they employ adaptive resource management that is completely transparent to the user. Current CC's lack such capabilities. They rely on user-controlled static allocation, which is inconvenient and may lead to significant performance penalties due to load imbalances. Mosix is a set of algorithms that support adaptive resource sharing in a scalable CC by dynamic process migration. It can be viewed as a tool that takes CC platforms one step closer towards SMP environments. By being able to allocate resources globally, and to distribute the workload dynamically and efficiently, it simplifies the use of CC's by relieving the user from the burden of managing the cluster-wide resources. This is particularly evident in multi-user, time-sharing environments and in non-uniform CC's.

2 What is Mosix
Mosix [4, 5] is a tool for a Unix-like kernel, such as Linux, consisting of adaptive resource sharing algorithms. It allows multiple uni-processors (UP) and SMP's (nodes) running the same kernel to work in close cooperation. The resource sharing algorithms of Mosix are designed to respond on-line to variations in the resource usage among the nodes. This is achieved by migrating processes from one node to another, preemptively and transparently, for load-balancing and to prevent thrashing due to memory swapping. The goal is to improve the overall (cluster-wide) performance and to create a convenient multi-user, time-sharing environment for the execution of both sequential and parallel applications. The standard runtime environment of Mosix is a CC, in which the cluster-wide resources are available to each node. By disabling the automatic process migration, the user can switch the configuration into a plain CC, or even an MPP (single-user) mode. The current implementation of Mosix is designed to run on clusters of X86/Pentium-based workstations, both UP's and SMP's, that are connected by standard LANs. Possible configurations may range from a small cluster of PC's that are connected by Ethernet, to a high performance system with a large number of high-end, Pentium-based SMP servers that are connected by a Gigabit LAN, e.g. Myrinet [6].

2.1 The technology
The Mosix technology consists of two parts: a Preemptive Process Migration (PPM) mechanism and a set of algorithms for adaptive resource sharing. Both parts are implemented at the kernel level, using a loadable module, such that the kernel interface remains unmodified. Thus they are completely transparent to the application level. The PPM can migrate any process, at any time, to any available node.
Usually, migrations are based on information provided by one of the resource sharing algorithms, but users may override any automatic system decisions and migrate their processes manually. Such a manual migration can either be initiated by the process synchronously or by an explicit request from another process of the same user (or the super-user). Manual process migration can be useful to implement a particular policy or to test different scheduling algorithms. We note that the super-user has additional privileges regarding the PPM, such as defining general policies, as well as which nodes are available for migration.

Each process has a Unique Home-Node (UHN) where it was created. Normally this is the node to which the user has logged in. In PVM this is the node where the task was spawned by the PVM daemon. The system image model of Mosix is a CC, in which every process seems to run at its UHN, and all the processes of a user's session share the execution environment of the UHN. Processes that migrate to other (remote) nodes use local (in the remote node) resources whenever possible, but interact with the user's environment through the UHN. For example, assume that a user launches several processes, some of which migrate away from the UHN. If the user executes "ps", it will report the status of all the processes, including processes that are executing on remote nodes. If one of the migrated processes reads the current time, i.e. invokes gettimeofday(), it will get the current time at the UHN.

The PPM is the main tool for the resource management algorithms. As long as the requirements for resources, such as the CPU or main memory, are below certain threshold levels, the user's processes are confined to the UHN. When the requirements for resources exceed these threshold levels, some processes may be migrated to other nodes, to take advantage of available remote resources. The overall goal is to maximize the performance by efficient utilization of the network-wide resources. The granularity of the work distribution in Mosix is the process. Users can run parallel applications by initiating multiple processes in one node, then allow the system to assign these processes to the best available nodes at that time. If during the execution of the processes new resources become available, then the resource sharing algorithms are designed to utilize these new resources by possible reassignment of the processes among the nodes. The ability to assign and reassign processes is particularly important for "ease-of-use" and for providing an efficient multi-user, time-sharing execution environment.

Mosix has no central control or master-slave relationship between nodes: each node can operate as an autonomous system, and it makes all its control decisions independently. This design allows a dynamic configuration, where nodes may join or leave the network with minimal disruptions. Algorithms for scalability ensure that the system runs as well on large configurations as it does on small ones. Scalability is achieved by incorporating randomness in the system control algorithms, where each node bases its decisions on partial knowledge about the state of the other nodes, and does not even attempt to determine the overall state of the cluster or any particular node. For example, in the probabilistic information dissemination algorithm [4], each node sends, at regular intervals, information about its available resources to a randomly chosen subset of other nodes.
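A minimal sketch of this probabilistic gossip scheme may make the mechanism concrete; the window size, fan-out and message contents below are illustrative assumptions, not the actual Mosix data structures:

```python
import random

WINDOW_SIZE = 8     # assumed size of the per-node information window
GOSSIP_FANOUT = 2   # assumed number of randomly chosen target nodes

class Node:
    """Toy model of one cluster node's gossip behaviour."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.load = 0.0
        self.free_mem = 0
        # Most recently arrived (node_id, load, free_mem) tuples.
        self.window = []

    def local_state(self):
        return (self.node_id, self.load, self.free_mem)

    def receive(self, state):
        # Keep only the most recent WINDOW_SIZE entries.
        self.window.append(state)
        self.window = self.window[-WINDOW_SIZE:]

    def gossip(self, all_nodes):
        # At regular intervals, send local resource info to a random subset.
        targets = random.sample([n for n in all_nodes if n is not self],
                                GOSSIP_FANOUT)
        for t in targets:
            t.receive(self.local_state())

# One gossip round over a small cluster:
cluster = [Node(i) for i in range(6)]
for node in cluster:
    node.gossip(cluster)
```

Because each node talks only to a fixed-size random subset per round, the per-node cost of this scheme is independent of the cluster size, which is what makes it scalable.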
Alongside this dissemination, each node maintains a small "window" with the most recently arrived information. This scheme supports scaling, uniform information dissemination and dynamic configurations.

2.2 The resource sharing algorithms
The main resource sharing algorithms of Mosix are the load-balancing and the memory-ushering algorithms. The dynamic load-balancing algorithm continuously attempts to reduce the load differences between pairs of nodes, by migrating processes from higher loaded to less loaded nodes. This scheme is decentralized – all the nodes execute the same algorithms, and the reduction of the load differences is performed independently by pairs of nodes. The number of processors at each node and their speed are important factors for the load-balancing algorithm. This algorithm responds to changes in the loads of the nodes or the runtime characteristics of the processes. It prevails as long as there is no extreme shortage of other resources, e.g., free memory or empty process slots. The memory-ushering (depletion prevention) algorithm is geared to place the maximal number of processes in the cluster-wide RAM, to avoid as much as possible thrashing or the swapping out of processes [2]. The algorithm is triggered when a node starts excessive paging due to a shortage of free memory. In this case the algorithm overrides the load-balancing algorithm and attempts to migrate a process to a node which has sufficient free memory, even if this migration would result in an uneven load distribution.

3 Process migration
Mosix supports preemptive (completely transparent) process migration (PPM). After a migration, a process continues to interact with its environment regardless of its location. To implement the PPM, the migrating process is divided into two contexts: the user context, which can be migrated, and the system context, which is UHN-dependent and may not be migrated. The user context, called the remote, contains the program code, stack, data, memory-maps and registers of the process. The remote encapsulates the process when it is running in the user level. The system context, called the deputy, contains a description of the resources which the process is attached to, and a kernel-stack for the execution of system code on behalf of the process. The deputy encapsulates the process when it is running in the kernel. It holds the site-dependent part of the system context of the process, hence it must remain in the UHN of the process. While the process can migrate many times between different nodes, the deputy is never migrated. The interface between the user context and the system context is well defined. Therefore it is possible to intercept every interaction between these contexts, and forward this interaction across the network. This is implemented at the link layer, with a special communication channel for interaction. Figure 1 shows two processes that share a UHN. In the figure, the left process is a regular Linux process while the right process is split, with its *remote* part migrated to another node. The migration time has a fixed component, for establishing a new process frame in the new (remote) site, and a linear component, proportional to the number of memory pages to be transferred. To minimize the migration overhead, only the page tables and the process' dirty pages are transferred. In the execution of a process in Mosix, location transparency is achieved by forwarding site-dependent system calls to the *deputy* at the UHN.
System calls are a synchronous form of interaction between the two process contexts. All system calls that are executed by the process are intercepted by the remote site's link layer. If the system call is site-independent, it is executed by the *remote* locally (at the remote site). Otherwise, the system call is forwarded to the *deputy*, which executes the system call on behalf of the process in the UHN. The *deputy* returns the result(s) back to the remote site, which then continues to execute the user's code. Other forms of interaction between the two process contexts are signal delivery and process wakeup events, e.g. when network data arrives. These events require that the *deputy* asynchronously locate and interact with the *remote*. This location requirement is met by the communication channel between them. In a typical scenario, the kernel at the UHN informs the *deputy* of the event. The *deputy* checks whether any action needs to be taken, and if so, informs the *remote*. The *remote* monitors the communication channel for reports of asynchronous events, e.g., signals, just before resuming user-level execution. We note that this approach is robust, and is not affected even by major modifications of the kernel. It relies on almost no machine-dependent features of the kernel, and thus does not hinder porting to different architectures.

One drawback of the *deputy* approach is the extra overhead in the execution of system calls. Additional overhead is incurred on file and network access operations. For example, all network links (sockets) are created in the UHN, thus imposing communication overhead if the processes migrate away from the UHN. To overcome this problem we are developing "migratable sockets", which will move with the process, and thus allow a direct link between migrated processes. Currently, this overhead can be significantly reduced by an initial distribution of communicating processes to different nodes, e.g. using PVM/MPI. Should the system become imbalanced, the Mosix algorithms will reassign the processes to improve the performance [3].

4 The implementation
The porting of Mosix to Linux started with a feasibility study. We also developed an interactive kernel debugger, a prerequisite for any project of this scope. The debugger is invoked either by a user request, or when the kernel crashes. It allows the developer to examine kernel memory, processes, stack contents, etc. It also allows the developer to trace system calls and processes from within the kernel, and even to insert breakpoints in the kernel code. In the main part of the project, we implemented the code to support the transparent operation of split processes, with the user context running on a remote node, supported by the deputy, which runs in the UHN. At the same time, we wrote the communication layer that connects the two process contexts and designed their interaction protocol. The link between the two contexts was implemented on top of a simple, but exclusive, TCP/IP connection. After that, we implemented the process migration mechanism, including migration away from the UHN, back to the UHN and between two remote sites. Then, the information dissemination module was ported, enabling the exchange of status information among the nodes. Using this facility, the algorithms for process assessment and automatic migration were also ported. Finally, we designed and implemented the Mosix application programming interface (API) via the /proc file system.
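Before the detailed mechanisms, a hedged user-space sketch of how a /proc-based control interface of this kind might be driven; the per-process entry names used here ("where", "migrate") are hypothetical placeholders, since the paper lists the available calls only generically (see Sect. 4.4):

```python
import os

def where_is(pid):
    """Ask the (hypothetical) /proc API on which node a process runs."""
    with open(f"/proc/mosix/{pid}/where") as f:   # hypothetical path
        return int(f.read())

def migrate(pid, target_node):
    """Request a synchronous migration of pid to target_node."""
    with open(f"/proc/mosix/{pid}/migrate", "w") as f:  # hypothetical path
        f.write(str(target_node))

if __name__ == "__main__":
    pid = os.getpid()
    migrate(pid, 3)                 # ask to move this process to node 3
    print("running on node", where_is(pid))
```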
4.1 Deputy / Remote mechanisms
The deputy is the representative of the remote process at the UHN. Since the entire user-space memory resides at the remote node, the deputy does not hold a memory map of its own. Instead, it shares the main kernel map, similarly to a kernel thread. In many kernel activities, such as the execution of system calls, it is necessary to transfer data between the user space and the kernel. This is normally done by the copy_to_user(), copy_from_user() kernel primitives. In Mosix, any kernel memory operation that involves access to user space requires the deputy to communicate with its remote to transfer the necessary data. The overhead of the communication due to remote copy operations, which may be repeated several times within a single system call, could be quite substantial, mainly due to the network latency. In order to eliminate excessive remote copies, which are very common, we implemented a special cache that reduces the number of required interactions by prefetching as much data as possible during the initial system call request, while buffering partial data at the deputy to be returned to the remote at the end of the system call. To prevent the deletion or overriding of memory-mapped files (for demand-paging) in the absence of a memory map, the deputy holds a special table of such files that are mapped to the remote memory. The user registers of migrated processes are normally under the responsibility of the remote context. However, each register, or combination of registers, may become temporarily owned for manipulation by the deputy.

Remote (guest) processes are not accessible to the other processes that run at the same node (whether local or originating from other nodes) – and vice versa. They do not belong to any particular user (on the remote node, where they run), nor can they be sent signals or otherwise manipulated by local processes. Their memory cannot be accessed and they can only be forced, by the local system administrator, to migrate out. A process may need to perform some Mosix functions while logically stopped or sleeping. Such processes would run Mosix functions "in their sleep", then resume sleeping, unless the event they were waiting for has meanwhile occurred. An example is process migration, possibly done while the process is sleeping. For this purpose, Mosix maintains a logical state, describing how other processes should see the process, as opposed to its immediate state.

4.2 Migration constraints
Certain functions of the Linux kernel are not compatible with process context division. Some obvious examples are direct manipulations of I/O devices, e.g., direct access to privileged bus-I/O instructions, or direct access to device memory. Other examples include writable shared memory and real-time scheduling. The last case is not allowed because its timing guarantees cannot be preserved across a migration, and honouring them would be unfair towards processes of other nodes. A process that uses any of the above is automatically confined to its UHN. If the process has already been migrated, it is first migrated back to the UHN.

4.3 Information collection
Statistics about a process' behavior are collected regularly, such as at every system call and every time the process accesses user data. This information is used to assess whether the process should be migrated from the UHN. These statistics decay in time, to adjust for processes that change their execution profile.
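The paper does not give the decay law; a minimal sketch using an exponentially decayed moving average, a common choice assumed here purely for illustration:

```python
DECAY = 0.9  # assumed decay factor per sampling interval

class ProcessStats:
    """Toy exponentially decaying profile of a process (assumed scheme)."""
    def __init__(self):
        self.cpu_use = 0.0
        self.io_rate = 0.0

    def sample(self, cpu, io):
        # Recent behaviour dominates; old samples fade geometrically.
        self.cpu_use = DECAY * self.cpu_use + (1.0 - DECAY) * cpu
        self.io_rate = DECAY * self.io_rate + (1.0 - DECAY) * io

    def clear(self):
        # e.g. on execve(), when the process is likely to change its nature
        self.cpu_use = 0.0
        self.io_rate = 0.0
```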
The statistics are also cleared completely on the execve() system call, since the process is likely to change its nature. Each process has some control over the collection and decay of its statistics. For instance, a process may complete a stage knowing that its characteristics are about to change, or it may cyclically alternate between a combination of computation and I/O.

4.4 The Mosix API
The Mosix API has traditionally been implemented via a set of reserved system calls that were used to configure, query and operate Mosix. In line with the Linux convention, we modified the API to be interfaced via the /proc file system. This also prevents possible binary incompatibilities of user programs between different Linux versions. The API was implemented by extending the Linux /proc file system tree with a new directory, /proc/mosix. The calls to Mosix via /proc include: synchronous and asynchronous migration requests; locking a process against automatic migrations; finding where the process currently runs; finding out about migration constraints; system setup and administration; controlling statistics collection and decay; information about available resources on all configured nodes; and information about remote processes.

5 Conclusions
Mosix brings the new dimension of scaling to cluster computing with Linux. It allows the construction of a high-performance, scalable CC from commodity components, where scaling does not introduce any performance overhead. The main advantage of Mosix over other CC systems is its ability to respond at run-time to unpredictable and irregular resource requirements by many users. The most noticeable properties of executing applications on Mosix are its adaptive resource distribution policy and the symmetry and flexibility of its configuration. The combined effect of these properties implies that users do not have to know the current state of the resource usage of the various nodes, or even their number. Parallel applications can be executed by allowing Mosix to assign and reassign the processes to the best possible nodes, almost like an SMP.

The Mosix R&D project is expanding in several directions. We have already completed the design of *migratable sockets*, which will reduce the inter-process communication overhead. A similar optimization is *migratable temporary files*, which will allow a *remote* process, e.g., a compiler, to create temporary files in the remote node. The general concept of these optimizations is to migrate more resources with the process, to reduce the remote access overhead. In another project, we are developing new competitive algorithms for adaptive resource management that can handle different kinds of resources, e.g., CPU, memory, IPC and I/O [1]. We are also researching algorithms for network RAM, in which a large process can utilize available memory in several nodes. The idea is to spread the process's data among many nodes, and to migrate the (usually small) process to the data rather than bring the data to the process. In the future, we consider extending Mosix to other platforms, e.g., DEC's Alpha or SUN's Sparc. Details about the current state of Mosix are available at http://www.mosix.cs.huji.ac.il.

**References**
[1] Y. Amir, B. Averbuch, A. Barak, R.S. Borgstrom, and A. Keren. An Opportunity Cost Approach for Job Assignment and Reassignment in a Scalable Computing Cluster. In *Proc. PDCS '98*, Oct. 1998.
[2] A. Barak and A. Braverman. Memory Ushering in a Scalable Computing Cluster. *Journal of Microprocessors and Microsystems*, 22(3-4), Aug. 1998.
[3] A. Barak, A. Braverman, I. Gilderman, and O. La'adan. Performance of PVM with the MOSIX Preemptive Process Migration. In *Proc. Seventh Israeli Conf. on Computer Systems and Software Engineering*, pages 38–45, June 1996.
[4] A. Barak, S. Guday, and R.G. Wheeler. The MOSIX Distributed Operating System, Load Balancing for UNIX. In *Lecture Notes in Computer Science, Vol. 672*. Springer-Verlag, 1993.
[5] A. Barak and O. La'adan. The MOSIX Multicomputer Operating System for High Performance Cluster Computing. *Journal of Future Generation Computer Systems*, 13(4-5):361–372, March 1998.
[6] N.J. Boden, D. Cohen, R.E. Felderman, A.K. Kulawik, C.L. Seitz, J.N. Seizovic, and W-K. Su. Myrinet: A Gigabit-per-Second Local Area Network. *IEEE Micro*, 15(1):29–36, Feb. 1995.
[7] Platform Computing Corp. *LSF Suite 3.2*. 1998.
[8] A. Geist, A. Beguelin, J. Dongarra, W. Jiang, R. Manchek, and V. Sunderam. *PVM - Parallel Virtual Machine*. MIT Press, Cambridge, MA, 1994.
[9] W. Gropp, E. Lusk, and A. Skjellum. *Using MPI*. MIT Press, Cambridge, MA, 1994.
[10] Red Hat. *Extreme Linux*. 1998.
Ray-tracing 3D dust radiative transfer with DART-Ray: code upgrade and public release

Giovanni Natale\textsuperscript{1}, Cristina C. Popescu\textsuperscript{1,2,3}, Richard J. Tuffs\textsuperscript{3}, Adam J. Clarke\textsuperscript{1}, Victor P. Debattista\textsuperscript{1}, Jörg Fischera\textsuperscript{3}, Stefano Pasetto\textsuperscript{4}, Mark Rushton\textsuperscript{2}, and Jordan J. Thirwall\textsuperscript{1}

\textsuperscript{1} University of Central Lancashire, Jeremiah Horrocks Institute, Preston, PR1 2HE, UK e-mail: firstname.lastname@example.org
\textsuperscript{2} The Astronomical Institute of the Romanian Academy, Str. Cutitul de Argint 5, Bucharest 052034, Romania
\textsuperscript{3} Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany
\textsuperscript{4} The Carnegie Observatories – Carnegie Institution for Science, 813 Santa Barbara St, Pasadena, CA 91101, USA

Received 10 August 2017 / Accepted 7 September 2017

ABSTRACT
We present an extensively updated version of the purely ray-tracing 3D dust radiation transfer code DART-Ray. The new version includes five major upgrades: 1) a series of optimizations for the ray angular density and the scattered radiation source function; 2) the implementation of several data and task parallelizations using hybrid MPI+OpenMP schemes; 3) the inclusion of dust self-heating; 4) the ability to produce surface brightness maps for observers within the models in HEALPix format; 5) the possibility to set the expected numerical accuracy already at the start of the calculation. We tested the updated code with benchmark models where the dust self-heating is not negligible. Furthermore, we performed a study of the extent of the source influence volumes, using galaxy models, which are critical in determining the efficiency of the DART-Ray algorithm. The new code is publicly available, documented for both users and developers, and accompanied by several programmes to create input grids for different model geometries and to import the results of $N$-body and SPH simulations. These programmes can be easily adapted to different input geometries, and for different dust models or stellar emission libraries.

Key words. radiative transfer – scattering – methods: numerical – dust, extinction – infrared: ISM

1. Introduction
The modelling of observations of astrophysical objects in the wavelength range from the UV to the submm is a challenging task. For a vast variety of scales, from proto-planetary systems to galaxies, the emission in this wavelength range is dominated either by primary sources of radiation (e.g. stars or active galactic nuclei, AGN), predominant in the UV and optical, or by the re-emission of absorbed photons by interstellar dust, predominant in the mid- and far-infrared. Usually, the near-infrared range is a region of transition between the two kinds of emission. The observed emission produced by the primary sources and the dust is mutually affected. On the one hand, the dust dims and scatters the light generated by the stars or AGNs. On the other hand, the light from the primary sources heats the dust, determining its temperature and thus its emission spectrum. Furthermore, although most astrophysical objects are optically thin at long infrared wavelengths, the dust emission produced at one location can also be absorbed and scattered by the dust located elsewhere, a process often referred to as dust "self-heating".
Performing dust radiative transfer (RT) calculations is the essential step to reproduce the observations in a (as much as possible) self-consistent way. The problem is computationally challenging because of its non-locality (in the spatial, angular and wavelength dimensions) and non-linearity (e.g. the dust emission spectra depend non-linearly on the absorbed luminosity). Furthermore, the presence of six independent variables (three spatial coordinates, two angular coordinates and the wavelength) makes it very challenging to handle the large memory required if the 3D dust radiation transfer equation has to be solved directly (see Steinacker et al. 2013, for a recent review). For all the above reasons, the vast majority of the dust radiative transfer codes adopt a Monte-Carlo (MC) approach (e.g. see lists of codes in Steinacker et al. 2013; Pascucci et al. 2004; Pinte et al. 2009; and Gordon et al. 2017, hereafter G17), which is an elegant, flexible way to follow the propagation of light within dusty objects, and it heavily reduces the memory requirements since there is no need to store the intensity as a function of the angular direction at each spatial position. The basic Monte-Carlo approach is not efficient in producing images at specific lines of sight, but this problem can be solved by combining it with a ray-tracing procedure called peel-off (Yusef-Zadeh et al. 1984). Other acceleration techniques to maximize the use of photon particles are available (Steinacker et al. 2013).

Despite all the known technical difficulties, in the last few years we have been developing a 3D dust RT code that is purely ray-tracing, that is, it does not make any use of MC techniques but is simply based on the calculation of the radiation intensity variation along numerous directions chosen deterministically by the code algorithm. The code, named DART-Ray, and its basic algorithm have been introduced in Natale et al. (2014, hereafter NA14). As stated in that article, the main motivation to develop this code is having a specific tool for the calculation of the radiation field energy density (RFED) throughout any region of the RT model under consideration. To this goal, our group already made extensive use of a ray-tracing RT code based on the scattering intensity approximation of Kylafis & Bahcall (1987; see e.g. Popescu et al. 2000, 2011). However, this code can be applied only to axisymmetric galaxy models, while DART-Ray is able to handle any geometry. RFEDs are obviously also calculated by MC codes, but the main focus of MC codes is the production of surface brightness images, which might or might not require the RFED to be accurately calculated at all positions and all wavelengths. In particular, our focus on the RFED is due to the importance of this quantity in other fields of astrophysics, such as high-energy astrophysics, where it is necessary to calculate the radiation due to inverse-Compton scattering of cosmic rays interacting with the interstellar radiation field produced by stellar and dust emission. Furthermore, the DART-Ray algorithm is not a brute-force ray-tracing algorithm. It takes advantage of a so far little-studied property of the radiation sources within RT models, namely that these sources often do not contribute significantly to the RFED everywhere, but only within a fraction of the model volume, called the source influence volume.
Although DART-Ray is essentially a cell-to-cell radiation transfer algorithm, its gain in efficiency with respect to a brute-force algorithm comes from its method of estimating the extent of the source influence volumes and performing radiation transfer calculations only within them. The extent of these volumes in astrophysical objects, and the possible advantages that can be exploited in radiation transfer codes, have never been clarified. Intuitively, in dusty objects the extent of this volume could be quite reduced relative to the size of the models, especially for the scattered light sources, which are low-intensity sources compared to the sources actually producing radiation, such as stars and dust thermal emission. DART-Ray allows one to examine the extent of the source influence volumes and thus verify when they are small relative to the entire model size. Finally, handling 3D dust radiative transfer in a manner different to those of the widely used MC techniques provides a useful test for the reliability of scientific results obtained by MC codes. In principle, an agreement between two or more MC codes could be due to the adoption of the same numerical method, but this interpretation can be discarded when a different RT solver obtains the same result. This kind of comparison is already underway with the TRUST benchmark project, in which DART-Ray is participating (see G17).

The code presented in NA14 was a good first step in the development of a mature code, but there was some scope for improvement to ameliorate several limitations: firstly, it could only be executed one wavelength at a time, neglecting dust self-heating, which requires multi-wavelength runs; secondly, parallelization was implemented only for shared-memory machines; thirdly, the inaccuracy from the blocking of the rays could not be set at the start of the calculation and could only be measured by re-running the model with a different value for a threshold parameter ($f_{\text{th}}$, see NA14 or below); fourthly, stochastically heated dust emission was excluded from the calculation (this was added in Natale et al. 2015). Furthermore, a substantial reduction in the execution time could be achieved through the implementation of a more efficient algorithm for the optimization of the ray angular density.

In this paper, we present a new version of the code (hereafter DART-Ray V2), which is a substantial improvement on the one presented previously. The new code is publicly available and documented for both users and developers\footnote{\url{https://github.com/gnatale/DART-Ray}}. As well as addressing the issues highlighted above, we have added new features, such as the ability to create "internal observer" maps viewed from within the RT models in HEALPix format (which can be used for Milky Way studies) and at arbitrary lines of sight, without repeating the radiation transfer calculation. This latter feature can be used, for example, to create animations for the presentation of the results. The structure of the paper is the following. In Sect. 2 we briefly summarize the radiation transfer algorithm used in DART-Ray. In Sect. 3 we describe the numerous updates to the code. In Sect. 4 we show the comparison of the code with benchmark solutions including dust self-heating. In Sect. 5 we present a study of the extent of the source influence volumes for radiation sources within different galaxy models. In Sect. 6 we discuss the advantages and disadvantages of the DART-Ray algorithm and in Sect. 7 its possible further improvements.
2. The DART-Ray dust radiation transfer algorithm
The general strategy of the RT algorithm of DART-Ray V2 is the same as the one presented in NA14. However, there are many differences regarding the implementation and the newly added capabilities (see Sect. 3). Here we give a brief summary of the RT algorithm, highlighting the main steps. We encourage users of the code and readers interested in more specific details to read the user guide and the code documentation on the code webpage, as well as Sects. 2 and 3 of NA14, for further clarifications on specific points.

In DART-Ray, an RT model is subdivided into an adaptive 3D Cartesian grid of cells, each with a given input value of stellar volume emissivity $j_\lambda(\mathbf{r})$ (luminosity per unit volume, per unit frequency and per unit solid angle) and dust optical depth per unit length $k_\lambda \rho_d(\mathbf{r})$ (with $k_\lambda$ the extinction coefficient and $\rho_d$ the dust density). The albedo $\omega_\lambda$ and the anisotropy parameter $g_\lambda$ of the Henyey-Greenstein phase function are determined by the assumed dust model. Given these input quantities, the code calculates:
- the RFED $U_\lambda(\mathbf{r})$ for each cell;
- the scattered luminosity source function $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$, which contains the scattered radiation luminosity per unit volume and per unit solid angle for each dusty cell. In general, the scattered luminosity is not isotropically distributed and thus depends on the angular direction $(\theta,\phi)$;
- the dust emission source function $j_{\lambda,d}(\mathbf{r})$, which contains the luminosity per unit volume and per unit solid angle produced in each cell containing dust;
- the specific intensity $I_{\lambda,\text{obs}}(\mathbf{r},\theta,\phi)$ of the radiation produced by each cell/point source and reaching the observer, located either far away or within the RT model. It is derived from the source terms $j_\lambda(\mathbf{r})$, $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$ and $j_{\lambda,d}(\mathbf{r})$, and the optical depth between the cell/point source and the observer. It can be used to calculate surface brightness maps at the position of the observer.

The code performs first the RT calculation for the stellar emission and subsequently that for the dust emission (the latter added in this code version, see Sect. 3.3). In both cases, the RT algorithm is subdivided into three steps: 1) the determination of a lower limit $U_{\lambda,\text{LL}}(\mathbf{r})$ to the RFED distribution $U_\lambda(\mathbf{r})$; 2) the processing of radiation coming directly from radiation sources; 3) the processing of radiation scattered by dust. In all these steps, the DART-Ray algorithm considers one radiation source at a time (that is, either an "emitting cell" whose stellar or dust volume emissivity is not zero, or a point source). For each source, it calculates the contributions of the radiation emitted by the source to the RFED within a certain volume surrounding it. In steps 2 and 3 the contributions to $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$ for each cell of this volume are also calculated. The value of these contributions is derived after each ray-cell intersection (see Sect. 3.2 in NA14). In the new code version, the ray tracing from a radiation source within the surrounding volume involves a ray angular density optimization procedure, described in Sect. 3.1.1.
During step 1, the volume considered around each radiation source has a fixed extent chosen by the user (typically 10-20% of the entire model size). In this way, a lower limit $U_{\lambda,\text{LL}}(\mathbf{r})$ of the RFED distribution is derived, because the contributions from the regions beyond these volumes are not taken into account. In step 2, the ray-tracing calculation is performed once again from the beginning, but this time the rays originating from the radiation sources are blocked if the ray contribution $\delta U_\lambda$ to the local RFED is "negligible" at all wavelengths, that is, when
\[ \delta U_\lambda(\mathbf{r}) < f_U\, U_{\lambda,\text{LL}}(\mathbf{r}), \]
where $f_U$ is a threshold parameter chosen indirectly by the user depending on the desired numerical accuracy (see Sect. 3.5). Finally, during step 3 the scattered radiation stored within the dusty cells is processed. In contrast to step 2, the scattered radiation emitted by the dusty cells is typically direction-dependent, since the assumed scattering phase function (Henyey-Greenstein) is in general not isotropic\(^2\). Nonetheless, apart from a few technical differences, the calculation during this step proceeds essentially as in step 2\(^3\). Since scattered radiation can be scattered multiple times, several scattering iterations are needed. These iterations are stopped when the remaining scattered radiation luminosity waiting to be processed is only a small fraction $f_L$ of the total scattered luminosity of the model as found at the end of step 2.

The code performs firstly the radiation transfer calculations only for the stellar emission. Then, it starts the calculation for the dust emission. The dust emission spectra produced by each dusty cell can be derived from the luminosity absorbed by the dust (which depends on $U_\lambda(\mathbf{r})$, the dust density and the dust opacity coefficients). The radiation emitted by dust undergoes the same type of propagation as the stellar emission, with the difference that the extra radiation absorbed in this process affects the dust temperature and thus its emission. Therefore, since dust emission and absorption are coupled, multiple iterations of the entire radiation transfer procedure described in this section are performed until the dust emission spectra have converged at all positions (see Sect. 3.3). Once $j_\lambda(\mathbf{r})$, $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$ and $j_{\lambda,d}(\mathbf{r})$ are known, one can calculate the specific intensity $I_{\lambda,0}$ of the radiation departing from each source along any angular direction (see Eq. (7) of NA14). The specific brightness of the radiation arriving at the observer is then simply $I_{\lambda,\text{obs}} = I_{\lambda,0}\, e^{-\tau_\lambda}$, with $\tau_\lambda$ the optical depth between the source and the observer position. The code calculates $I_{\lambda,\text{obs}}$ for all cells and point sources and then uses volume rendering techniques to produce surface brightness maps. We note that, if the source functions are saved, $I_{\lambda,\text{obs}}(\mathbf{r},\theta,\phi)$ can be calculated for arbitrary observer positions without repeating the entire RT calculation.

---
\(^2\) Scattering is particularly anisotropic in the UV and optical wavelength regimes, while it is almost isotropic in the infrared (Draine 2003).
\(^3\) One important difference is that the value of $U_{\lambda,\text{LL}}(\mathbf{r})$ is updated firstly with the RFED distribution found at the end of step 2 and then with that found at the end of each scattering iteration.
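A schematic sketch of this three-step strategy in Python (not the actual DART-Ray implementation): the toy 1D geometry, extinction law and threshold value below are assumptions that stand in for the real ray-cell intersection machinery:

```python
import math

F_U = 1e-3  # ray-blocking threshold f_U (illustrative value)

def contributions(source, cells, u_floor=None, r_max=None):
    """Stand-in for the ray tracing from one source: yields (cell_id, dU).

    A crude 1/r^2 dilution with exponential extinction replaces the real
    ray-cell intersection machinery (an assumption for illustration only).
    """
    for cell in cells:
        r = max(abs(cell["x"] - source["x"]), 1e-3)
        if r_max is not None and r > r_max:
            continue  # step 1: restricted neighbourhood around the source
        dU = source["L"] * math.exp(-cell["k"] * r) / (4 * math.pi * r ** 2)
        if u_floor is not None and dU < F_U * u_floor[cell["id"]]:
            continue  # ray blocked: contribution negligible (Eq. (1))
        yield cell["id"], dU

cells = [{"id": i, "x": float(i), "k": 0.1} for i in range(20)]
sources = [{"x": 0.0, "L": 1.0}, {"x": 10.0, "L": 0.5}]

# Step 1: lower limit U_LL from a fixed neighbourhood of each source.
u_ll = {c["id"]: 0.0 for c in cells}
for s in sources:
    for cid, dU in contributions(s, cells, r_max=4.0):
        u_ll[cid] += dU

# Step 2: full pass over direct radiation with the blocking criterion.
u = {c["id"]: 0.0 for c in cells}
for s in sources:
    for cid, dU in contributions(s, cells, u_floor=u_ll):
        u[cid] += dU
# Step 3 (not sketched) would process the scattered light analogously.
```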
3. Update descriptions

3.1. Optimizations
Compared to NA14, we implemented two main changes affecting the code speed and memory requirements. These are an improved algorithm for the optimization of the ray angular density and the implementation of a wavelength-dependent angular resolution for the scattering source function $j_{\lambda,\text{sca}}$.

3.1.1. Ray angular density optimization
DART-Ray performs ray-tracing operations from each radiation source (either an emitting cell or a point source) throughout a 3D Cartesian adaptive grid. When a source contributes significantly to the RFED within a grid cell, it is necessary that at least several rays originating from the source intersect the cell, in order to achieve good numerical accuracy for the RFED and the source functions $j_{\lambda,\text{sca}}$ and $j_{\lambda,d}$ at that grid position. In this way, one also avoids missing cells at similar distances. Unfortunately, the extent of the source influence volume cannot be known in advance. Therefore, it is not possible to set a sufficiently high ray angular density (the number of rays per unit solid angle) right at the beginning of the ray propagation, so that all the cells within the source influence volume are properly intersected. Instead, the ray angular density has to be derived iteratively while the rays propagate through the model. The optimal ray angular density is also not necessarily uniform over the entire solid angle, as seen from the radiation source, but it can well be direction-dependent. Furthermore, the variable cell size of the adaptive 3D grid of emissivity and opacity can make the optimal ray angular density vary with the distance from the radiation source.

The directions along which the rays are cast are those defined by the lines passing through the source position and the centres of the spherical pixels of a concentric sphere, subdivided according to the HEALPix sphere pixelation scheme (Górski et al. 2005). The advantage of using HEALPix is that the angular resolution of the sphere pixelation (organized as a quad-tree) can be varied easily, and there are fast routines available for spherical pixel searches (e.g. to obtain spherical angles from the pixel number and vice versa). The basic idea is to start the RT calculation using a low initial HEALPix resolution. While a ray is propagating throughout the model, the code can vary the ray angular density by moving from one HEALPix resolution level to the immediately higher or lower one, as described below. Because of the quad-tree structure of HEALPix, any change in HEALPix resolution corresponds to a factor of 4 variation in the ray angular density. Specifically, the ray angular density has to be increased when the following two conditions are met:
1. the ray beam size is larger than the maximum allowed size, that is:
\[ \Omega_{\text{HP,EM}} > \frac{\Omega_{\text{INT}}}{N_{\text{rays}}}, \]
where $\Omega_{\text{HP,EM}}$ is the solid angle of the beam associated with a ray, $\Omega_{\text{INT}}$ is the solid angle subtended by the last intersected cell and $N_{\text{rays}}$ is the minimum number of rays that has to intersect a cell (input-defined);
2. the ray has either not yet reached the boundary of the user-defined region (during the calculation of $U_{\lambda,\text{LL}}(\mathbf{r})$), or it still contributes significantly to the RFED of the last intersected cell (during the processing of direct and scattered radiation).
Conversely, rays can be merged when the ray angular density is too high.
That is, when:
\[ \Omega_{\text{HP,EM}} < \frac{\Omega_{\text{INT}}}{N_{\text{rays}}^{\text{max}}}, \]
where $N_{\text{rays}}^{\text{max}}$ is the user-defined maximum number of rays allowed to cross a cell.

The previous version of DART-Ray already contained an algorithm for the ray angular density optimization, but it had the problem that many contributions to the RFED of cells already crossed by rays had to be recalculated several times (see NA14 for details). In DART-Ray V2 we instead implemented an optimization strategy in which rays can be split and merged along the path they are following, and which avoids repeating the ray-tracing calculations for cells already intersected by a sufficient number of rays. The method we implemented is similar to the algorithm of Abel & Wandelt (2002), but with several technical differences. This method is described by the flowchart in Fig. 1. At the beginning, the code selects a ray and follows its propagation through the RT model. At each ray-cell intersection, it checks whether the ray beam satisfies any of the conditions expressed in Eqs. (2) or (3). If not, it adds the ray contributions to the RFED and to the scattering source function. Unless the ray has already reached the model border, the ray propagation continues to the next cell intersection. If the ray beam is found too large after any of these intersections (that is, it satisfies Eq. (2)), the code checks whether the ray still carries a significant contribution to the RFED (see Eq. (1)). If so, it adds the current ray to the "high" list, the list of rays to be split. Otherwise, the further propagation of the ray is ignored. Instead, if the ray beam is found too small (according to Eq. (3)), the ray is added to the "low" list, the list of rays that can potentially be merged. Once all rays within a HEALPix sector have been processed at the current HEALPix angular resolution, DART-Ray checks whether there are rays in the high ray list. If so, it proceeds with the ray tracing at the immediately higher HEALPix resolution. That is, for each ray in the high ray list, four child rays are generated with directions corresponding to the HEALPix directions within the spherical pixel associated with the parent ray. The ray-tracing calculation for these child rays starts directly at the distance $d_{\text{ray}}$ from the source already crossed by the parent ray. After all rays in the high ray list have been processed, the code looks for rays that can potentially be merged among those in the low ray list. In order to be merged, the directions of four rays in the list should be contained within the same HEALPix spherical pixel at the immediately lower angular resolution, and these four rays should have been blocked after crossing the same grid plane. If so, the code merges them into a single ray with specific intensity equal to the average intensity of the four merged rays. After that, the code starts the propagation of the newly created rays from the average distance crossed by the corresponding merged rays. The code proceeds with the propagation of all rays from the high and low lists iteratively until there are no more rays in either list.
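A minimal sketch of this split/merge bookkeeping, using the quad-tree property of HEALPix NESTED indexing (pixel p at level k has children 4p ... 4p+3 at level k+1); the ray physics, the Eq. (1) test and the same-grid-plane check are elided here:

```python
from collections import defaultdict

def split(ray):
    """Replace a parent ray by four children at the next HEALPix level."""
    level, pix, d, inten = ray
    # NESTED indexing: children of pixel p are 4p .. 4p+3.
    return [(level + 1, 4 * pix + i, d, inten) for i in range(4)]

def try_merge(low_list):
    """Merge quadruples of sibling rays from the 'low' list."""
    groups = defaultdict(list)
    for ray in low_list:
        level, pix, d, inten = ray
        groups[(level, pix // 4)].append(ray)
    merged, leftover = [], []
    for (level, parent), rays in groups.items():
        if len(rays) == 4:  # all four siblings present
            d_avg = sum(r[2] for r in rays) / 4.0
            i_avg = sum(r[3] for r in rays) / 4.0
            merged.append((level - 1, parent, d_avg, i_avg))
        else:
            leftover.extend(rays)
    return merged, leftover

high = [(2, 7, 1.5, 0.9)]           # one ray whose beam became too large
children = split(high.pop())         # four child rays, same distance
merged, rest = try_merge(children)   # siblings, so they merge back into one
```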
3.1.2. Wavelength-dependent angular resolution for the scattering source function
The scattering source function $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$ is the computed quantity requiring the most memory in the DART-Ray code, since it depends on six independent variables $(\lambda, \mathbf{r}, \theta, \phi)$. By sampling each dimension appropriately, its values can be stored in a big array with a size typically in the range 1-100 Gbytes. Furthermore, the algorithm needs one more array of the same size to store the scattering luminosity to be processed within each scattering iteration, separately from the total scattered luminosity stored in $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$. As big as it is, storing the scattering source function is still cheaper in terms of memory requirements than solving the radiative transfer equation directly for the specific intensity $I_\lambda(\mathbf{r},\theta,\phi)$. This is because $I_\lambda(\mathbf{r},\theta,\phi)$ can present unpredictably rapid angular variations at each spatial point $\mathbf{r}$, which are determined by the radiation sources and the dust distribution geometry as well as by the assumed scattering phase function $\Phi_\lambda(n, n')$ (dependent on the incoming light direction $n'$ and the scattering light direction $n$)\(^4\). The latter determines the angular redistribution of the scattered radiation after each ray-cell intersection. Instead, the rapidity of the angular variations of $j_{\lambda,\text{sca}}$ is determined only by the shape of $\Phi_\lambda$, typically modelled as a Henyey-Greenstein profile (Henyey & Greenstein 1941):
\[ \Phi_\lambda(n, n') = \frac{1 - g_\lambda^2}{4\pi[1 + g_\lambda^2 - 2g_\lambda \cos \theta]^{3/2}}, \]
with $\theta$ the angle between $n'$ and $n$, and the anisotropy parameter $g_\lambda$ determining the angular width of the $\Phi_\lambda(n, n')$ profile. For typical interstellar dust models, this profile is quite sharp at UV wavelengths, but it gradually becomes flatter going towards the NIR, and then almost completely flat in the FIR. Therefore, the number of angular points needed to sample $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$ properly has to be quite high at shorter wavelengths, while relatively few points are sufficient in the FIR, where scattering is essentially isotropic. This property of $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$ allows a significant reduction in the memory requirement if the storage of $I_\lambda(\mathbf{r},\theta,\phi)$ is not needed, as in DART-Ray. The sampling points for the angular directions of $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$ are those of a discretized HEALPix sphere with a total number of pixels equal to $N_{\text{pix}} = 12N_{\text{side}}^2$, with $N_{\text{side}} = 2^{k_{\text{HP}}}$ and $k_{\text{HP}}$ a positive integer value (see Górski et al. 2005). In DART-Ray we implemented the following formula to derive an appropriate $k_{\text{HP}}$ for the scattering source function at each wavelength:
\[ k_{\lambda,\text{HP}} = \frac{1}{2} \log_2 \left( \frac{4\pi}{12\,\theta_{\lambda,\text{min}}^2} \right), \]
with $\theta_{\lambda,\text{min}}$, the pixel angular size for the required minimum angular resolution, given by:
\[ \theta_{\lambda,\text{min}} = \frac{FWHM[\Phi_\lambda - \Phi_\lambda(\pi)]}{n_{FWHM}}, \]
with $FWHM[\Phi_\lambda - \Phi_\lambda(\pi)]$ the full width at half maximum of $\Phi_\lambda$ minus its "background" value at $\theta = \pi$ (in turn depending on $g_\lambda$), and $n_{FWHM}$ the minimum number of pixels within the $FWHM$\(^5\). We found that by choosing $n_{FWHM} = 5$ a good accuracy is reached for the benchmark models examined in Sect. 4. We note that the values of $k_{\lambda,\text{HP}}$ have to be integers, so the result of formula (5) is approximated to its integer part.
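As a worked example of Eqs. (4)-(6), the following sketch computes $k_{\lambda,\text{HP}}$ numerically from $g_\lambda$; the bisection search and the cap value are implementation choices of this illustration, not necessarily those of DART-Ray:

```python
import math

def hg(theta, g):
    """Henyey-Greenstein phase function, Eq. (4)."""
    return (1 - g * g) / (4 * math.pi *
                          (1 + g * g - 2 * g * math.cos(theta)) ** 1.5)

def k_hp(g, n_fwhm=5, k_max=3):
    """Angular resolution level from Eqs. (5)-(6); k_max caps the memory."""
    peak, back = hg(0.0, g), hg(math.pi, g)
    half = back + 0.5 * (peak - back)
    # Bisection for the angle where the background-subtracted profile
    # falls to half maximum (hg is monotonically decreasing in theta).
    lo, hi = 0.0, math.pi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if hg(mid, g) > half:
            lo = mid
        else:
            hi = mid
    fwhm = 2.0 * lo
    theta_min = fwhm / n_fwhm                       # Eq. (6)
    k = int(0.5 * math.log2(4 * math.pi /
                            (12 * theta_min ** 2)))  # Eq. (5)
    return min(max(k, 0), k_max)

for g in (0.1, 0.3, 0.6):
    print(g, k_hp(g))
```

For $g_\lambda = 0.1$ this yields $k_{\lambda,\text{HP}} = 0$ (12 pixels, nearly isotropic scattering), while $g_\lambda = 0.6$ yields $k_{\lambda,\text{HP}} = 2$.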
A maximum allowed value of $k_{\lambda,\text{HP}} = 2{-}3$ has to be set to avoid very high memory requirements for the very narrow $\Phi_\lambda$ profiles at short UV wavelengths. Examples of this sampling can be seen in Fig. 2 for values of $g_\lambda$ in the range 0.1-0.6 (approximately the range typical of the NIR to UV wavelengths). The figure shows the Henyey-Greenstein functions plotted over the entire sphere using the Mollweide projection, together with the contours of the HEALPix pixels for the derived values of $k_{\lambda,\text{HP}}$. Our implementation guarantees that at least several points sample the peak of the Henyey-Greenstein profile, which convolves any scattered light contribution added to $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$. The variable angular resolution for the scattering source function allows a considerable reduction in memory. For example, in the TRUST benchmark slab model (see G17) the "BASIC" lambda grid contains 31 wavelengths from the UV until 60 $\mu$m, the range we used for the stellar emission RT. By assuming $k_{\lambda,\text{HP}} = 2$ at all wavelengths, and given about 700,000 3D grid points, the memory requirement for $j_{\lambda,\text{sca}}(\mathbf{r},\theta,\phi)$ is about 33 Gbytes. By using the variable angular resolution described above, this is reduced to about 12 Gbytes.

---
\(^4\) This problem affects all methods which determine $I_\lambda(\mathbf{r},\theta,\phi)$ directly, including finite-differencing, discrete ordinates and other ray-tracing methods.
\(^5\) Formula (5) can be found by inverting the following equivalence between the approximate pixel solid angle $\theta_{\lambda,\text{min}}^2$, required for the minimum angular resolution, and the exact pixel solid angle, equal to the total solid angle divided by the number of spherical pixels:
\[ \theta_{\lambda,\text{min}}^2 \approx \frac{4\pi}{N_{\text{pix}}}. \]

3.2. Multi-wavelength RT
Implementing multi-wavelength calculations in a purely ray-tracing code is harder than for MC codes because of the high memory requirements for both the specific intensity $I_\lambda(\mathbf{r},\theta,\phi)$ and the scattering source function $j_{\lambda,\text{sca}}$. However, this task becomes feasible once the memory requirements are reduced: in DART-Ray $I_\lambda(\mathbf{r},\theta,\phi)$ does not have to be stored in memory, and a variable angular resolution is used for $j_{\lambda,\text{sca}}$. Because of this latter optimization, DART-Ray V2 can perform multi-wavelength RT calculations without exceeding the RAM of modern computer-cluster nodes, typically of the order of 100 Gbytes. This addition allows the inclusion of dust self-heating (see Sect. 3.3) which, being non-local in wavelength, cannot be easily handled using a succession of monochromatic RT calculations. Furthermore, many ray-tracing steps are exactly the same for all wavelengths, and they are not repeated in multi-wavelength runs. Since rays carry multi-wavelength intensities, one has to check that the ray blocking criterion (Eq. (1)) is fulfilled at all wavelengths. During the ray propagation, this criterion may be satisfied only at some wavelengths. In this case, the code still propagates the ray in the current direction, but it does not add the contributions to the RFED and $j_{\lambda,\text{sca}}$ at the wavelengths for which the intensity has become negligible. Since updating $j_{\lambda,\text{sca}}$ is computationally demanding, this helps to reduce the calculation time further.
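A sketch of this per-wavelength bookkeeping; plain Python lists stand in for the real wavelength and spatial grids, and the threshold value is illustrative:

```python
F_U = 1e-3  # illustrative threshold f_U

class MultiLambdaRay:
    """Toy multi-wavelength ray with per-wavelength blocking (Sect. 3.2)."""
    def __init__(self, intensities):
        self.i = list(intensities)           # specific intensity per lambda
        self.active = [True] * len(self.i)   # still significant per lambda

    def deposit(self, cell_u, cell_u_ll):
        # Add contributions only at wavelengths still above the criterion.
        for w, di in enumerate(self.i):
            if not self.active[w]:
                continue
            if di < F_U * cell_u_ll[w]:
                self.active[w] = False       # skip this lambda from now on
                continue
            cell_u[w] += di

    def blocked(self):
        # The ray is dropped only when negligible at ALL wavelengths.
        return not any(self.active)
```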
3.3. Dust self-heating

The previous version of DART-Ray assumed that the dust emission is always optically thin. Thus, it considered neither the absorption of dust emission at other locations (called dust self-heating) nor the scattering of dust emission. This assumption is not correct for models which are optically thick in the infrared range. This was the main source of disagreement at infrared wavelengths between DART-Ray and the other codes in the TRUST slab benchmark project for the most optically thick models (see G17). In this section, we explain the implementation of dust self-heating in DART-Ray V2. The comparison of results for optically thick models is shown in Sect. 4. The dust emission spectra at each position are determined by the RFED (or, alternatively, the average radiation field intensity), the dust density and the dust absorption coefficient (see e.g. Popescu et al. 2011; Steinacker et al. 2013). In DART-Ray this can be calculated assuming either equilibrium between the dust and the radiation field or by deriving the full stochastically heated dust emission spectra (see Natale et al. 2015; Camps et al. 2015, and the code user guide). The dust emission RT calculation is performed after the stellar emission RT is completed. The main difference between the two calculations is that for the former the emission source spectra depend on the RFED. Therefore, while the stellar emission RT run needs to be performed only once, following the three-step procedure described in Sect. 2, the dust emission RT requires in principle several iterations of that procedure until the dust emission and the infrared RFED both converge. To handle these dust self-heating iterations, we implemented the following procedure:

1. the dust emission spectra are calculated taking into account only dust heating from absorbed stellar emission;
2. a first RT calculation for the dust emission is performed following the RT algorithm described in Sect. 2;
3. the dust emission spectra are recalculated taking into account the dust heating due to both the absorbed stellar emission and the absorbed dust emission;
4. the difference between the dust emission spectra just calculated, $j_d(r)$, and the ones calculated at the end of the previous dust self-heating iteration, $\tilde{j}_d^{\text{prev}}(r)$, is evaluated:
\[
\Delta j_d(r) = j_d(r) - \tilde{j}_d^{\text{prev}}(r);
\]
5. another dust radiative transfer calculation is performed during which only the dust emission luminosity stored in $\Delta j_d(r)$ is processed. The RT algorithm is performed skipping the calculation of the RFED lower limit $U_{\lambda,\text{LL}}$, which is set equal to the RFED calculated in the previous iteration. Also, the RFED $U_\lambda$ and $j_{\lambda,\text{sca}}$ are initialized with the corresponding values found in the previous iteration;
6. steps 3–5 are repeated until $\Delta j_d(r)/\tilde{j}_d^{\text{prev}}(r) < 1\%$ at all positions and wavelengths.

As one can see, the dust emission RT iterations are performed without processing the same dust emission luminosity more than once. For moderately optically thick models, $\Delta j_d(r)$ tends to be very small compared to $j_d(r)$ already after the first self-heating iteration, so the iterations that follow proceed much faster than the first. We test the validity of this approach in Sect. 4.
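The iteration scheme in steps 1–6 can be condensed into the following Python sketch; `rt_run` and `dust_emission` are hypothetical stand-ins for the dust-emission RT calculation and the emission-spectra evaluation, and the scalar toy driver only demonstrates that the loop converges.

```python
import numpy as np

def self_heating_loop(rt_run, dust_emission, j_stellar_abs, tol=0.01):
    """Sketch of the dust self-heating iterations (steps 1-6); `rt_run` and
    `dust_emission` are hypothetical callables, not DART-Ray routines."""
    j_prev = dust_emission(j_stellar_abs)        # step 1: stellar heating only
    u_dust = rt_run(j_prev, u_init=None)         # step 2: first dust-emission RT run
    while True:
        j_new = dust_emission(j_stellar_abs + u_dust)  # step 3: stellar + dust heating
        delta_j = j_new - j_prev                       # step 4: emission increment
        if np.all(np.abs(delta_j) / j_prev < tol):     # step 6: < 1% everywhere
            return j_new, u_dust
        # step 5: only the increment is processed; the RFED is initialized
        # from the previous iteration instead of being recomputed from scratch
        u_dust = rt_run(delta_j, u_init=u_dust)
        j_prev = j_new

# toy driver: scalar "spectra", with 30% of the dust emission re-absorbed
j_final, u_final = self_heating_loop(
    rt_run=lambda j, u_init: 0.3 * np.asarray(j) + (0.0 if u_init is None else u_init),
    dust_emission=lambda absorbed: np.asarray(absorbed, dtype=float),
    j_stellar_abs=np.array([1.0]),
)
print(j_final)  # approaches the self-consistent value within a few iterations
```

Processing only the increment $\Delta j_d(r)$ is what makes the later iterations cheap: each unit of dust luminosity is followed through the model exactly once.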
3.4. Parallelization

3D dust radiation transfer is computationally very expensive independently of the algorithm used. Therefore, most of the more advanced dust radiative transfer codes use parallelization to reduce the time needed for the calculations. Task parallelization is straightforward to implement because 3D dust radiation transfer is largely an additive problem. For a given RT model, all the quantities to derive, with the exception of the dust emission source function, are equal to the sum of the contributions provided by the radiation from the single sources. Therefore, task parallelization is done by distributing the processing of the radiation sources (or photon packages for MC codes) between different CPUs. On shared memory machines, such as the single nodes of a typical computer cluster, one can easily parallelize the loops over the radiation sources using OpenMP. Unlike MPI, OpenMP allows multiple CPUs to operate on shared arrays. In this way, there is no need for replicating any array or distributing arrays among different processes. Then, to take advantage of multiple nodes simultaneously, a hybrid OpenMP+MPI parallelization scheme is a natural choice, since one can use OpenMP for parallelization within a single node and MPI to handle communication between nodes. With multiple nodes it is possible to increase substantially the number of CPUs, and so in principle reduce the total computational time. However, in practice, overheads are introduced by the time needed for nodes to communicate and to process the exchanged information. These overheads can become significant when data parallelization among nodes is implemented. In DART-Ray, the vast majority of the memory consumption is due to the scattered luminosity source function. For this reason, we did not implement any data parallelization for the other arrays (e.g. the 3D spatial grid coordinates), which are all replicated in all nodes. Instead, we have implemented two data parallelization choices for the scattered luminosity source function: a “communication” mode and a “no-communication” mode. In the communication mode, the scattered luminosity source function is distributed among the node memories such that each node contains the scattered source function for different sets of wavelengths. The communication between nodes is needed in two cases: firstly, to add the $\delta j_{\lambda,\text{sca}}$ contribution at all wavelengths after each ray-cell intersection; secondly, during the scattering iterations, the values at all wavelengths of $j_{\lambda,\text{sca}}$ for each source are needed by the same node that has to process it. In the first case, in order to minimize data exchange, instead of passing the $\delta j_{\lambda,\text{sca}}$ contribution to the scattering source function in each node, only the total scattered luminosities (integrated over all angular directions) and the ray directions are passed to the corresponding node. Then, this information is processed to calculate the angular distribution of scattered luminosity to be added to $j_{\lambda,\text{sca}}$ for a certain dusty cell. Furthermore, to minimize communication, large packets of data, containing the scattered luminosity contributions due to many ray-cell intersections, are collected within each node and then exchanged between the nodes (see the code documentation for more details). Despite all the efforts we put into minimizing data exchange and reducing the processing time of the received data, the overheads in the communication mode can still be substantial.
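To make the parallelization pattern concrete, here is a minimal mpi4py sketch of a replicate-and-reduce source loop, the pattern underlying the simpler no-communication mode described next; the per-source "work" is a toy stand-in for the ray tracing, and this is not the code's actual implementation.

```python
# Minimal mpi4py sketch: every rank keeps full (replicated) arrays, processes a
# disjoint subset of the radiation sources, and the partial results are summed
# once at the end. The work per source is a toy stand-in for ray tracing.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, n_ranks = comm.Get_rank(), comm.Get_size()

n_sources, n_cells = 10_000, 500
rfed_local = np.zeros(n_cells)                 # replicated on every rank

for i in range(rank, n_sources, n_ranks):      # each rank takes every n_ranks-th source
    rfed_local[i % n_cells] += 1.0 / (1 + i)   # toy contribution of source i

rfed = np.empty_like(rfed_local)
comm.Allreduce(rfed_local, rfed, op=MPI.SUM)   # single communication step at the end
if rank == 0:
    print(rfed.sum())
```

Within each node the same source loop can additionally be threaded, mirroring the hybrid OpenMP+MPI scheme described above.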
Therefore, we also implemented a simpler no-communication mode, where all arrays are replicated in each node, including the scattering source function. In this mode, data communication is performed only at the end of the radiation source loops, to sum up the arrays calculated separately by all nodes. An example of the speed-up performance of the communication and no-communication parallelization modes is shown in Fig. 3. In this figure, we show the wall-clock speed-up of the calculation, compared to the serial execution, for the $N$-body and SPH galaxy model example contained in the current DART-Ray release (see the DART-Ray User Guide). As expected, the no-communication mode scales much better with the number of CPUs than the communication mode. However, it is also much more expensive in terms of memory. Nonetheless, with the typical RAM memory of computer cluster nodes of the order of hundreds of Gbytes, and thanks to our implementation of the wavelength-dependent angular resolution for the scattering source function, it is possible to use this parallelization mode in the majority of cases. We recommend that users of the code use this mode, unless the memory requirements are so high that data distribution is unavoidable.

### 3.5. Control of the inaccuracy due to ray blocking

There are several factors affecting the numerical accuracy of the RT calculations performed by DART-Ray. Firstly, DART-Ray uses a spatial grid to discretize the distribution of the diffuse stellar emission and dust mass. The RFED distribution as well as the source functions are also evaluated on this same spatial grid. However, it is not possible to know in advance how high the resolution of the spatial grid has to be in order to attain a pre-defined level of accuracy by the end of the calculation. While creating the grid, one typically utilizes higher resolution grid elements in regions of higher stellar volume emissivity and dust density. Although it is reasonable to expect a more rapid variation of the radiation field in those regions, there is no guarantee that the spatial resolution is adequate everywhere on the grid. In the absence of iterative procedures to increase the spatial resolution during the RT calculation, the effect of the grid discretization on the numerical accuracy can only be checked by repeating the calculations at progressively higher spatial resolutions. Similarly, the finite number of angular directions of the rays that are cast from each radiation source, as well as the discretization of the scattering source function, also affect the calculation accuracy. Apart from these factors, common to all 3D dust RT codes although in different forms\footnote{Even in MC codes, although photon particles can propagate in any possible direction and be scattered at any location within an RT model, the number of particle directions that can be followed is still finite. This inevitably produces a discretization error which can only be reduced by increasing substantially the number of particles.}, in DART-Ray the numerical accuracy is also affected by the estimate of the extent of the source influence volumes. This is because DART-Ray calculates the RFED contributions from each source only within this volume surrounding the source itself, thus neglecting the contributions outside it. Since this is a core characteristic of DART-Ray, we are interested in quantifying the accuracy error due to the cutting off of the rays when they reach the boundary of the estimated source influence volume.
We note that this accuracy error is systematic, since it will always produce RFEDs which are underestimated compared to the correct value. The ray cut-off occurs when the RFED contribution $\delta U_\lambda$ carried by a ray satisfies criterion (1). So, once a lower limit to the RFED distribution $U_{\lambda,\text{LL}}$ has been estimated, the input-defined parameter $f_U$ is the key factor affecting the numerical accuracy of the calculation. In NA14, we stated that $f_U$ should be low enough to preserve energy balance, in the sense that, at the end of the calculation, the total radiation luminosity that has been neglected because of the ray cut-off should be only a small fraction of the total luminosity of the model. In this case, the effect of cutting the rays is minimal, because almost all the radiation luminosity has been followed in its propagation throughout the model. However, since this energy balance can only be checked at the end of the RT run, potentially several attempts have to be made to find the appropriate value for $f_U$. Instead, in the following, we show how the value of the parameter $f_U$ can be set before an RT run so as to guarantee the desired level of accuracy. We have been able to find a relation between $f_U$ and the accuracy of the derived RFED distribution by making a minor change in the definition of $\delta U_\lambda(\mathbf{r})$ in formula (1), compared to NA14. This is now defined as:
$$\delta U_\lambda(\mathbf{r}) = \frac{\langle I_\lambda \rangle A_{\text{EM}} \Omega_{\text{INT}} L_{\text{INT}}}{V_{\text{INT}} c},$$
where $\langle I_\lambda \rangle$ is the average specific intensity of the ray along the ray-cell intersection path, $A_{\text{EM}}$ is the area of the emitting cell originating the ray, \( \Omega_{\text{INT}} \) is the solid angle subtended by the intersected cell, \( L_{\text{INT}} \) its linear size, \( V_{\text{INT}} \) its volume and \( c \) the speed of light. This formula differs only slightly from the corresponding formula in NA14: the factor \( L_{\text{INT}} \) replaces the ray-cell intersection path length, and the factor \( \Omega_{\text{INT}} \) replaces the ray beam solid angle \( \Omega_{\text{HP,EM}} \). So, in the previous version, \( \delta U_\lambda(\mathbf{r}) \) in relation (1) was the RFED contribution of the single ray to the intersected cell RFED. Instead, \( \delta U_\lambda(\mathbf{r}) \) now represents approximately the total RFED contribution of the radiation source originating the ray to the intersected cell. In this way, every time the criterion for \( \delta U_\lambda(\mathbf{r}) \) is checked, the relevance of the radiation source in determining the local RFED is considered, not just that of the single ray (which can simply have a small intersection path or a small associated \( \Omega_{\text{HP,EM}} \)). With this change, an appropriate value for \( f_U \) can be derived as follows. When condition (1) is met during the ray propagation, one would like to ensure that the small contribution \( \delta U_\lambda(\mathbf{r}) \) does not sum up with comparable contributions from many other radiation sources which cumulatively provide a non-negligible contribution to the intersected cell RFED.
In the highly improbable case that all other radiation sources in the RT model provide a RFED contribution as low as \( \delta U_\lambda(\mathbf{r}) \), the cumulative contribution \( \sum_i \delta U_{\lambda,i}(\mathbf{r}) \) would be such that:
\[
\sum_{i=1}^{N_s} \delta U_{\lambda,i}(\mathbf{r}) \leq N_s f_U U_{\lambda,\text{LL}}(\mathbf{r}),
\]
(9)
where \( N_s \) is the total number of radiation sources in the model. By requiring that the RHS of the above inequality is only a small fraction \( a_{\text{RT}} \) of the final value \( U_\lambda(\mathbf{r}) \) of the RFED, we have:
\[
N_s f_U U_{\lambda,\text{LL}}(\mathbf{r}) \leq a_{\text{RT}} U_\lambda(\mathbf{r}).
\]
(10)
In the above relation, the factor \( a_{\text{RT}} \) represents the desired accuracy of the RT calculation at each position. By assuming conservatively that \( U_{\lambda,\text{LL}}(\mathbf{r}) \) is a substantial fraction of \( U_\lambda(\mathbf{r}) \), that is \( U_{\lambda,\text{LL}}(\mathbf{r}) \sim 0.25 U_\lambda(\mathbf{r}) \), we then have a relation between \( f_U \) and \( a_{\text{RT}} \):
\[
f_U \leq \frac{4a_{\text{RT}}}{N_s}.
\]
(11)
By taking advantage of the above relation, DART-Ray sets the \( f_U \) parameter to \( f_U = \frac{4a_{\text{RT}}}{N_s} \) for a given input-defined accuracy parameter \( a_{\text{RT}} \). For example, for \( a_{\text{RT}} = 0.05 \) and a model with \( N_s = 10^6 \) radiation sources, this gives \( f_U = 2 \times 10^{-7} \). We point out that this input parameter can be used to control only the inaccuracy due to the blocking of the rays, not the other factors mentioned at the beginning of this section.

### 3.6. Other updates

We list here other relevant updates to the code.

#### 3.6.1. Point sources

It is now possible to include a set of point sources at arbitrary positions within the 3D grid. This is useful for including unresolved objects, such as stars within a molecular cloud, or an AGN within a galaxy model.

#### 3.6.2. Use of HDF5 files

The 3D grid, the output arrays, such as the RFED and the scattering source function, and the surface brightness maps can now be written to files in the Hierarchical Data Format\(^7\) (HDF5), although the output can be defined and restricted by the user. The HDF5 format offers faster I/O and smaller file sizes compared to standard ASCII output.

#### 3.6.3. Internal observer maps

Surface brightness maps, as seen by an observer within the RT model, can now be produced with DART-Ray. This is useful for creating images and animations for public presentations, and it allows the user to reproduce observations of the Milky Way. The output is in HEALPix format, which is a format used in all-sky surveys, including the recent Planck data in the infrared. This feature has been used in Popescu et al. (2017) and Natale et al. (in prep.) to construct a radiation transfer model of our own Galaxy.

#### 3.6.4. 2D mode

DART-Ray contains a 2D mode which can be used for axisymmetric models. The calculations are still performed on a 3D Cartesian grid but, in this mode, DART-Ray performs the ray-tracing calculations only for the cells located in the first grid octant. Then, taking advantage of the problem symmetries, it derives the RFED and scattering source function contributions from cells in other octants. This mode is about a factor of eight faster than the standard 3D mode.
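The saving of the 2D mode can be pictured with a small sketch: if the model is mirror-symmetric about each coordinate plane (axisymmetry plus, as assumed here for simplicity, symmetry about the midplane), a scalar quantity such as the RFED needs to be computed only in the first octant. The centre-relative indexing convention below is purely illustrative, not the code's actual grid layout.

```python
import numpy as np

def first_octant_lookup(field, x, y, z):
    """Return the value for cell (x, y, z) by mirroring it into the first
    octant, where the ray tracing was actually performed. Indices are taken
    relative to the grid centre (an illustrative convention)."""
    return field[abs(x), abs(y), abs(z)]

# toy grid: values computed for the first octant only
n = 4
octant = np.arange(n**3, dtype=float).reshape(n, n, n)

# a cell in any of the other seven octants reuses the first-octant value
assert first_octant_lookup(octant, -2, 3, -1) == octant[2, 3, 1]
```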
### 4. Comparison with TRUST benchmark solutions at high optical depth

DART-Ray has been the only purely ray-tracing code that provided solutions for the first benchmark paper of the TRUST radiation transfer benchmark project (see G17). In that study, several codes have been used to compare the results for a geometry constituted by a dusty slab of uniform density illuminated by a star placed above it. Each code had to provide both total SEDs and images for a set of observer lines-of-sight and a number of wavelengths. Four different models have been considered, which differ only in the vertical optical depth of the slab at 1 \( \mu \)m. In that paper, it is shown that the DART-Ray solutions are in good agreement with all model solutions from the other codes, except for one model, which has the largest optical depth (\( \tau(1 \mu \text{m}) = 10 \); see the TRUST benchmark website for all the comparison plots\(^8\)). In addition, it was shown that the discrepancy in the dust emission for the most optically thick case was due to the absence of dust self-heating in the old version of DART-Ray. In particular, the lack of scattered dust emission in the images produced large discrepancies between DART-Ray, as well as TRADING, and most of the other codes (see Fig. 9 in G17). DART-Ray V2 includes dust self-heating as well as dust emission scattering. In order to test that the implementation of these effects is correct, we re-calculated all the TRUST benchmark solutions with the current code. These solutions have now been added to the TRUST website, where one can check that they differ at most by \( \sim 10\% \) from the other code solutions in all cases. Here we show only the comparison of the images at \( \lambda = 35.11 \mu \text{m} \) and \( \tau(1 \mu \text{m}) = 10 \) for the edge-on view, which was taken as an example in G17 of the importance of including dust self-heating. This is shown in Fig. 4, where we included all the other code solutions as well as the old and new DART-Ray solutions. As one can see from the average vertical and horizontal profiles of the surface brightness, DART-Ray V2 produces a MIR image which is much closer to the images of the codes including dust emission scattering. The residual discrepancy is mainly due to the lower spatial resolution of the DART-Ray grid compared to that of the other codes (see G17), together with a contribution of up to a few percent due to the ray blocking criterion. We found the same result for all the other cases not shown here. Apart from images and SEDs, the other main quantity calculated by RT codes is the RFED. Unfortunately, this quantity is more difficult to compare because different codes use different types of grids and resolutions.

---

\(^7\) [https://www.hdfgroup.org](https://www.hdfgroup.org)

\(^8\) [http://ipag.osug.fr/RT13/RTTRUST/BM1.php](http://ipag.osug.fr/RT13/RTTRUST/BM1.php)

Fig. 4. Comparison of the $\lambda = 35.11 \, \mu m$ edge-on images of the TRUST slab benchmark for the vertical optical depth $\tau(1 \, \mu m) = 10$. The solutions provided by the old and new DART-Ray versions are included as well as those of the other codes participating in the project. Units on the images are MJy/sr. The plots on the left show the average surface brightness profiles and the relative differences between the solutions along a vertical and a horizontal strip, whose boundaries are shown within the top two images (CRT and new DART-Ray code). The X-axes of these plots are in units of pixels. The inclusion of dust self-heating in the new version of DART-Ray allows a much closer agreement with the other codes. We note that these solutions are for the “effective grain” case (see G17).
For this reason, no comparison of the RFED has been made in G17, and the agreement for the dust emission has been taken as evidence that the RFED has been calculated correctly. We note that the RFED solutions provided by DART-Ray for axisymmetric galaxy models have been compared and found in good agreement with those presented by Popescu & Tuffs (2013; see NA14).

5. The source influence volume in galaxy models

The efficiency of the DART-Ray algorithm is based on its ray blocking criterion, expressed by Eq. (1). In the best case scenario, this criterion is satisfied after the rays have crossed only a small part of the model. This would allow the RT calculations to proceed rather quickly. Instead, if the rays have to cross a large fraction of the model before being blocked, the DART-Ray algorithm becomes inefficient. It is therefore interesting to measure the lengths after which the rays are blocked compared to the model size. In this section, we show an analysis of the distribution of these crossed lengths for the Milky Way (MW) galaxy model presented in Popescu et al. (2017). The analytical formulae describing the distribution of stars and dust opacity at each wavelength for this model can be found in that paper.

Fig. 5. Distributions of the relative number of cells as a function of the relative discrepancy for the RFED for the MW model, derived by assuming $a_{\text{RT}} = 0.005$ and $a_{\text{RT}} = 0.05$. The plots show the results for the UV, optical and NIR wavelengths used in the RT calculations. We note that the relative discrepancy is always lower than 5% as expected.

Table 1. Median values of the average ray crossing length distribution for the Milky Way models with the optical depth scaled by the factors 0.5, 1 and 2.

| | 0.5$\tau_0$ | $\tau_0$ | 2$\tau_0$ |
|----------------|-------------|----------|-----------|
| | UV opt NIR | UV opt NIR | UV opt NIR |
| DIRECT | 0.45 0.47 0.51 | 0.44 0.47 0.50 | 0.42 0.45 0.49 |
| SCA IT 1 | 0.38 0.38 0.24 | 0.39 0.39 0.27 | 0.39 0.40 0.30 |
| SCA IT 2 | 0.30 0.29 0.02 | 0.34 0.33 0.03 | 0.36 0.36 0.07 |
| SCA IT 3 | 0.20 0.17 | 0.28 0.26 | 0.32 0.31 |
| SCA IT 4 | 0.04 0.05 | 0.18 0.18 | 0.27 0.27 |
| SCA IT 5 | | 0.05 | 0.19 0.22 |
| SCA IT 6 | | | 0.10 |
| SCA IT 7 | | | 0.03 |

Notes. For each model the median values are given for the UV, optical and NIR wavelengths and for the direct light processing phase as well as each scattering iteration. The values are in units of the model linear size.

As discussed in Sect. 3.5, the size of the source influence volume (and thus the lengths at which the rays are blocked) depends on the numerical accuracy that has to be reached in the RT calculation. For the tests presented in this section, we set the maximum numerical inaccuracy to 5%. This can be achieved by setting $a_{\text{RT}} = 0.05$, since the code uses Eq. (11) to set the threshold parameter $f_U$. In order to check that Eq. (11) can be used to set the maximum inaccuracy correctly, we also performed a much more accurate calculation with $a_{\text{RT}} = 0.005$ and compared the results for the RFED at a UV (0.150 $\mu$m), an optical (0.443 $\mu$m) and a NIR (2.2 $\mu$m) wavelength. The distribution of the cells as a function of the relative discrepancy of the RFED between these two calculations is shown in Fig. 5. As one can see, the relative discrepancy is never higher than 5% in absolute value, proving that the accuracy prescription used to set the threshold parameter $f_U$ works correctly.
By assuming $a_{\text{RT}} = 0.05$, we derived the distribution of the average path crossed by rays departing from each cell for the Milky Way model at the UV, optical and NIR wavelengths mentioned above. We derived this distribution for the direct light processing phase as well as for each scattering iteration. In order to see the effect of varying optical depth on the distributions, we also calculated them for Milky Way models with the dust opacity distribution artificially scaled by factors of 0.5 and 2. All these distributions are shown in Fig. 6. We note that the ray path lengths are expressed in units of the model linear size. Also, not all the scattering iteration distributions are shown, in order to make the histograms clearer. The median values for all distributions are given in Table 1. From Fig. 6 and Table 1 a few conclusions can be drawn about the sizes of the source influence volumes for each cell and for each calculation phase. Independently of the optical depth, the sizes of the source influence volume are the highest for the direct light processing phase, where they can be of the order of half of the model linear size or more. However, the volume sizes decrease with the order of the scattering iterations. In particular, the decrease is rather steeper at the NIR wavelength than at the UV and optical wavelengths. We note that for the NIR wavelength the sizes of the influence volume for the direct light processing are the highest while, at the same time, they shrink rapidly with the order of the scattering iterations. This is because both the optical depth and the albedo of the models at NIR wavelengths are much smaller than at UV and optical wavelengths. In fact, because of the lower optical depth in the NIR, the ray specific intensity decreases less rapidly during the ray propagation, and the collective contributions to the RFED by many cells at large distances are more important. This makes the source influence volumes larger for the direct light. At the same time, the scattered light has much lower intensity compared to the direct light and does not contribute much to the RFED far away from the dusty cells that originate it. Therefore, the influence volume sizes for the scattered light become small very quickly. The effect of increasing the optical depth for the same geometry seems to be different for the direct light and the scattered light iterations. For the direct light, the sizes of the source influence volumes do not change much. Instead, for the same scattered light iteration, the influence volumes seem to be larger with increased opacity. There is no simple explanation for this effect. On the one hand, increasing the opacity increases the efficiency of light scattering. On the other, it also reduces the ray intensity more rapidly, and thus the contribution to the RFED and to the scattered light intensity at large distances. The overall effect seems to be an enlargement of the source influence volumes as well as an increase in the number of scattering iterations required to complete the RT calculation.

6. Pros and cons of the DART-Ray code

The DART-Ray code is one of the few 3D dust RT codes which do not use the MC method (see Steinacker et al. 2013) and the only one using an algorithm based on estimating the source influence volume extents.
The originality and the relative novelty of this code are accompanied by several advantages and disadvantages:

Advantages

– no MC noise;
– RT calculation very efficient for higher orders of scattered light;
– it calculates the radiation field energy density accurately everywhere, even when its knowledge is not required to produce images;
– an alternative method that can be used to further validate scientific results obtained with MC codes;
– it allows images to be calculated at arbitrary observer positions without repeating the entire RT calculation;
– flexibility to change input geometry, dust model, stellar emission library;
– easy to import $N$-body and SPH simulations in tipsy format.

Disadvantages

– high memory requirements;
– lack of subgrid resolution (exploited by MC codes);
– typically longer calculation times compared to those of MC codes;
– direct light calculation rather inefficient when the source influence volumes are close to the size of the entire RT model;
– only a Cartesian adaptive grid implemented.

7. Possible further improvements

DART-Ray V2 is a major improvement compared to the code presented in NA14. Apart from the new capabilities, the code now has a solid structure and documentation that make further development possible. The main barrier to overcome in DART-Ray is the reduction of the calculation time for models in which the sources have influence volume sizes of the order of the entire model size. For example, this typically happens for galaxy models in the infrared range, where the galaxy is more transparent and sources cumulatively contribute to the RFED at large distances. However, in the same models, the sources of scattered light tend to have rather small influence volumes and, therefore, the processing of scattered light proceeds much faster. A more efficient algorithm could be built which processes the direct light in a more efficient way than the adopted source-to-cell approach, while leaving the algorithm as it is for the scattered light processing.

Acknowledgements. G.N. and C.C.P. would like to acknowledge support from the Leverhulme Trust research project grant RPG-2013-418. V.P.D. is supported by STFC Consolidated grant #ST/M000877/1. G.N. thanks Dimitris Stamatellos for useful comments that helped improve the paper and Karl Gordon for uploading the solutions presented in this paper to the TRUST project website. We thank the anonymous referee for insightful comments on the code algorithm.

References

Abel, T., & Wandelt, B. D. 2002, MNRAS, 330, L53
Camps, P., Misselt, K., Bianchi, S., et al. 2015, A&A, 580, A87
Draine, B. T. 2003, ApJ, 598, 1017
Gordon, K. D., Baes, M., Bianchi, S., et al. 2017, A&A, 603, A114
Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, ApJ, 622, 759
Henyey, L. G., & Greenstein, J. L. 1941, ApJ, 93, 70
Kylafis, N. D., & Bahcall, J. N. 1987, ApJ, 317, 637
Natale, G., Popescu, C. C., Tuffs, R. J., & Semionov, D. 2014, MNRAS, 438, 3137
Natale, G., Popescu, C. C., Tuffs, R. J., et al. 2015, MNRAS, 449, 243
Pascucci, I., Wolf, S., Steinacker, J., et al. 2004, A&A, 417, 793
Pinte, C., Harries, T. J., Min, M., et al. 2009, A&A, 498, 967
Popescu, C. C., & Tuffs, R. J. 2013, MNRAS, 436, 1302
Popescu, C. C., Misiriotis, A., Kylafis, N. D., Tuffs, R. J., & Fischera, J. 2000, A&A, 362, 138
Popescu, C. C., Tuffs, R. J., Dopita, M. A., et al. 2011, A&A, 527, A109
Popescu, C. C., Yang, R., Tuffs, R. J., et al. 2017, MNRAS, 470, 2539
Steinacker, J., Baes, M., & Gordon, K. D. 2013, ARA&A, 51, 63
Yusef-Zadeh, F., Morris, M., & White, R. L. 1984, ApJ, 278, 186
Forest policy development in an international perspective

In its first part this paper reviews the emerging international setting of forest policy development. Its second part analyses national and local policy issues as relevant in an international perspective, and the need for a global policy framework to protect and develop forests as a multifunctional and sustainable resource.

Dimensions of international policy developments

Policy development and social change

Forest policy development may be understood as a systematic course of action taken by a government to maintain the social and economic conditions which ensure the protection and sustainable use of forest resources. Development implies that such policies are in a constant process of adaptation, influenced by changes in society. Policy changes are determined by varying opportunities and constraints and by new aspirations and demands of the actors that can influence decisions in the political arena. An international perspective addresses the social and political dimensions of forests and the means of ensuring their conservation in a world in which often divergent national objectives are becoming increasingly interdependent. It refers to the international policy framework which emerges in order to provide solutions to problems of common concern to the international community. And it has to focus on relevant issues and trends which shape forest policy development beyond national perspectives. Fundamental issues of concern to the international community determine the scope of forest policy development (de Montalembert, 1991). Economic development as the basis for fostering the well-being of individuals and peoples remains one of the fundamental aspirations of our societies. But the adverse effects on the environment of unchallenged economic growth and the irrationality of an ad hoc use of resources have become a major concern. The need for sustainability is emerging as the complementary aspect to economic growth. Forests can contribute to sustainable economic development. But in reality, in many parts of the world development leads to the destruction and destabilization of forest ecosystems. Forest resources can be managed for the benefit of present and future generations; but in reality, in many cases they are not used in such a manner. The alleviation of poverty and the satisfaction of basic human needs are of considerable global concern. Forests and trees are important to rural people and can be used in such a way that they contribute to food security and satisfy basic human needs. But in many cases local people are denied customary usage rights as well as the access and benefits that result from forest development. Social justice, political determination and cultural identity are values of great importance to society. The sharing of benefits from forests between local people and the national community, the participation of local users in determining forest development options, and the role of forests as part of the landscape and as a national heritage are part of such values. Aspects of international forest policy may be characterized by two mainstream developments. One is a sectoral approach based on the exchange of information and experience as well as on bilateral and multilateral assistance and cooperation. The focus is to look at forests as an opportunity to use resources, to build up an economically viable forestry sector.
The other is the growing political awareness of forest problems as part of environmental and conservation needs. These approaches are reflected in various recent initiatives on international forestry cooperation. Both lines also converged towards a political perspective during the United Nations Conference on Environment and Development UNCED.

Institutional policy actors

The Food and Agriculture Organization FAO is the specialized UN agency mandated to promote international coordination and cooperation in the field of forestry. As its name indicates, the principal task of FAO is to foster international cooperation in agricultural development and food security. Over the years its forestry department, although modest in resources, has made considerable efforts to create an international network for forestry and to engage in development projects. It has expanded its activities by incorporating rural forestry and agroforestry, nature and wildlife protection and the monitoring of quantitative aspects of the world's forests. The international forestry network of FAO is linked to forestry in a sectoral perspective and its major correspondents are national forest services and the ministries on which they depend. Cooperation in tropical forestry (FAO, 1990; Muthoo, 1991) has gained considerable importance during the last 20 years. Steinlin and Pretzsch (1984) describe the changes from timber production and industrial development targets to more integrative forest policies that consider rural development and maintain biodiversity. Murray (in WFC, 1991: 213-225) presents a review of major developments since the early 1980s, with an emphasis on institutions engaged in international forestry cooperation. An interesting point of view on the activities of international agencies as seen from a developing country's perspective is presented by Zongo (in WFC, 1991: 305-313). The World Conservation Union IUCN has alerted the scientific community, the development planners and the general public to the dramatic impact of infrastructural projects, land colonization and development, as well as to the expansion of forest exploitation in the remaining areas of tropical rain forests. Its work on conservation strategies for living resources for sustainable development (World Conservation Strategy, 1980) and on ecological guidelines for the management of tropical moist forest lands (Poore and Sayer, 1987) has been a landmark in the debate on protecting the biodiversity of forest ecosystems. IUCN has not restricted its position and contributions to nature conservation and national park development. It has addressed the need to conserve biological diversity in managed forests and to expand the concept of sustainable wood production to sustainable forest ecosystem management (Sayer, 1991; Sayer and Wegge, 1992). The Tropical Forestry Action Plan TFAP, launched in the mid-1980s and revised following an independent review in 1990, has been designed to give a new momentum to international forestry cooperation. The plan provides for sector reviews and action programmes at the level of the participating countries, involving government institutions and representatives of international agencies and donors. Clément, Gane and Roberts (each in WFC, 1991: 323-349) have critically examined the performance of the TFAP and the problems encountered during implementation.
The latter are related to institutional and policy shortcomings within countries as well as to the lack of a more formal structure for international collaboration. As Cabarle (1992) points out, an improvement of the efficiency of the TFAP requires first and foremost a set of criteria to ensure that all interested parties participate during the preparatory stages, plus full disclosure and dissemination of TFAP-related information by the national steering committees, and a participatory strategy with appropriate consultative mechanisms from the outset of national activities. The European Community EC is carrying out development activities of considerable relevance to tropical forests. Guibourg and Robbins (in WFC, 1991: 294-299) describe the environmental focus of such programmes and the relevant policies and procedures. In the field of tropical timber trade the International Tropical Timber Organization ITTO, established in 1985, has become a common forum for producer and consumer countries. International development banks have increased their lending to the sector. The World Bank has revised its forest sector approach and elaborated several policy documents which are relevant in a debate on policy development. An earlier Bank institutional sector paper called for a different balance of Bank activities, with higher priority for environmental and rural development forestry, and for institution-building projects. However, a Bank review in 1991 examining the performance of Bank-funded development and the experience of implementation shows considerable constraints with regard to the institutional and policy framework. The most recent forest policy paper issued by the World Bank (1991) puts considerable emphasis on policy reform and institutional strengthening, participatory rural forestry programmes and the preservation of intact forest areas. This implies more vigorous efforts in sector work, recognition of intersectoral links, and more systematic incorporation of forestry into the formulation and reform of macroeconomic policy. The forest sector strategy for the Asian region (1992) confirms the need to improve the conditions for investment and policy reform and to mobilize political commitment, based on a process of sector analysis, policy dialogue and targeted investments. The creation of the International Council for Research in Agroforestry ICRAF has been an important step towards bringing integrated land use issues into the international research network. The effort to establish a Center for International Forestry Research CIFOR in a country with large areas of tropical forests is a necessary complementary element. Both research organizations operate as part of the Consultative Group on International Agricultural Research CGIAR system. The exchange among researchers is supported by the special programme for developing countries of the International Union of Forestry Research Organizations IUFRO. If we look at the role of international agreements, in particular those of the UN, forests and forest policy development have not been the target of vigorous international action (Cirelli, 1992). There is only one agreement of global scope dealing specifically with forests. It establishes the International Tropical Timber Organization, following a recommendation of the UN Conference on Trade and Development UNCTAD. The situation is different with regard to cooperation in the field of natural resources management and conservation: several conventions and agreements of global or regional dimension have been signed.
Political perspective

Parallel to the institutional evolution of international cooperation, forests have become a political issue of global concern (Maini, 1992). Political in the sense that people and citizens are concerned about the destruction of forest areas, and that they are urging their political representatives to take action. Global in that, probably for the first time in the history of mankind, people perceive the forests of the world as a limited resource which is endangered in many ways; global too, as citizens realize that protection and conservation cannot be ensured within their own national context alone but need international support and cooperation. Some of the reasons which make forests a global political issue are very real and specific. There is the personal experience that nearby forests are disappearing; that the daily distance walked to collect firewood and forest produce is becoming longer; that flooding and landslides occur more frequently when forests have been cut; and that forest ecosystems are suffering from emissions. Some reasons are of a more general nature and are linked to a much broader concern and debate on the environment. The burning of forests, which contributes to possible climatic changes, is part of the public debate, as are reforestation and afforestation as means to provide CO2 sinks. The increasing pressure on the use of renewable natural resources, the intensification of agricultural production systems and, at least in certain regions, intensive forestry land uses make people reflect on man's relation to nature and on the limits to human interventions. Nature conservation and the protection of landscapes are a necessity in many parts of the world. This is part of the need of our societies to find a balance between technology and economic efficiency and a meaningful interpretation of man's existence as part of nature. It is in this context that forests are seen as having a particular significance. They represent a cultural and spiritual symbol for the protection of nature as a whole, for maintaining mankind's own integrity and for preserving people's cultural heritage. Public awareness is diffuse and often controversial. It is influenced by people's particular economic and social conditions, by their specific needs and values, as much as by their vision of the opportunities to maintain and use forest resources. But there is little doubt that today forests are perceived very differently than even a decade ago. It is the global context of environmental protection and nature conservation that has an impact on national as well as international development of forest policy. Non-governmental organizations are among the principal policy actors that have contributed to a more global perception of the role of forests. Environmental associations and nature conservation groups operating nationally and/or internationally raise forestry issues to a level beyond their sectoral relevance. They alert the general public to the accelerating destruction of tropical forests, the problems of forest decline, and the role of forests in possible climate change. They scrutinize standards of forest use and their effects on maintaining biodiversity, and question established concepts of forest management. Environmental associations and nature conservation groups present material to the mass media in order to generate public awareness and systematically use political decision-making processes and the courts.
They react to the growing concern of people while simultaneously being instrumental in politicizing conservation. Sayer (in WFC, 1991: 315-322) and Korten (1992) present the wide range of such organizations, extending from associations whose activities are determined by their members through a democratic process, to private and public think-tanks and special interest lobbies. They confirm their impact, from defending the cause of special or local interests to being promoters of the delivery of development assistance and influential participants in the formulation of forest policy at the national and international level. The recent study by Dudley (1992) on the status of temperate forests is an example of the actual contribution of such organizations to the international debate on policy. The UN Conference on Environment and Development UNCED in June 1992 put global forestry issues on the political agenda of the world community. This is in itself a significant step in forest policy development. Even in the preparatory stages it was clear that deforestation would be an important theme of the Conference and that the convention on biodiversity had important links with forest conservation and utilization. Three options to incorporate forests into the envisaged international network were discussed: firstly, to deal with forests within the arrangements on environmental protection and biodiversity; secondly, to prepare a separate convention on forests; and thirdly, to deal with forestry in a less formal manner by leaving the Conference to decide on the legal status of the International Instrument on Forests. The outcome of the UNCED Conference was a statement of principles and a supportive chapter on combating deforestation as part of Agenda 21. Both documents summarize major issues on forests and forestry in a worldwide perspective. They allow the enormous variety of interests and values that determine the social and political relevance of forests to be appreciated. It is difficult to assess to what extent the Conference statement of principles on all types of forests can make a positive contribution to forest policy development. Its most constructive aspect probably results from the fact that it has been possible to achieve a broad consensus on the multifunctionality of forests, based on the need to foster protection, conservation and development as common elements of any policy solution. The commitment to the importance of sustainable use and management is positive. The shortcomings of the statement of principles result from the fact that the declaration necessarily remains general and sometimes vague. Its informal and non-mandatory character does not allow a judgment to be made on priorities and on actions to be taken. By emphasizing national sovereignty and national policies, and by not at the same time advocating international coordination and commitment, the course of action open to nations and governments and the role of the international community remain unresolved.

Further evolution: Global and regional concerns

Further evolution remains open. Concern about environmental issues, deforestation and uncontrolled resource depletion will probably continue. This will lead to ongoing efforts to reach a global forest agreement, supplemented by regional protocols with specific commitments from developing and industrialized countries as well as from countries in transition to a market economy. Such an approach is possible by virtue of clause (d) in the preamble of the UNCED statement of principles.
However, the lack of change and of tangible results in forest protection may lead to disappointment and frustration, focusing political attention and public resources on other internationally important issues. For the time being, international cooperation on a global level will have to continue in the prevailing institutional setting and with the established institutions of the UN system. An assessment of the status and trends of forestry institutions and proposals for institutional changes for world forestry activities has been made by Roberts et al. (1991). As far as forestry cooperation between developing and developed countries is concerned, it must be admitted that the institutional linkages are certainly no stronger than before the Rio Conference. One of the reasons for this is that the Tropical Forestry Action Plan, as an important instrument of coordination, has not gained the necessary international support as a global platform for fostering conservation and development. A possible evolution could also be that in the immediate future international involvement in forest policy development will concentrate on regional activities. The second meeting of the Ministerial Conference on the Protection of Forests in Europe, to be held in June 1993 in Helsinki, offers such a regional platform. It succeeds a first meeting in Strasbourg in 1990 (Barthod and Kauppila, in WFC, 1991: 265-271) and the previous SILVA conference held in Paris in 1986. The Helsinki Conference addresses in particular the preparation of guidelines for sustainable management and for preserving the biodiversity of the European forests, forestry cooperation with countries with economies in transition, and strategies in the context of a possible climatic change. The technical seminar on sustainable development of boreal and temperate forests to be held in Montreal in September 1993 promises to be an interesting event. The seminar will be held under the auspices of the Conference on Security and Cooperation in Europe CSCE, which deals primarily with collective security in Europe. The CSCE has recognized that future cooperation will increasingly focus on environmental issues, and it will also include forests on its agenda. The seminar in Montreal provides for a critical examination of concepts of sustainable development and of the necessary forest information base. Given that the CSCE member states represent the majority of northern hemisphere boreal and temperate forests, the potential for cooperation is considerable, particularly with countries in transition to a market economy.

Issues determining policy formation and implementation

National forest policies in an international context

The results of the 1992 UNCED Conference demonstrate that most national governments are currently unwilling to go beyond the present stage of cooperation with respect to stimulating sustainable forest management. The representatives of the more than 150 participating governments were not in a position to compromise on fundamental issues of forest protection, to reach a consensus on more vigorous international cooperation and to agree on binding arrangements on global policy measures. The international community is far from having a clear and well-structured policy programme which allows global forestry policies to be realized. The outcome of the Conference as regards forests is no more than a necessary step in a long process of policy formation. The reasons for the lack of progress are largely general in nature.
They refer to the lack of consensus on global and more equitable solutions to energy use and industrial development, international trade, technology sharing and the transfer of financial resources. More than other Conference themes, the debate on forests has been dominated by the fear of many countries that they will lose their sovereignty to determine the use of the renewable resource according to their own economic and social development targets. The debate has also been influenced by the marked inability of the parties concerned to place the two principal elements of a possible agreement, forest protection and conservation on the one side and forest development and management on the other, in a complementary and not in a contradictory context. Progress in international cooperation will primarily require agreement on priorities with regard to forest use and consistent forestry programmes at the national level. Forests are a multifunctional resource of the rural space, an integral part of the landscape, a source of great biodiversity and of considerable importance in maintaining stable environmental conditions. Forest conservation implies a balance of interests between forest owners, land users and the community which benefits from sustainable and multifunctional resource management. Policy development has to set the boundary conditions in order to achieve this balance for the present generation and to maintain an equal option for later generations. Addressing the complex social and economic factors involved is by no means limited to what is traditionally understood as forest policy. Under the heading "Forests and forestry in national life", Van Maaren (1984: 3) presents national forest policy development as "a continuous process designed to maintain the balance ... between the forest resource as the potential supplier on the one hand and the various components of society as the consumer on the other hand". He shows the relations between forest policy and society, politics, science and technology, and the major task to be accomplished (Fig. 1). This task is to find a balance between long- and short-term objectives in order to meet local rural needs and industrial wood production. He stresses that society has to understand the contribution of forests and trees to the well-being of all of its members, and that forest policy and other sectoral policies are interdependent. It is this concept which allows the role and functioning of policy development and its impact on the sustainable use of resources to be appreciated. However, in many cases forest policies as they have been conceived and understood tend to be rather technical and bureaucratic declarations of intent with little political support and of limited interest to people. Quite often forests are considered as a residual among other expanding land uses. Consequently, forest policies also appear to be residual and often to impede economic and social development. Forest conservation does not find the necessary support as a social and economic priority, and many countries currently do not have a consistent policy framework to protect their forest assets effectively and to ensure sustainable forest uses. There are many reasons for this, depending on the particular situation of a country and the stage of development of its society. The general cause results from fundamental deficiencies in the political system.
People who are interested in forest conservation and benefit from forest uses are unable to bring their views to the political arena and to influence the political decision-making process. People suffering from forest devastation and destruction or from the appropriation of a local resource cannot intervene vigorously in policy development.

**Political commitment and benefits for people**

Political commitment to the protection and sustainable use of forests thus requires institutionalized democratic participation of those people who are principally interested in the forest resource. It implies their involvement as actors and interest groups in determining the priorities of national development and their participation in decision-making processes on forest resource planning and management. The role of forests as a national and a local common resource does not allow forest management decisions to be dealt with mainly by a technical and bureaucratic approach. The need to generate political commitment and to expand democratic participation in forest policy development is a strong argument for transferring institutional powers to regional and local entities. Federal state organizations, decentralization of central powers and increased local autonomy are constitutional and political principles receiving considerable attention in many parts of the world. They offer an opportunity to define a new balance of responsibilities in forest resources management between state governments, regional and local entities and local government. Such a political approach can fill the widening gap between the global public perception of the role of forests and the lack of policy formation and implementation at national and local levels. Forests are maintained if their protection and use generate benefits for people. They are cleared when people see more benefits from a change in land use. They are burnt and destroyed when people have no alternative to ensure their livelihood. Benefits result from a sustainable use of forest resources if ownership and usage rights are firmly acknowledged. This calls for management regulations which do not disregard usage rights and provoke their abolition, but offer firm support in maintaining such rights on a sustainable basis. It implies acknowledged and statutory access of local communities to the resource by land title registration of communal forest land and by introducing new forms of communal forest ownership. It needs a positive approach in encouraging sustainable forest management on private land and land use agreements which allow forestry and agroforestry to be practised on public land. The key role of stable and flexible forest tenure and ownership rights to land for conservation and sustainable management has been stressed in most policy reviews. Bromley and Cernea (1989) have pointed out that resource degradation in developing countries is incorrectly attributed to common property systems. In fact, the dissolution of local-level institutional arrangements leads to common property regimes with a sustainable pattern of use being transformed into open access regimes, in which the rule of privatization does not imply resource protection and development. It is thus important to examine critically the relation between property rights and resource management and to re-establish customary uses and local forest tenure as part of a viable policy framework.
The principle that conservation and sustainable use of the resource must be associated with benefits to rural people requires the transfer of public funds based on an equitable cost sharing between forest users and public entities. Cost sharing and financial compensation are of considerable importance in providing a balance of interest between the immediate goals of forest owners and local user groups and the longer-term objectives of the community as a whole. The latter are principally related to non-market values, which are generated from maintaining forest areas and from an appropriate resource utilization. Policy has to address this situation by providing grants for the improvement of the resource base and its productive potential, compensation and cost-sharing arrangements for forest management activities in the public interest, and compensation for curtailing forest uses incompatible with nature and landscape preservation. Forest conservation and development thus require incentives for sustainable use, and financial compensation for forest owners and local user groups.

In reality, however, the situation is different. Forest policy has taken less account than other sectoral policies of the fact that rural development, including forestry, requires positive signals and incentives in order to stimulate the population's initiative and acceptance. Policy measures in forestry still largely rely on a set of repressive legislative measures. In certain regions forestry programmes lead to an accumulation of benefits in urban centres, to a diminished productive potential of the utilized forests and to disinvestments in rural areas. Forests are generally considered as an economic asset and a source of public revenue with very little understanding that the sustainable use of this resource requires private and public investment, political responsibility and professional competence. The common attitude that forests are a resource to be tapped but not a resource to be paid for is one of the principal obstacles in protecting and managing forests and forest lands. This is in striking contrast to the experience of many European countries. The build-up of a productive and sustainable forest economy has been based on long-term investment efforts of forest owners, rural communities and governments over several generations. It is this experience which Europe can contribute to the international debate on forest policy.

**Forests and land-use policies**

The existence or disappearance of forest areas, as much as the importance of forestry outputs and services, is often determined by policy developments which influence the framework of forest conservation and management much more than forest policy itself. In their overview of forest economics and policy analysis, Hyde and Newman (1991) presented the impact of agricultural land development and the relevance of sectoral policies on sustainable forestry as one of their major conclusions. The interdependence existing between sectoral and cross-sectoral policies and the applicable legislative framework is discussed in de Montalembert and Schmithüsen (1993). Looking at agricultural policy, for instance, it is obvious that changes in forestry land use are profoundly determined by agriculture in two ways. The need for new land and the expansion of farming zones leads to forest clearance and, in the tropics now, to large-scale forest destruction. This statement is not a judgment on the social and political justification of changes in use.
But it is an indication of the fact that forest protection, if socially and politically desirable, can be accomplished only by improving agricultural land use and by changing agricultural policies. Outside the tropics agriculture is setting aside considerable areas suitable for reforestation and sustainable forest development. Such land remains a resource providing employment to farmers and local communities; agricultural and rural development policies are the principal factors for inducing and sustaining such land-use changes. The objective of integrating trees, woodlots and forests into agricultural and rural development also implies more farmer-oriented thinking by foresters (Van Maaren, 1987, 1988).

A similar perspective exists with regard to nature conservation and forest development. The protection of certain forest areas may be socially and politically desirable and justified for reasons of nature conservation, but it may also be an important limitation to the production of forestry outputs. This calls for nature conservation policies that provide financial compensation where such outputs need to be restricted. On the other hand, a policy of preserving nature cannot consider all forests as potential nature conservation areas. It must acknowledge and support the multifunctional role of forests as an economic resource for rural and industrial development. It can provide criteria and guidelines for fostering sustainable use, for silvicultural practices close to nature and for maintaining biodiversity in forest management.

What of the relevance and impact of policies related to infrastructure and settlement, industrial and urban development and global environmental protection? Road construction and settlement projects providing access to forest areas offer development opportunities but also create economic and social conflicts about land use and resource conservation. Urban development may require the clearance of forests, while simultaneously increasing the need to maintain forest areas for recreational uses. None of these issues can be settled by forest policy measures alone.

**Global policy framework for maintaining a multifunctional resource**

The significant point of such conclusions is that the complex social and economic problems of a multifunctional resource like forests can only be addressed in a complex and multifunctional policy framework (Van Maaren, 1991). Each of the relevant policy areas has to accept accountability for imposing new demands on the use of land and forests. The present trend to make environmental impact assessment a prerequisite for new development projects is one step in the right direction. But it is not sufficient. As long as policies related to agricultural and rural development as well as to infrastructure and industrial development do not reflect their impact on forests as part of their own policy formation, there is little chance of making real progress in conservation and sustainable use. As long as such policies do not offer their specific contribution to solving the arising land-use conflicts, policy implementation will remain largely wishful thinking. For forests as a multifunctional resource which engages the responsibilities of all relevant sectoral policies, there also need to be global policies for forest protection, conservation and development. This implies that the improvement of sectoral forest policies is necessary but does not provide viable solutions alone. It implies that intersectoral coordination of policies is essential but not a definite answer.
It implies that forests can be maintained and used for private and public benefit only if society acknowledges such an objective in its own right and if policy formation cuts across sectoral borders. It is this perspective in policy development which is urgently needed, at the level of local and regional entities, at national level, and primarily at the level of the international community.

The previous statements do not diminish the importance of the development of sectoral forest policy. Nor do they play down the necessary contribution of national and international forest institutions and agencies, or the accountability of professional forestry. However, they do allow forest policy to be placed in a more global and integrative context. Forest policy is fundamental in order to provide the framework for using forests as an economic resource, for ensuring sustainable production of wood and forest services, and for fostering forestry and the development of the forest industry. It is crucial in playing a coordinating and monitoring role among the various policies relevant in maintaining the resource. Forestry institutions are the principal agents for promoting forest development and for ensuring the necessary interlinkage to other sectors. Making professional forestry accountable provides a critical mass of expertise to manage the resource according to multifunctional and changing requirements. Forest policy, forest services and foresters can significantly contribute to promoting the protection and rational use of forests. But it is society with its complex political objectives which decides whether forests are maintained and how they are to be used and managed.

**Conclusion**

In spite of a change in public perception, forests are not considered as a global resource or as global commons. They are primarily a national resource for economic development, and as the Rio process has shown, governments are not prepared to accede to international pressure and modify their sovereign policies. Within countries, forests are largely a local resource, inasmuch as their use is part of the development of the rural space and is subject to considerable social sensitivity and political pressure from a wide range of local actors. National and local policy development and implementation are the principal requirements for any cooperation in protecting forest resources. International efforts will have a very limited impact if they fail to address the need to strengthen the institutional framework of national and local forest resources management.

Forests and forestry have become part of the national and international public debate. Their political relevance is increasingly determined by general issues related to economic growth and sustainable use of resources, to the impact of human activities on nature, landscape and the environment, and to social justice among people and nations. On the other hand, forests and forestry remain of immediate interest and represent very specific benefits and values to rural people and local communities. The legitimate demands of the latter to determine the use of forests and forest lands are opposed by global and national economic objectives. The task to be accomplished is to reconcile the demands made of the rural space, in which forests and forestry are a reality, with national and international requirements. The solution must be a political one, based on the fundamental values of each nation.
Democratic participation in making decisions which affect the conservation and use of forests, equity in sharing the benefits but also the costs of sustainable forest development on all levels of the community, and a more integrative effort of governmental institutions are the linchpins of a political solution. Policy development emanates from political commitment to maintaining forests as a renewable resource. It should be based on a realistic approach to implementing social and economic objectives. The approach must be global inasmuch as the conservation and use of forests are determined by all land-users and in particular agriculture, grazing, and infrastructure and urban development. The conservation of forests, woodlots and trees as an essential element of rural and urban areas is part of the responsibilities and tasks of a wide range of sectoral and cross-sectoral policies. Policy development must be specific inasmuch as forests are a multifunctional resource with particular opportunities for social and economic development. Forest policy has to set the framework for sustainable use and management considering all benefits and not timber production alone. And it has to provide the necessary links to other sectors and to foster the supportive role of forestry. The analysis of the global and specific aspects of policy development with regard to forest conservation and sustainable use, and the evaluation of its results and also of the prevailing obstacles and deficiencies, is a new challenge to the science of forest policy.

References*

Bromley, D.W. and Cernea, M.M. (1989). The management of common property natural resources: Some conceptual and operational issues. World Bank Discussion Paper 57: 66 pp., Washington D.C., USA.

Cabarle, B. (1992). Close encounters? NGOs and the TFAP. Unasylva 43/171: pp. 30-33.

Chapman, T. (1992). International agreements related to the protection, management and utilization of forest ecosystems. Forstwissenschaftliche Mitteilungen 1992/11: pp. 25-37. Professur für Forstpolitik und Forstökonomie, ETH Zürich, Switzerland.

Dudley, N. (1992). Forests in trouble: A review of the status of temperate forests worldwide. World Wide Fund for Nature, 244 pp. and annexes, Gland, Switzerland.

FAO (1989). State of international cooperation in tropical forestry. Secretariat Note prepared for the Committee on Forest Development in the Tropics, Rome, July 1989.

Hyde, W.F. and Newman, D.H. (1991). Forest economics and policy analysis: An overview. World Bank Discussion Paper 134: 92 pp., Washington D.C., USA.

Kearon, F.F. (1992). NGOs and the forestry sector: an overview. Unasylva 43/171: pp. 3-10.

Maaren, A. van (1984). Forests and forestry in national life. In: Hummel, F.C. (ed.), Forest policy: a contribution to resource development. Nijhoff/Junk Publishers, The Hague, pp. 1-19.

Maaren, A. van (1987). Die Forstwirtschaft in den Niederlanden auf neuen Wegen. Allgemeine Forst Zeitschrift 42/30: pp. 761-763.

Maaren, A. van (1988). Änderungen in der EG-Landwirtschaftspolitik: eine Aufgabe für die Forstpolitik? Forstarchiv 59/6: pp. 211-215.

Maaren, A. van (1991). Integrierte geistesgeschichtliche Forstliche Lehre - eine neuartige oder erneuernde Ergänzung der Forstpolitikwissenschaft. Forstarchiv 62/6: pp. 219-223.

Maini, J.S. (1992). Sustainable development of forests. Unasylva 43/169: pp. 3-8.

Montalembert, M.R. de (1991). Key forestry policy issues in the early 1990's. Unasylva 42/166: pp. 2-15.

Montalembert, M.R. de and Schmithüsen, F. (1993).
Policy, legal and institutional aspects of sustainable forest management. Arbeitsbericht Nr. 1993/1, Professur für Forstpolitik und Forstökonomie, ETH Zürich, Switzerland.

Muthoo, M.K. (1991). An overview of the FAO Forestry Field Programme. Unasylva 42/166: pp. 30-39.

Poore, D. and Sayer, J. (1987). The management of tropical moist forest: latest ecological guidelines. IUCN, Gland, Switzerland.

Roberts, R.W., Pringle, S.L. and Nagle, G.S. (1991). Leadership on World Forestry. Discussion Paper, 12 pp. Canadian International Development Agency, Ottawa.

Sayer, J. (1991). Conservation and protection of tropical rain forests: the perspective of the World Conservation Union. Unasylva 42/166: pp. 40-45.

Sayer, J. and Wegge, P. (1992). The role of production forests in conserving tropical diversity. IUCN/ITTO, Yokohama, Japan.

Steinlin, H. and Pretzsch, J. (1984). Der tropische Feuchtwald in der internationalen Forstpolitik. Holzgewerbeblatt 138.

WFC (1991). Proceedings of the 10th World Forestry Congress, Vol. 8: pp. 213-225, 265-271, 294-299, 305-349.

*: abridged by the editors; full references can be obtained from the Department of Forestry, Agricultural University, Wageningen, The Netherlands.
Delay and escrow in the blockchain (draft)

A W Roscoe∗

Chieftin Lab, Shenzhen

July 24, 2018

∗Also University College Blockchain Research Centre

Abstract

In this paper we show how to implement exact-time delay encryption in a trust environment like the blockchain, where we can be confident that some sort of majority of participants are trustworthy but not any individual one. In other words we give a protocol for generating \( \text{delay}(x, t) \), a value which gives no significant information until time \( t \), whereupon it can be decrypted to \( x \) by anyone. We highlight some applications of this construct and show how it can be extended to a more general form of escrow.

1 Introduction

Time-lock encryption was first described in the 1990s, generally with the implicit assumption that the delay would be long. \( \text{delay}(x, t) \) is a value which can be decrypted by anyone at time \( t \) or beyond, but by no-one before this time. In [?, ?], the author showed how it could be used to create protocols and a mechanism for stochastic fair exchange by carefully denying participants information that might allow them to cheat, or cheat more effectively, if they had it too early. In these applications there is no hard-and-fast schedule for the delayed information to become available. All the protocols need is that it does not become available before some \( t \) and can be extracted by anyone some reasonable time after \( t \): we will term this *lower-bound delay*. This admits an implementation without a trusted third party (TTP): the party creating it applies a function with a publicly known inverse that takes a significant amount of sequential computation; sufficient so that no-one can compute the inverse before \( t \).

It is not obvious, however, how to implement *exact delay* (where anyone can obtain $x$ immediately at $t$) without a TTP. With a TTP there are various options. In Section [?] we will show how it can be achieved. We then build a model of a blockchain system and show how the construct can both work on that model and provide useful security for some trading applications and smart contracts. We then show how the same ideas can be used to create a blockchain generalised escrow system.

## 2 Implementing exact delay

If we had a TTP Sam then exact delay could be implemented as follows. Sam is programmed to create a new key pair $(pk_r, sk_r)$ for each time in a series $t_0, t_1, t_2, \ldots$. Well before time $t_r$, Sam signs a certificate announcing that $pk_r$ is the key for time $t_r$. At time $t_r$ (not before or after), Sam releases $sk_r$. Now Alice can create a delay of $x$ to any time $t_r$: she simply reads $pk_r$ and then $\text{delay}(x, t_r)$ is $\{x\}_{pk_r}$ (where it might be desirable to add salt depending on the application). Clearly anyone can obtain $x$ beyond the appointed hour.

Of course if Sam were not trustworthy he could fail to deliver $sk_r$ on schedule, release it early, or tell his friends the value early. We imagine that if Alice has issued $\text{delay}(x, t)$ to someone before $t$, then it might not be in her best interest for $x$ to become visible at $t$, or indeed Alice might be offline at that time. It follows that Alice cannot be relied on to do her own releasing (which she could of course do). So we need to find a way to guarantee the release of $x$ at $t$ without trusting any single party.
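For concreteness, the single-TTP scheme can be sketched as below. This is a minimal illustration rather than the paper's protocol: textbook RSA with tiny fixed primes stands in for whatever public-key scheme Sam would really use (it is completely insecure), and the class and function names are inventions of this sketch.

```python
import time

def keygen():
    # Textbook RSA with small fixed primes: insecure, purely illustrative.
    p, q, e = 1000003, 1000033, 65537
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), (n, pow(e, -1, phi))     # (pk_r, sk_r)

def enc(pk, m):
    n, e = pk
    return pow(m, e, n)                     # delay(x, t_r) = {x}_{pk_r}

def dec(sk, c):
    n, d = sk
    return pow(c, d, n)

class Sam:
    """The TTP: one key pair per time t_r; pk_r certified early, sk_r at t_r."""
    def __init__(self, times):
        self.keys = {t: keygen() for t in times}

    def public_key(self, t):                # readable well before t_r
        return self.keys[t][0]

    def release(self, t):                   # refused before t_r
        assert time.time() >= t, "sk_r is not released early"
        return self.keys[t][1]

sam = Sam([t0 := time.time() + 1])
c = enc(sam.public_key(t0), 42)             # Alice creates delay(42, t0)
time.sleep(1.1)
assert dec(sam.release(t0), c) == 42        # anyone can decrypt at or after t0
```

The whole construction stands or falls with Sam behaving exactly on schedule, which motivates the distributed version that follows.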
One description of the blockchain is that it represents a trusted third party made up of many individually untrustworthy actors. However it is not the sort of TTP that is obviously usable for creating exact delay. We will form an abstraction of what can be trusted of it later. What we can do, however, is exploit the same trust model assumed in the blockchain and give the participating processes additional capabilities which we assume are performed within the same trust model. The work in this paper is particularly suited to the trust model of private and *mainstream* hybrid blockchains. In each of these it is possible to identify parties who are motivated to work in a trustworthy manner for reasons other than because they cannot profit from not doing so.

Rather than have a single process creating key pairs, we assume that we can select $N$ participants (maybe all, maybe not all, of the blockchain parties) with the property that there is some $k$ such that $2(k - 1) < N$ and no more than $k - 1$ of the chosen participants are untrustworthy. We ask all of these $N$ processes $P_j$ to create a key pair $(pk_{ji}, sk_{ji})$ for each $t_i$, and individually to release the keys $pk_{ji}$ and $sk_{ji}$ on the same schedule as outlined above. To create $\text{delay}(x, t_i)$ Alice now uses a threshold scheme such as Shamir's [?] to deliver $N$ shares $s_j$ of $x$ such that any $k$ of them reveal $x$ but $k - 1$ reveal nothing. She encrypts $s_j$ with $pk_{jr}$ (where available), and $\text{delay}(x, t_r)$ is just the combination of these $\{s_j\}_{pk_{jr}}$. An untrustworthy participant can do one of the following to try to frustrate us:

- He can fail to produce $pk_{jr}$. But at least $k$ do.
- Where he has released $pk_{jr}$, he can release the corresponding $sk_{jr}$ early or late. But at least $k$ correct values do get released at $t_r$, and the shares $s_j$ deducible from the $sk_{jr}$ released early tell us nothing.
- He can release wrong values for $pk_{jr}$ or $sk_{jr}$. But the integrity of such a pair can be checked and has nothing to do with $s_j$.

It follows that Bob (and everybody else who has $\text{delay}(x, t_r)$) can get $k$ correct shares and deduce $x$, but that no-one can access $x$ through this value before that.
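A minimal sketch of the share-generation step follows, using a standard $(k, N)$ Shamir split over a prime field. The per-participant encryption of each share under $pk_{j,r}$ is left abstract since any public-key scheme will do; the prime and all names are assumptions of the sketch.

```python
import random

P = 2**127 - 1   # a Mersenne prime; shares live in GF(P), so x must be < P

def split(secret, k, n):
    """Shamir (k, n) split: any k shares reconstruct x, k - 1 reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    shares = []
    for j in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):          # Horner evaluation of the polynomial at j
            y = (y * j + c) % P
        shares.append((j, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at 0 recovers the secret from any k shares."""
    secret = 0
    for j, yj in shares:
        num = den = 1
        for m, _ in shares:
            if m != j:
                num = (num * -m) % P
                den = (den * (j - m)) % P
        secret = (secret + yj * num * pow(den, -1, P)) % P
    return secret

x, k, N = 123456789, 3, 5       # tolerates k - 1 = 2 bad participants; 2(k - 1) < N
shares = split(x, k, N)         # share s_j would then be encrypted under pk_{j,r}
assert reconstruct(random.sample(shares, k)) == x
```

Any $k$ of the decrypted shares suffice, so the construction degrades gracefully under the failure modes listed above.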
### 2.1 Blockchain assumptions and applications

The blockchain is, at the time of writing, widely touted as a solution to many problems in distributed data storage, asset registers, and transaction execution. There are a number of different views of what a blockchain (or distributed ledger) is, what can be assumed of it, and how it should be used. For us it is the following:

- A database with a collection $U$ of users, of whom a subgroup $M$ are "miners". Some users can be tied to real-life entities, and some are anonymous pseudonyms.
- Anyone can write into the database. They have a choice of whether to sign such items or not.
- The miners decide which items succeed in being written by a consensus mechanism. They only have the right to reject a write if accepting it would violate a consistency rule of the blockchain (e.g. a double spending transaction). They use some consensus mechanism to achieve this.
- The miners create blocks of writes which are issued in a strict sequence, which is enforced by each non-initial block including a cryptographic hash of its immediate predecessor. The blocks are internally authenticated by hashing (Merkle trees).
- They have a time-stamping mechanism that assigns times to items in blocks such that all times in a successor block are greater than all in its predecessor.
- Depending mainly on whether this is a public (i.e. anyone can mine) or private (mining is restricted to relatively few authorised parties) blockchain, there is the possibility that an issued block can be voted out of existence, so that history can change.

Blockchains generally represent assets as unspent transactions: there is a transaction transferring some money, shares, land etc. to Alice, and she has not spent it yet. Transactions between anonymous identities are effectively anonymous: ownership comes down to knowledge of some key. So although everyone using a blockchain can see what transactions have happened on it, the identities involved can be concealed, together with other information (such as what is being transferred) that is not essential to the ledger. So in particular all details of a transaction that are not required to be present simply for the blockchain to function can be delay encrypted.

Many stock exchanges and other services will require much greater transparency than this, meaning that things like the beneficial owners (before and after a transaction) may need to be recorded on a transaction. In current exchanges such information may be included but be restricted to certain parties, or only be made available (say) 30 minutes after the transaction. The author was asked by a stockbroking firm how such things could be made consistent with a blockchain where everything was public. The answers seemed obvious: use encryption where the subsequent access did not increase with the passage of time, and exact delay encryption (possibly coupled with ordinary encryption) where it did. However the author did not then know how to implement exact delay encryption without a TTP, so in a strong sense that conversation inspired the present paper.

The motivation for keeping transaction details secret is to keep the trading activity of some investor or broker secret so that others cannot make use of the information in deciding their own activity. Such encryption can conceal who a transaction is between, but cannot conceal the fact that trading in some security (for example) is happening. It is, however, possible for anyone who owns something to transfer it between two identities he owns, and until the delayed information is in the open it will look exactly the same as a real transaction. Thus real trades can, to some extent at least, be camouflaged.

Exact delay encryption is clearly also of great use in distributed sealed-bid auction and tendering protocols: bids must be sealed by delay encryption until the time (after the end of bidding) when they are opened. This is effectively an anti-corruption measure. It might also be used in e-voting protocols to prevent anyone from counting votes until the polls have closed.

3 Implementation

In any implementation we need to assess the threat model to decide how many parties need to generate keys for each time, and what number of them can be considered trustworthy. Is there any group of nodes that are considered more trustworthy: perhaps the miners in a private blockchain environment? If so, should generating key pairs be limited to them? In environments like a public blockchain, what motivation do we need to provide for participants to perform their role and do so in a trustworthy fashion? We imagine that the reward will take a similar form to that for mining, and that a node will be severely penalised for doing something wrong unless (in the case of failure to post keys) it has a good excuse.
Noting that a node passing keys early to its friends is not necessarily detectable, we can institute a mechanism whereby any node that demonstrates knowledge of another node's secret key early can claim a large penalty from it. By and large the penalties for failing in one's duty might be so large that the likelihood of any node not delivering keys as required is very small.

It will also have to be decided what granularity time will have: will key pairs be issued per second, minute, hour or day? If this granularity is smaller than the rate at which a blockchain delivers blocks, then we cannot rely on the blockchain as the mechanism for broadcasting secret keys, though there is no problem with recording the public keys, since they can be posted in groups before they are needed. Note that a secret key can always be verified relative to an already-posted public one. There are various potential mechanisms here, depending on circumstances. It may well be that some external agency is accepted as a reliable model of time and is used for the timestamping of blocks. Possibly the release and availability of secret keys can be judged by a collective or fault-tolerant mechanism relative to this. It must be clear, however, that the intervals between keys, which must be verifiably released at regular times, must be several times greater than the latency of the network connecting the nodes. If the only source of trading information is the underlying blockchain then there may be no reason to have a higher rate of release of secret keys. However in general we must expect that information is probably coming from more rapid data streams.

4 Generalised escrow

One can imagine a generalised delay operator that releases its contents under more general circumstances than the arrival of a particular time. If \( r \) is a condition based on time \( t \) and features of the state \( s \) that is:

- Observable deterministically everywhere with the same result.
- Once true, remains true: a change of state cannot make such a condition false when it was true. In essence such conditions are therefore equivalent to something of the form "there has been a past state such that \( P \)",

then it makes sense to escrow information \( x \) so that it is released when \( r \) is true. The conditions above make it unambiguous when \( x \) is to be released, even when different nodes may make independent assessments at slightly different times. We can thus imagine a generalised form of delay that we can write escrow\((x, r)\). This can be implemented in exactly the same way as delay except that it is not reasonable to expect nodes to create key-pairs tied to arbitrary conditions without prompting. It follows that anyone creating escrow\((x, r)\) will need to obtain the keys from enough parties and tie them to \( r \) so that it can create the encrypted shares of \( x \) that are needed. There will thus need to be a marketplace of keys which can be obtained from other parties who are prepared to release a public key tied to \( r \) (by signature) and monitor \( r \) to determine when to release the secret key. Such keys can be reused if multiple \( x \)'s are escrowed by the same \( r \). Examples of $r$ are the following (a sketch of how such latched conditions can be monitored follows the list):

- $t \geq t_0$ (giving the equivalent of $\text{delay}(\cdot, t_0)$)
- Company $X$ has breached condition $p$ (determinable from observable information) at some previous time.
- A legal warrant for the release of $x$ has been placed on the blockchain.
- The price of shares in $X$ has exceeded £5.
- The price of shares in $X$ was greater than £5 on 20 September 2017.
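The "once true, remains true" discipline can be made concrete by latching an ordinary predicate over an append-only history of states. The sketch below is an illustration only: the state fields `t` and `price_X` and the class name are assumptions, not part of any real monitoring design.

```python
from typing import Callable, Dict, List

State = Dict[str, float]   # e.g. {"t": ..., "price_X": ...}; illustrative fields

class EscrowCondition:
    """Wraps a predicate P over single states as 'there has been a past state
    such that P', which is monotone over an append-only history by construction."""
    def __init__(self, predicate: Callable[[State], bool]):
        self.predicate = predicate
        self.fired = False

    def update(self, state: State) -> bool:
        if not self.fired and self.predicate(state):
            self.fired = True          # latch: later states cannot un-fire it
        return self.fired              # once True, a monitoring node releases sk

# Conditions mirroring two of the examples above:
r_time = EscrowCondition(lambda s: s["t"] >= 1505865600)     # t >= t_0
r_price = EscrowCondition(lambda s: s["price_X"] > 5.0)      # price has exceeded £5

history: List[State] = [
    {"t": 1505800000, "price_X": 4.9},
    {"t": 1505865600, "price_X": 5.2},
    {"t": 1505900000, "price_X": 4.5},   # price falls back, but r_price stays true
]
for s in history:
    r_time.update(s)
    r_price.update(s)
assert r_time.fired and r_price.fired
```

Because the latch only ever moves from false to true, nodes that evaluate the same history at slightly different times still agree on whether release is due.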
The information required to evaluate such \( r \) should be stored in a form where it can reliably be computed: the same timed information should be available to all. Of course information in the blockchain automatically has this property. In situations where keys are released dependent on information stored in a blockchain, this approach (and indeed presumably any other approach to escrow) requires care if nodes depend on blocks that may later be deleted. For if enough nodes determine that condition $r$ is true based on a history $h,b$ and release the keys associated with $r$, it requires a considerable leap of faith to believe that all who have heard these keys will forget them when told that $b$ has been replaced by $b'$. In such circumstances it may therefore be necessary to tie the release of keys (or at least the determination of whether each $r$ is true or not) to the voting on blocks: if I base the computation of $r$ on a given history, I must be prepared to vote for it. Furthermore the parameters chosen must then ensure that the given history has enough support to ensure its success. This escrow is therefore much easier in blockchains where we can be confident of no branching history relevant to the determination of $r$.

Returning for a moment to the subject of securing smart contracts, it is clear that any smart contract can now be separated into the event that triggers it (which must be public) and escrowed code that is performed on this event. One of the actions of such a contract may be to perform another smart contract which is similarly delayed: our framework supports arbitrary nesting of this sort.

## Conclusions

We have demonstrated that threshold cryptography is the key to implementing exact delay encryption in an environment where the majority of players can be assumed trustworthy. We have also shown that this can be generalised provided we have more elaborate key generation and an underlying data semantics like that of a typical private blockchain, which should also generalise to some public ones.
Analysis of Multi-Organization Scheduling Algorithms

Johanne Cohen, Daniel Cordeiro, Denis Trystram, Frédéric Wagner

To cite this version: Johanne Cohen, Daniel Cordeiro, Denis Trystram, Frédéric Wagner. Analysis of Multi-Organization Scheduling Algorithms. Pasqua D'Ambra and Mario Rosario Guarracino and Domenico Talia. Parallel Processing, 16th International Euro-Par Conference, Aug 2010, Ischia, Italy. Springer, 6272, pp. 367–379, 2010, Lecture Notes in Computer Science. <inria-00536510>

Analysis of Multi-Organization Scheduling Algorithms

Johanne Cohen\textsuperscript{1}, Daniel Cordeiro\textsuperscript{2}, Denis Trystram\textsuperscript{2}, and Frédéric Wagner\textsuperscript{2}

\textsuperscript{1} Laboratoire d'Informatique PRiSM, Université de Versailles St-Quentin-en-Yvelines, 45 avenue des États-Unis, 78035 Versailles Cedex, France
\textsuperscript{2} LIG, Grenoble University, 51 avenue Jean Kuntzmann, 38330 Montbonnot Saint-Martin, France

Abstract. In this paper we consider the problem of scheduling on computing platforms composed of several independent organizations, known as the Multi-Organization Scheduling Problem (MOSP). Each organization provides both resources and tasks and follows its own objectives. We are interested in the best way to minimize the makespan on the entire platform when the organizations behave in a selfish way. We study the complexity of the MOSP problem with two different local objectives – makespan and average completion time – and show that MOSP is NP-hard in both cases. We formally define a selfishness notion, by means of restrictions on the schedules. We prove that selfish behavior imposes a lower bound of 2 on the approximation ratio for the global makespan. We present various approximation algorithms of ratio 2 which comply with these selfishness restrictions. These algorithms are experimentally evaluated through simulation, exhibiting good average performances.

1 Introduction

1.1 Motivation and Presentation of the Problem

The new generation of many-core machines and the now mature grid computing systems allow the creation of unprecedented massively distributed systems. In order to fully exploit such a large number of available processors and cores and reach the best performance, we need sophisticated scheduling algorithms that encourage users to share their resources and, at the same time, respect each user's own interests. Many of these new computing systems are composed of organizations that own and manage clusters of computers. A user of such systems submits his/her jobs to a scheduler system that can choose any available machine in any of these clusters. However, each organization that shares its resources aims to take maximum advantage of its own hardware. In order to improve cooperation between the organizations, local jobs should be prioritized. Finding an efficient schedule for the jobs using the available machines is a crucial problem. Although each user submits jobs locally in his/her own organization, it is necessary to optimize the allocation of the jobs for the whole platform in order to achieve good performance. The global performance and the performance perceived by the users will depend on how the scheduler allocates resources among all available processors to execute each job.

1.2 Related Work

From classical scheduling theory, the problem of scheduling parallel jobs is related to Strip packing [1].
It corresponds to packing a set of rectangles (without rotations and overlaps) into a strip of machines in order to minimize the height used. This problem was later extended to the case where the rectangles are packed into a finite number of strips [15, 16]. More recently, an asymptotic \((1 + \epsilon)\)-approximation (AFPTAS) with additive constant \(O(1)\) and with running time polynomial in \(n\) and in \(1/\epsilon\) was presented in [8]. Schwiegelshohn, Tchernykh, and Yahyapour [14] studied a very similar problem, where the jobs can be scheduled on non-contiguous processors. Their algorithm is a 3-approximation for the maximum completion time (makespan) if all jobs are known in advance, and a 5-approximation for the makespan in the on-line, non-clairvoyant case.

The Multi-Organization Scheduling Problem (MOSP) was introduced by Pascual et al. [12, 13] and studies how to efficiently schedule parallel jobs in new computing platforms, while respecting users' own selfish objectives. A preliminary analysis of the scheduling problem on homogeneous clusters was presented with the target of minimizing the makespan, resulting in a centralized 3-approximation algorithm. This problem was then extended for relaxed local objectives in [11].

The notion of cooperation between different organizations and the study of the impact of users' selfish objectives are directly related to Game Theory. The study of the Price of Anarchy [9] on non-cooperative games makes it possible to analyze how far the social costs – the results obtained by selfish decisions – are from the social optimum in different problems. In selfish load-balancing games (see [10] for more details), selfish agents aim to allocate their jobs on the machine with the smallest load. In these games, the social cost is usually defined as the completion time of the last job to finish (makespan). Several works studied this problem focusing on various aspects, such as convergence time to a Nash equilibrium [4], characterization of the worst-case equilibria [3], etc. We do not target such game-theoretical approaches here.

1.3 Contributions and Road Map

As suggested in the previous section, the problem of scheduling in multi-organization clusters has been studied from several different points of view. In this paper, we propose a theoretical analysis of the problem using classical combinatorial optimization approaches. Our main contribution is the extension and analysis of the problem for the case in which sequential jobs are submitted by *selfish organizations* that can handle different local objectives (namely, makespan and average completion times). We introduce new restrictions to the schedule that take into account the notion of *selfish organizations*, i.e., organizations that refuse to cooperate if their objectives could be improved just by executing one of their jobs earlier in one of their own machines. The formal description of the problem and the notations used in this paper are given in Section 2. Section 3 shows that any algorithm respecting our new selfishness restrictions cannot achieve approximation ratios better than 2 and that both problems are intractable. New heuristics for solving the problem are presented in Section 4. Simulation experiments, discussed in Section 5, show the good results obtained by our algorithms on average.

2 Problem Description and Notations

In this paper, we are interested in the scheduling problem in which different organizations own a physical cluster of identical machines that are interconnected.
They share resources and exchange jobs with each other in order to simultaneously maximize the profits of the collectivity and their own interests. All organizations intend to minimize the total completion time of all jobs (i.e., the global makespan) while they individually try to minimize their own objectives – either the makespan or the average completion time of their own jobs – in a selfish way. Although each organization agrees to cooperate with others in order to minimize the global makespan, individually it behaves in a selfish way. An organization can refuse to cooperate if in the final schedule one of its migrated jobs could be executed earlier in one of the machines owned by the organization.

Formally, we define our target platform as a grid computing system with $N$ different organizations interconnected by a middleware. Each organization $O^{(k)}$ ($1 \leq k \leq N$) has $m^{(k)}$ identical machines available that can be used to run jobs submitted by users from any organization. Each organization $O^{(k)}$ has $n^{(k)}$ jobs to execute. Each job $J_i^{(k)}$ ($1 \leq i \leq n^{(k)}$) will use one processor for exactly $p_i^{(k)}$ units of time\footnote{All machines are identical, i.e., every job will be executed at the same speed independently of the chosen machine.}. No preemption is allowed, i.e., after its activation, a job runs until its completion at time $C_i^{(k)}$. We denote the makespan of a particular organization $k$ by $C_{\text{max}}^{(k)} = \max_{1 \leq i \leq n^{(k)}} (C_i^{(k)})$ and its sum of completion times as $\sum C_i^{(k)}$. The global makespan for the entire grid computing system is defined as $C_{\text{max}} = \max_{1 \leq k \leq N} (C_{\text{max}}^{(k)})$.

2.1 Local Constraint

The Multi-Organization Scheduling Problem, as first described in [12], consists in minimizing the global makespan ($C_{\text{max}}$) with an additional local constraint: at the end, no organization can have its makespan increased compared with the makespan that the organization could have obtained by scheduling the jobs in its own machines ($C_{\text{max}}^{(k)\text{ local}}$). More formally, we call MOSP($C_{\text{max}}$) the following optimization problem:

$$\text{minimize } C_{\text{max}} \text{ such that, for all } k \ (1 \leq k \leq N), \ C_{\text{max}}^{(k)} \leq C_{\text{max}}^{(k)\text{ local}}$$

In this work, we also study the case where all organizations are interested locally in minimizing their average completion time while minimizing the global makespan. As in MOSP($C_{\text{max}}$), each organization imposes that the sum of completion times of its jobs cannot be increased compared with what the organization could have obtained using only its own machines ($\sum C_i^{(k)\text{ local}}$). We denote this problem MOSP($\sum C_i$) and the goal of this optimization problem is to:

$$\text{minimize } C_{\text{max}} \text{ such that, for all } k \ (1 \leq k \leq N), \ \sum C_i^{(k)} \leq \sum C_i^{(k)\text{ local}}$$

2.2 Selfishness

In both MOSP($C_{\text{max}}$) and MOSP($\sum C_i$), while the global schedule might be computed by a central entity, the organizations keep control on the way they execute the jobs in the end. This property means that, in theory, it is possible for organizations to cheat the devised global schedule by re-inserting their jobs earlier in the local schedules. In order to prevent such behavior, we define a new restriction on the schedule, called \textit{selfishness restriction}. The idea is that, in any schedule respecting this restriction, no single organization can improve its local schedule by cheating. Given a fixed schedule, let $J_f^{(l)}$ be the first foreign job scheduled to be executed in $O^{(k)}$ (or the first idle time if $O^{(k)}$ has no foreign job) and $J_i^{(k)}$ any job belonging to $O^{(k)}$. Then, the \textit{selfishness restriction} forbids any schedule where $C_f^{(l)} < C_i^{(k)} - p_i^{(k)}$. In other words, $O^{(k)}$ refuses to cooperate if one of its jobs could be executed earlier in one of $O^{(k)}$'s machines even if this leads to a larger global makespan.
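The restriction can be made concrete as a predicate on a fixed schedule. The sketch below assumes one machine per organization and represents jobs as (owner, start, length) triples; these representational choices are assumptions of the sketch, not the paper's formalism.

```python
def respects_selfishness(schedule, k):
    """schedule[m]: list of (owner, start, length) triples on machine m, where
    machine k belongs to O^(k). The forbidden situation is
    C_f^(l) < C_i^(k) - p_i^(k): the first foreign job on O^(k)'s machine
    completes before some own job of O^(k) starts."""
    foreign = [(s, p) for (o, s, p) in schedule[k] if o != k]
    if not foreign:
        return True                    # simplification: ignores 'first idle time'
    s_f, p_f = min(foreign)            # first foreign job (earliest start)
    own_starts = [s for mach in schedule for (o, s, p) in mach if o == k]
    return all(s_f + p_f >= s for s in own_starts)

# Machine 1 runs a foreign job of O^(1) first; O^(2)'s own unit jobs start
# only after that job completes, so O^(2) would refuse this schedule:
sched = [
    [(0, 0.0, 3.0)],                                   # machine 0 (O^(1))
    [(0, 0.0, 3.0), (1, 3.0, 1.0), (1, 4.0, 1.0)],     # machine 1 (O^(2))
]
assert not respects_selfishness(sched, 1)
```

Here $O^{(2)}$ could cheat by running its own unit jobs before the foreign job, which is exactly what the restriction is designed to rule out.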
3 Complexity Analysis

3.1 Lower Bounds

Pascual et al. [12] showed, with an instance having two organizations and two machines per organization, that every algorithm that solves MOSP (for rigid, parallel jobs and $C_{\text{max}}$ as local objective) has at least a $\frac{3}{2}$ approximation ratio when compared to the optimal makespan that could be obtained \textit{without the local constraints}. We show that the same bound applies asymptotically even with a larger number of organizations. Take the instance depicted in Figure 1a. $O^{(1)}$ initially has two jobs of size $N$ and all the others initially have $N$ jobs of size 1. All organizations contribute only 1 machine each. The optimal makespan for this instance is $N + 1$ (Figure 1b); nevertheless it delays jobs from $O^{(2)}$ and, as a consequence, does not respect MOSP's local constraints. The best possible makespan that respects the local constraints (whether the local objective is the makespan or the average completion time) is $\frac{3N}{2}$, as shown in Figure 1c.

3.2 Selfishness and Lower Bounds

Although all organizations will likely cooperate with each other to achieve the best global makespan possible, their selfish behavior will certainly impact the quality of the best attainable global makespan. We study here the impact of the new selfishness restrictions on the quality of the achievable schedules. We show that these restrictions impact MOSP($C_{\text{max}}$) and MOSP($\sum C_i$) as compared with unrestricted schedules and, moreover, that MOSP($C_{\text{max}}$) with selfishness restrictions suffers from limited performance as compared to MOSP($C_{\text{max}}$) with local constraints.

\textbf{Proposition 1.} Any approximation algorithm for both MOSP($C_{\text{max}}$) and MOSP($\sum C_i$) has ratio greater than or equal to 2 regarding the optimal makespan without constraints if all organizations behave selfishly.

Proof. We prove this result by using the example described in Figure 1. It is clear from Figure 1b that an optimal solution for a schedule without local constraints can be achieved in $N + 1$. However, with added selfishness restrictions, Figure 1a (with a makespan of $2N$) represents the only valid schedule possible. We can, therefore, conclude that local constraints combined with selfishness restrictions imply that no algorithm can provide an approximation ratio better than 2 when compared with the problem without constraints. □

Proposition 1 gives a ratio regarding the optimal makespan without the local constraints imposed by MOSP. We can show that a similar approximation ratio also applies for MOSP($C_{\text{max}}$) regarding the optimal makespan even if MOSP constraints are respected.

**Proposition 2.** Any approximation algorithm for MOSP($C_{\text{max}}$) has ratio greater than or equal to $2 - \frac{2}{N}$ regarding the optimal makespan with local constraints if all organizations behave selfishly.
![Figure 2: Ratio between the global optimum makespan with MOSP constraints and the makespan that can be obtained by MOSP($C_{\text{max}}$) with selfish organizations.](image)

Proof. Take the instance depicted in Figure 2a: $O^{(1)}$ initially has $N$ jobs of size 1 and $O^{(N)}$ has two jobs of size $N - 1$. The optimal solution that respects MOSP local constraints is given in Figure 2b and has $C_{\text{max}}$ equal to $N$. Nevertheless, the best solution that respects the selfishness restrictions is the initial instance with a $C_{\text{max}}$ equal to $2N - 2$. So, the ratio of the optimal solution with the selfishness restrictions to the optimal solution with MOSP constraints is $2 - \frac{2}{N}$. □

### 3.3 Computational Complexity

This section studies how hard it is to find optimal solutions for MOSP even for the simpler case in which all organizations contribute only one machine and two jobs. We consider the decision version of the MOSP defined as follows:

**Instance:** a set of $N$ organizations (for $1 \leq k \leq N$, organization $O^{(k)}$ has $n^{(k)}$ jobs, $m^{(k)}$ identical machines, and makespan as the local objective) and an integer $\ell$.

**Question:** Does there exist a schedule with a makespan less than $\ell$?

**Theorem 1.** MOSP($C_{\text{max}}$) is strongly NP-complete.

**Proof.** It is straightforward to see that MOSP($C_{\text{max}}$) ∈ NP. Our proof is based on a reduction from the well-known 3-PARTITION problem [5]:

**Instance:** a bound $B \in Z^+$ and a finite set $A$ of $3m$ integers $\{a_1, \ldots, a_{3m}\}$, such that every element of $A$ is strictly between $B/4$ and $B/2$ and such that $\sum_{i=1}^{3m} a_i = mB$.

**Question:** can $A$ be partitioned into $m$ disjoint sets $A_1, A_2, \ldots, A_m$ such that, for all $1 \leq i \leq m$, $\sum_{a \in A_i} a = B$ and $A_i$ is composed of exactly three elements?

Given an instance of 3-PARTITION, we construct an instance of MOSP where, for $1 \leq k \leq 3m$, organization $O^{(k)}$ initially has two jobs $J_1^{(k)}$ and $J_2^{(k)}$ with $p_1^{(k)} = (m + 1)B + 7$ and $p_2^{(k)} = (m + 1)a_k + 1$, and all other organizations have two jobs with processing time equal to 2. We then set $\ell$ to be equal to $(m + 1)B + 7$. Figure 3 depicts the described instance. This construction is performed in polynomial time. Now, we prove that $A$ can be split into $m$ disjoint subsets $A_1, \ldots, A_m$, each one summing up to $B$, if and only if this instance of MOSP has a solution with $C_{\text{max}} \leq (m + 1)B + 7$.

Assume that $A = \{a_1, \ldots, a_{3m}\}$ can be partitioned into $m$ disjoint subsets $A_1, \ldots, A_m$, each one summing up to $B$. In this case, we can build an optimal schedule for the instance as follows:

- for $1 \leq k \leq 3m$, $J_1^{(k)}$ is scheduled on machine $k$;
- for $3m + 1 \leq k \leq 4m$, $J_1^{(k)}$ and $J_2^{(k)}$ are scheduled on machine $k$;
- for $1 \leq i \leq m$, let $A_i = \{a_{i_1}, a_{i_2}, a_{i_3}\} \subseteq A$. The jobs $J_2^{(a_{i_1})}$, $J_2^{(a_{i_2})}$ and $J_2^{(a_{i_3})}$ are scheduled on machine $3m + i$.

So, the global $C_{\text{max}}$ is $(m + 1)B + 7$ and the local constraints are respected.

Conversely, assume that MOSP has a solution with $C_{\text{max}} \leq (m + 1)B + 7$. The total work ($W$) of the jobs that must be executed is $W = 3m((m + 1)B + 7) + 2 \cdot 2m + (m + 1)\sum_{i=1}^{3m} a_i + 3m = 4m((m + 1)B + 7)$. Since we have exactly $4m$ organizations, the solution must be the optimal solution and there are no idle times in the scheduling.
Moreover, $3m$ machines must execute only one job of size $(m + 1)B + 7$. W.l.o.g., we can consider that for $3m + 1 \leq k \leq 4m$, machine $k$ performs jobs of size less than $(m + 1)B + 7$. To prove our proposition, we first show two lemmas:

Lemma 1. For all $3m + 1 \leq k \leq 4m$, at most four jobs of size not equal to 2 can be scheduled on machine $k$ if $C_{\text{max}}^{(k)} \leq (m+1)B + 7$.

Proof. It is enough to notice that all jobs of size not equal to 2 are greater than $(m+1)B/4 + 1$, that $C_{\text{max}}$ must be equal to $(m+1)B + 7$ and that $m+1 > 3$. □

Lemma 2. For all $3m + 1 \leq k \leq 4m$, exactly two jobs of size 2 are scheduled on each machine $k$ if $C_{\text{max}}^{(k)} \leq (m+1)B + 7$.

Proof. We prove this lemma by contradiction. Assume that there exists a machine $k$ such that at most one job of size 2 is scheduled on it. So, by definition of the size of jobs, all jobs scheduled in machine $k$ have a size greater than $(m+1)B/4 + 1$. As a consequence of Lemma 1, since at most four jobs can be scheduled on machine $k$, the total work on this machine is $(m+1)B + y + 2$ where $y \leq 4$. This fact is in contradiction with the facts that there does not exist idle processing time and that $\ell = (m+1)B + 7$. □

Now, we construct $m$ disjoint subsets $A_1, A_2, \ldots, A_m$ of $A$ as follows: for all $1 \leq i \leq m$, $a_j$ is in $A_i$ if the job with size $(m+1)a_j + 1$ is scheduled on machine $3m+i$. Note that all elements of $A$ belong to one and only one set in $\{A_1, \ldots, A_m\}$. We prove that $A$ is a partition with the desired properties. We focus on a fixed element $A_i$. By definition of $A_i$, we have that

$$4 + \sum_{a_j \in A_i} ((m+1)a_j + 1) = (m+1)B + 7 \Rightarrow \sum_{a_j \in A_i} ((m+1)a_j + 1) = (m+1)B + 3$$

Since $m+1 > 3$, we have $\sum_{a_j \in A_i} (m+1)a_j = (m+1)B$. Thus, we can deduce that $A_i$ is composed of exactly three elements and $\sum_{a \in A_i} a = B$. □

![Fig. 3: Reduction of MOSP($C_{\text{max}}$) from 3-Partition](image)

We continue by showing that even if all organizations are interested locally in the average completion time, the problem is still NP-complete. We prove NP-completeness of the MOSP($\sum C_i$) problem (having a formulation similar to the MOSP($C_{\text{max}}$) decision problem) using a reduction from the PARTITION problem. The idea here is similar to the one used in the previous reduction, but the $\sum C_i$ constraints heavily restrict the allowed movements of jobs when compared to the $C_{\text{max}}$ constraints.

**Theorem 2.** MOSP\((\sum C_i)\) is NP-complete.

**Proof.** First, note that it is straightforward to see that MOSP\((\sum C_i) \in NP\). We use the PARTITION [5] problem to prove this theorem.

**Instance:** a set of \(n\) integers \(s_1, s_2, \ldots, s_n\).

**Question:** does there exist a subset \(J \subseteq I = \{1, \ldots, n\}\) such that
\[ \sum_{i \in J} s_i = \sum_{i \in I \setminus J} s_i? \]

Consider an integer \(M > \sum_i s_i\). Given an instance of the PARTITION problem, we construct an instance of the MOSP\((\sum C_i)\) problem, as depicted in Figure 4a. There are \(N = 2n + 2\) organizations having two jobs each. The organizations \(O^{(2n+1)}\) and \(O^{(2n+2)}\) have two jobs with processing time 1. Each integer \(s_i\) from the PARTITION problem corresponds to a pair of jobs \(t'_i\) and \(t''_i\), with processing times equal to \(2^i M\) and \(2^i M + s_i\) respectively.
We set \(J^{(k)}_1 = t'_{k}\), for all \(1 \leq k \leq n\) and \(J^{(k)}_1 = t''_{k-n}\), for all \(n + 1 \leq k \leq 2n\). We set \(K\) to \(\frac{\sum_i t'_i + \sum_i t''_i + 4}{2}\), i.e. the total work divided by the number of organizations. To complete the construction, for any \(k\), \(1 \leq k \leq 2n\), the organization \(O^{(k)}\) also has a job \(J^{(k)}_2\) with processing time equal to \(K\). We set \(\ell\) to \(K\). This construction is performed in polynomial time and we prove that it is a reduction.

First, assume that there exists \(J \subseteq I\) such that \(\sum_{i \in J} s_i = \sum_{i \in I \setminus J} s_i\). We construct a valid schedule with optimal global makespan for MOSP\((\sum C_i)\). For all \(s_i\), if \(i \in J\), we schedule job \(t'_i\) in organization \(O^{(N)}\) and job \(t''_i\) in organization \(O^{(N-1)}\). Otherwise, we schedule \(t'_i\) in \(O^{(N-1)}\) and \(t''_i\) in \(O^{(N)}\). The problem constraints impose that organizations \(O^{(N-1)}\) and \(O^{(N)}\) will first schedule their own jobs (two jobs of size 1). The remaining jobs will be scheduled in non-decreasing order of processing time, using the Shortest Processing Time first (SPT) rule. This schedule respects MOSP's constraints of not increasing the organization's average completion time because each job is delayed by at most its own size (by construction, the sum of all jobs scheduled before the job being scheduled is smaller than the size of the job). \(C^{(N-1)}_{\text{max}}\) will be equal to \(2 + \sum_i 2^i M + \sum_{i \in J} s_i\). Since \(J\) is a partition, \(C^{(N-1)}_{\text{max}}\) is exactly equal to \(C^{(N)}_{\text{max}} = 2 + \sum_i 2^i M + \sum_{i \in I \setminus J} s_i\). Also, \(C^{(N)}_{\text{max}} = C^{(N-1)}_{\text{max}} = K\), which gives us the theoretical lower bound for \(C_{\text{max}}\).

Second, assume MOSP\((\sum C_i)\) has a solution with \(C_{\text{max}} \leq K\). We prove that \(\{s_1, s_2, \ldots, s_n\}\) is partitioned into 2 disjoint sets with the desired properties. This solution of MOSP\((\sum C_i)\) has the structure drawn in Figure 4b. To achieve a \(C_{\text{max}}\) equal to \(K\), the scheduler must keep all jobs that have size exactly equal to \(K\) in their initial organizations. Moreover all jobs of size 1 must also remain in their initial organizations, otherwise these jobs would be delayed. The remaining jobs (all \(t'_i\) and \(t''_i\) jobs) must be scheduled either in organization \(O^{(N-1)}\) or \(O^{(N)}\). Each processor must execute a total work of \(\frac{2K - 4}{2} = \frac{2 \sum_i 2^i M + \sum_i s_i}{2} = \sum_i 2^i M + \frac{\sum_i s_i}{2}\) to achieve a makespan equal to \(K\). Let \(J \subseteq I = \{1, \ldots, n\}\) such that \(i \in J\) if \(t''_i\) was scheduled on organization \(O^{(N-1)}\). \(O^{(N-1)}\) executes a total work of \(W^{(N-1)} = \sum_i 2^i M + \sum_{i \in J} s_i\), which must be equal to the total work of \(O^{(N)}\), \(W^{(N)} = \sum_i 2^i M + \sum_{i \in I \setminus J} s_i\). Since \(\sum_i s_i < M\), we have $W^{(N-1)} \equiv \sum_{i \in J} s_i \pmod{M}$ and $W^{(N)} \equiv \sum_{i \in I \setminus J} s_i \pmod{M}$. This means that $W^{(N-1)} = W^{(N)} \implies (W^{(N-1)} \mod M) = (W^{(N)} \mod M) \implies \sum_{i \in J} s_i = \sum_{i \in I \setminus J} s_i$. If MOSP($\sum C_i$) has a solution with $C_{\text{max}} \leq K$, then the set $J$ is a solution for PARTITION. □

![Initial instance](a) ![Optimum](b)

Fig. 4: Reduction of MOSP($\sum C_i$) from PARTITION
## 4 Algorithms

In this section, we present three different heuristics to solve MOSP($C_{\text{max}}$) and MOSP($\sum C_i$). All algorithms present the additional property of respecting the selfishness restrictions.

### 4.1 Iterative Load Balancing Algorithm

The Iterative Load Balancing Algorithm (ILBA) [13] is a heuristic that redistributes the load from the most loaded organizations. The idea is to incrementally rebalance the load without delaying any job. First the less loaded organizations are rebalanced. Then, one-by-one, each organization has its load rebalanced. The heuristic works as follows. First, each organization schedules its own jobs locally and the organizations are enumerated by non-decreasing makespans, i.e. $C_{\text{max}}^{(1)} \leq C_{\text{max}}^{(2)} \leq \ldots \leq C_{\text{max}}^{(N)}$. For $k = 2$ until $N$, jobs from $O^{(k)}$ are rescheduled sequentially, and assigned to the least loaded of organizations $O^{(1)} \ldots O^{(k)}$. Each job is rescheduled by ILBA either earlier than or at the same time as it was scheduled before the migration. In other words, no job is delayed by ILBA, which guarantees that the local constraint is respected for MOSP($C_{\text{max}}$) and MOSP($\sum C_i$).

### 4.2 LPT-LPT and SPT-LPT Heuristics

We developed and evaluated (see Section 5) two new heuristics based on the classical LPT (Longest Processing Time First [6]) and SPT (Shortest Processing Time First [2]) algorithms for solving MOSP($C_{\text{max}}$) and MOSP($\sum C_i$), respectively. Both heuristics work in two phases. During the first phase, all organizations minimize their own local objectives. Each organization starts applying LPT for its own jobs if the organization is interested in minimizing its own makespan, or starts applying SPT if the organization is interested in its own average completion time. The second phase is when all organizations cooperatively minimize the makespan of the entire grid computing system without worsening any local objective. This phase works as follows: each time an organization becomes idle, i.e., it finishes the execution of all jobs assigned to it, the longest job that has not yet started is migrated and executed by the idle organization. This greedy algorithm works like a global LPT, always choosing the longest job yet to be executed among jobs from all organizations.

### 4.3 Analysis

ILBA, LPT-LPT and SPT-LPT do not delay any of the jobs when compared to the initial local schedule. During the rebalancing phase, all jobs either remain in their original organization or are migrated to an organization that became idle at a preceding time. The implications are:

- the *selfishness* restriction is respected – if a job is migrated, it will start before the completion time of the last job of the initial organization;
- if the organizations' local objective is to minimize the makespan, migrating a job to a previous moment in time will decrease the job's completion time and, as a consequence, it will not increase the initial makespan of the organization;
- if the organizations' local objective is to minimize the average completion time, migrating a job from the initial organization to another that became idle at a previous moment in time will decrease the completion time of all jobs from the initial organization and of the job being migrated. This means that the $\sum C_i$ of the jobs from the initial organization is always decreased;
- the rebalancing phase of all three algorithms works like a list scheduling algorithm. Graham's classical approximation ratio $2 - \frac{1}{N}$ of list scheduling algorithms [6] holds for all of them.
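A simplified rendering of LPT-LPT as a discrete-event simulation is sketched below, assuming one machine per organization (the paper's general setting allows $m^{(k)}$ machines each); function and variable names are inventions of the sketch.

```python
import heapq

def lpt_lpt(jobs_per_org):
    """jobs_per_org[k]: processing times of O^(k)'s jobs. Returns (Cmax,
    schedule) where schedule maps each machine to (start, length, owner)."""
    queues = [sorted(js, reverse=True) for js in jobs_per_org]   # local LPT order
    events = [(0.0, k) for k in range(len(jobs_per_org))]        # (free_time, machine)
    heapq.heapify(events)
    schedule = {k: [] for k in range(len(jobs_per_org))}
    cmax = 0.0
    while any(queues):
        t, k = heapq.heappop(events)
        if queues[k]:
            # Own jobs first, in LPT order: no job of O^(k) is ever delayed.
            owner, p = k, queues[k].pop(0)
        else:
            # Idle organization: take the globally longest not-yet-started job.
            owner = max((j for j in range(len(queues)) if queues[j]),
                        key=lambda j: queues[j][0])
            p = queues[owner].pop(0)
        schedule[k].append((t, p, owner))
        cmax = max(cmax, t + p)
        heapq.heappush(events, (t + p, k))
    return cmax, schedule

# The lower-bound instance of Figure 1 with N = 4: O^(1) has two jobs of size
# N, the others N unit jobs. The selfish outcome has Cmax = 2N = 8.
print(lpt_lpt([[4, 4], [1] * 4, [1] * 4, [1] * 4])[0])
```

On this instance no migration can start a job earlier than its local start, so the heuristic reproduces the selfish schedule discussed in Section 3.2.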
We recall from Section 3.2 that no algorithm respecting the selfishness restrictions can achieve an approximation ratio for MOSP($C_{\text{max}}$) better than 2. Since all our algorithms reach an approximation ratio of 2, no further enhancement is possible without removing the selfishness restrictions.

## 5 Experiments

We conducted a series of simulations comparing ILBA, LPT-LPT, and SPT-LPT under various experimental settings. The workload was randomly generated with parameters matching the typical environment found in academic grid computing systems [13]. We evaluated the algorithms on instances containing a random number of machines, organizations, and jobs of different sizes. In our tests, the number of initial jobs in each organization follows a Zipf distribution with exponent equal to 1.4267, which best models virtual organizations in real-world grid computing systems [7]. We are interested in the improvement of the global $C_{\text{max}}$ provided by the different algorithms. The results are evaluated by comparing the $C_{\text{max}}$ obtained by the algorithms with the well-known theoretical lower bound for the scheduling problem without constraints, $LB = \max\left(\frac{\sum_{i,k} p_i^{(k)}}{\sum_k m^{(k)}},\; p_{\text{max}}\right)$.

Our main conclusion is that, despite the fact that the selfishness restrictions are respected by all heuristics, ILBA and LPT-LPT obtained near optimal results in most cases. This is not unusual, since it follows the pattern of experimental behavior of standard list scheduling algorithms, for which it is easy to obtain a near optimal schedule when the number of tasks grows large. SPT-LPT produces worse results due to the effect of applying SPT locally. However, in some particular cases, in which the number of jobs is not much larger than the number of machines available, the experiments yield more interesting results. Figure 5 shows the histogram of a representative instance of such a particular case. The histograms show the frequency of the ratio of the $C_{\text{max}}$ obtained to the lower bound over 5000 different instances with 20 organizations and 100 jobs for ILBA, LPT-LPT and SPT-LPT. Similar results have been obtained for many different sets of parameters. LPT-LPT outperforms ILBA (and SPT-LPT) on most instances and its average ratio to the lower bound is less than 1.3.

Fig. 5: Frequency of results obtained by ILBA, LPT-LPT, and SPT-LPT when the results are not always near optimal.
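To make the evaluation metric concrete, here is a toy computation of the ratio reported in the histograms, reusing the hypothetical `rebalance` sketch from Section 4 and again assuming one machine per organization (so \(\sum_k m^{(k)}\) is just the number of organizations):

```python
orgs = [[9, 7, 3], [8, 2], [5], [4, 4, 4]]   # four organizations' job lists
jobs = [p for q in orgs for p in q]
lb = max(sum(jobs) / len(orgs), max(jobs))   # LB = max(total work / machines, p_max)
print(rebalance(orgs, rule="LPT") / lb)      # 12 / 11.5 ~ 1.04 on this instance
```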
## 6 Concluding Remarks

In this paper, we have investigated scheduling on multi-organization platforms. We presented the MOSP($C_{\text{max}}$) problem from the literature and extended it to a new related problem, MOSP($\sum C_i$), with another local objective. In each case we studied how to improve the global makespan while guaranteeing that no organization worsens its own results. We first showed that both versions MOSP($C_{\text{max}}$) and MOSP($\sum C_i$) of the problem are NP-hard. Furthermore, we introduced the concept of *selfishness* in these problems, which corresponds to additional scheduling restrictions designed to reduce the incentive for the organizations to cheat locally and disrupt the global schedule. We proved that no algorithm respecting the selfishness restrictions can achieve a better approximation ratio than 2 for MOSP($C_{\text{max}}$). Two new scheduling algorithms were proposed, namely LPT-LPT and SPT-LPT, in addition to ILBA from the literature. All these algorithms are list scheduling algorithms and thus achieve a 2-approximation. We provided an in-depth analysis of these algorithms, showing that all of them respect the selfishness restrictions. Finally, all these algorithms were implemented and analysed through experimental simulations. The results show that our new LPT-LPT outperforms ILBA and that all algorithms exhibit near optimal performance when the number of jobs becomes large. Future research directions will focus more on game theory. We intend to study schedules in the case where several organizations secretly cooperate to cheat the central authority.

References

1. Baker, B.S., Coffman, Jr., E.G., Rivest, R.L.: Orthogonal packings in two dimensions. SIAM Journal on Computing 9(4), 846–855 (Nov 1980)
2. Bruno, J.L., Coffman, Jr., E.G., Sethi, R.: Scheduling independent tasks to reduce mean finishing time. Communications of the ACM 17(7), 382–387 (Jul 1974)
3. Caragiannis, I., Flammini, M., Kaklamanis, C., Kanellopoulos, P., Moscardelli, L.: Tight bounds for selfish and greedy load balancing. In: Proceedings of the 33rd International Colloquium on Automata, Languages and Programming. LNCS, vol. 4051, pp. 311–322. Springer Berlin (Jun 2006)
4. Even-Dar, E., Kesselman, A., Mansour, Y.: Convergence time to Nash equilibria. ACM Transactions on Algorithms 3(3), 32 (Aug 2007)
5. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman (Jan 1979)
6. Graham, R.L.: Bounds on multiprocessing timing anomalies. SIAM Journal on Applied Mathematics 17(2), 416–429 (Mar 1969)
7. Iosup, A., Dumitrescu, C., Epema, D., Li, H., Wolters, L.: How are real grids used? The analysis of four grid traces and its implications. In: 7th IEEE/ACM International Conference on Grid Computing, pp. 262–269 (Sep 2006)
8. Jansen, K., Otte, C.: Approximation algorithms for multiple strip packing. In: Proceedings of the 7th Workshop on Approximation and Online Algorithms (WAOA). Copenhagen, Denmark (Sep 2009)
9. Koutsoupias, E., Papadimitriou, C.: Worst-case equilibria. In: Proceedings of the 16th Annual Symposium on Theoretical Aspects of Computer Science. LNCS, vol. 1563, pp. 404–413. Springer Berlin, Trier, Germany (Mar 1999)
10. Nisan, N., Roughgarden, T., Tardos, E., Vazirani, V.V.: Algorithmic Game Theory. Cambridge University Press (Sep 2007)
11. Ooshita, F., Izumi, T., Izumi, T.: A generalized multi-organization scheduling on unrelated parallel machines. In: International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), pp. 26–33. IEEE Computer Society, Los Alamitos, CA, USA (Dec 2009)
12. Pascual, F., Rzadca, K., Trystram, D.: Cooperation in multi-organization scheduling. In: Euro-Par 2007 Parallel Processing. LNCS, vol. 4641, pp. 224–233. Springer Berlin (Aug 2007)
13. Pascual, F., Rzadca, K., Trystram, D.: Cooperation in multi-organization scheduling. Concurrency and Computation: Practice & Experience 21(7), 905–921 (May 2009)
14. Schwiegelshohn, U., Tchernykh, A., Yahyapour, R.: Online scheduling in grids. In: IEEE International Symposium on Parallel and Distributed Processing (IPDPS), pp. 1–10 (Apr 2008)
15. Ye, D., Han, X., Zhang, G.: On-line multiple-strip packing. In: Proceedings of the 3rd International Conference on Combinatorial Optimization and Applications (COCOA). LNCS, vol. 5573, pp. 155–165. Springer Berlin (Jun 2009)
16. Zhuk, S.N.: Approximate algorithms to pack rectangles into several strips. Discrete Mathematics and Applications 16(1), 73–85 (Jan 2006)
Empirical tax research in accounting Douglas A. Shackelford\textsuperscript{a,}\textsuperscript{*}, Terry Shevlin\textsuperscript{b} \textsuperscript{a}Kenan-Flagler Business School, University of North Carolina, McColl Building, Campus Box 3490, Chapel Hill, NC 27599-3490, USA \textsuperscript{b}School of Business Administration, University of Washington, Seattle, WA 98195-3200, USA Received 14 October 1999; received in revised form 21 February 2001 Abstract This paper traces the development of archival, microeconomic-based, empirical income tax research in accounting over the last 15 years. The paper details three major areas of research: (i) the coordination of tax and non-tax factors, (ii) the effects of taxes on asset prices, and (iii) the taxation of multijurisdictional (international and interstate) commerce. Methodological concerns of particular interest to this field also are discussed. The paper concludes with a discussion of possible directions for future research. © 2001 Elsevier Science B.V. All rights reserved. JEL classification: M41; H25; K34; G32; F23 Keywords: Taxes; Empirical tax research; Non-tax costs; Financial reporting costs; Tax capitalization 1. Introduction Tax research has long attempted to address three questions of scholarly and policy interest: Do taxes matter? If not, why not? If so, how much? Current tax research in accounting addresses these questions using a framework developed \textsuperscript{*}Corresponding author. Tel.: +1-919-962-3197; fax: +1-919-962-0054. E-mail address: firstname.lastname@example.org (D.A. Shackelford). by Scholes and Wolfson (SW, 1992).\(^1\) This paper traces the genesis of the framework and its influence on the development of archival, microeconomic-based, empirical tax research in accounting over the last 15 years. It is intended to serve as a historical record, an introduction for doctoral students and other interested parties, and a guide for identifying important unresolved issues in the literature. Although tax research has a long history in economics and finance and many accounting practitioners specialize in tax planning and compliance, accounting academe was slow to adopt taxes as an important area of inquiry. Besides empirical inventory costing studies (e.g., Ball, 1972; Dopuch and Ronen, 1973; Sunder, 1973, 1975), tax research by accountants before the mid-1980s could be dichotomized into two lines: (a) legal research, evaluating the effects of taxes on exogenous transactions, usually published in law journals, and (b) policy studies, evaluating the distributional or efficiency effects of taxes, usually published in public economics journals. Few tax papers were published in general interest accounting journals. Although seminal studies in corporate finance, many of which examined tax issues (e.g., Modigliani and Miller, 1963), influenced financial accounting research, they did not similarly affect tax research in accounting. By the mid-1980s, finance was losing interest in tax research. Myers (1984, p. 588) expressed finance’s frustration with empirical tax studies in his presidential address, “I know of no study clearly demonstrating that a firm’s tax status has predictable, material effects on its debt policy. I think the wait for such a study will be protracted.” Scholes, a finance professor, and Wolfson, an accounting professor, responded by adopting a microeconomic perspective to analyze settings where taxes were likely important. The Scholes–Wolfson paradigm does not advance new theories or methodology. 
It focuses on neither detailed legal aspects nor policy recommendations. Rather, it adopts a positive approach in an attempt to explain the role of taxes in organizations. Drawing extensively from corporate finance and public economics, it merges two distinct bodies of knowledge: microeconomics and tax law. The paradigm is central to current empirical tax research in accounting, important in public economics, and somewhat influential in corporate finance. Its conceptual framework is developed around three central themes (known as all parties, all taxes, and all costs), none of which is particularly novel or counterintuitive:

- "Effective tax planning requires the [tax] planner to consider the tax implications of a proposed transaction for all of the parties to the transaction.
- Effective tax planning requires the planner, in making investment and financing decisions, to consider not only explicit taxes (tax dollars paid directly to taxing authorities) but also implicit taxes (taxes that are paid indirectly in the form of lower before-tax rates of return on tax-favored investments).
- Effective tax planning requires the planner to recognize that taxes represent only one among many business costs, and all costs must be considered in the planning process: to be implemented, some proposed tax plans may require exceedingly costly restructuring of the business." (SW, p. 2)

\(^1\)Scholes et al. (2001) is an updated, second edition of SW.

An example of all parties is considering both employer and employee taxes when structuring compensation. An example of all taxes is a municipal bond, which carries a lower interest rate because its interest is tax-exempt. An example of all costs is the tradeoff between corporate financial accounting and tax objectives. The three themes—all parties, all taxes, and all costs—provide a structure for tax management that achieves organizational goals, such as profit or wealth maximization. The themes imply that tax minimization is not necessarily the objective of effective tax planning. Instead, effective tax planning must be evaluated in the efficient design of organizations and through adoption of a contractual perspective. The paradigm implicitly assumes that if all contractual parties, all taxes (explicit and implicit), and all non-tax costs can be identified and controlled, then the observed tax behavior will be rational and predictable. Typically, the quality of research in this area is evaluated based on whether the research design identifies and controls for all parties, all taxes, and all costs. The paradigm is so widely accepted in accounting that differences between predicted and actual behavior are attributed to the unspecified exclusion of an important party, tax, or non-tax cost. Contrary evidence is presumed to reflect model misspecification or measurement error. No paper challenges the validity of the SW framework. The three themes, while providing an excellent analytical structure, are less effective for constructing rigorous tests. Because the framework operates as a set of maintained hypotheses (similar to utility or firm value maximization), any finding can be characterized as consistent with the theory because non-tax costs, such as financial reporting considerations, are difficult to quantify. To illustrate, suppose an accounting choice (e.g., accruals) is believed to be jointly determined by tax and financial reporting factors, neither of which is perfectly observable.
If empirical tests reveal that taxes are an important consideration, then the finding will be interpreted as evidence that financial reporting considerations are insufficiently important to affect taxes. If empirical tests reveal that taxes are not an important consideration, then the finding will be interpreted as evidence that financial reporting considerations overwhelm tax considerations in this setting. Despite its shortcomings, the framework accounts for the recent surge in tax research in accounting.\footnote{To calibrate the framework’s influence and recency, we reviewed the \textit{Journal of Accounting and Economics}, \textit{Journal of Accounting Research}, and \textit{The Accounting Review} for papers that include the word “tax” or any variant in their titles. The percentage of archival, empirical papers so entitled increased from 2 percent of all publications in the 1970s and 1980s to 7 percent in the 1990s. Excluding papers addressing accounting for income taxes, recent papers invariably cite Scholes and Wolfson or research referencing their framework.} Tax now rivals managerial accounting and auditing for second billing in the research community after financial accounting. The most active researchers in this area are well-trained empiricists with an understanding of tax law. Newly minted accounting doctoral students who combine professional tax experience with an understanding of microeconomics and finance are ideally situated to adopt the new tax perspective. An appreciation of the nuances of the tax law stands as a substantial barrier to entry for many accounting researchers, particularly in the more technically challenging areas, such as international tax and mergers and acquisitions. Most of the research is best described as documentation. In the early years of the framework, the demand for documentation was clear. For example, Scholes and Wolfson (1987) state, “What is most lacking in the literature at the moment is a documentation of the facts.” The literature is slowly shifting from documentation to explanation, understanding, and prediction, an evolution that is critical to the field’s advancement. Quasi-experimental opportunities (e.g., changes in the tax law) and data availability have directed tax research more than hypothesis testing of competing theories. In particular, the development of the framework coincided with passage of the Tax Reform Act of 1986 (TRA 86), which overhauled the US tax system. Many tax studies applied the framework to examine the economic effects of TRA 86 (e.g., Collins and Shackelford (1992), Matsunaga et al. (1992), and Scholes et al. (1992) among many others). At first, the empirical tax papers built on SW alone. Instead of a trunk with major branches, the tax literature grew like a wild bush, springing in many directions from the SW root. In recent years, at least three major areas of inquiry (tax and non-tax tradeoffs, taxes and asset prices, and multijurisdictional) have emerged. This review evaluates these three areas of greatest development in the hope that understanding the progress in these areas may provide insights into the factors that promote the production of empirical tax research in accounting. For example, current working papers in international tax reflect a much higher quality than the studies that were published in the early 1990s. The advances are attributable to improvements in theory, data, and research design. 
Similar improvements are evident in research evaluating the coordination of taxes and financial reporting considerations and in recent studies of implicit taxes (also known as tax capitalization) that attempt to quantify the effect of taxes on asset prices. Our challenge in this paper is to delineate tax research in accounting from tax research in other fields and from other types of accounting research. The multidisciplinary nature of taxes means that tax accountants often conduct microeconomics-based empirical research with non-accounting tax researchers (e.g., Scholes and Wolfson's joint work) and with non-tax accounting researchers, particularly financial accountants. It is also not unusual for the work to be published in economics journals (e.g., *Journal of Public Economics* and *National Tax Journal*) and leading finance journals. Thus, defining tax research in accounting becomes imprecise at best. To the extent possible, we have attempted to address this issue by concentrating on areas where accountants have made the greatest contribution to academe's understanding of taxes. For example, accountants have concentrated almost solely on income tax research. This focus likely reflects both the centrality of income measurement in the field of accounting and the historical emphasis on income tax consulting by tax accountants. However, the lines are blurring as tax accountants increasingly contribute to the broader academic field of taxes. By importing mainstream accounting research concerns (e.g., the role of earnings) into tax analyses, where accounting topics traditionally have been ignored, tax accountants are tilting tax inquiries toward longstanding accounting issues. In short, the body of knowledge produced in recent years by tax accountants has influenced both accounting research, infusing it with a tax perspective, and tax research, infusing it with an accounting perspective. Finally, besides the usual scholarly demand for understanding, the demand for microeconomics-based tax research in accounting is fueled partly by the popularity of the research in the classroom. An indication of the research-teaching link is the fact that the seminal work in the field (SW) is an MBA textbook. In the SW preface, Scholes and Wolfson attribute the framework to a frustration with the existing tax teaching materials. Later, through funding by the Ernst and Young Foundation, their course was taught to several hundred accounting (mostly tax) faculty in the late 1980s and early 1990s. Variants of the tax class are among the most popular MBA electives at many business schools. This unusually strong synergy between teaching and research in the tax area creates a demand for research that can be easily transformed into pedagogical materials (e.g., case studies). The next three sections concentrate on three major areas of tax research in accounting. Section 2 discusses studies that address the coordination of tax and non-tax factors. Section 3 details research linking asset prices and taxes. Section 4 reviews investigations of the taxation of multijurisdictional (international and interstate) commerce. Empirical tax research in accounting suffers from the research design limitations that are common to all empirical work (e.g., model specification, data limitations, measurement error, among others). Rather than providing detailed criticisms of each individual paper in Sections 2–4, Section 5 discusses six general methodological concerns that are particularly applicable to tax research in accounting.
Closing remarks follow.

2. Tax and non-tax tradeoffs

The largest body of tax research in accounting examines the coordination of taxes and other factors in business decisions. The tension surrounding these papers is that taxes cannot be minimized without affecting other organizational goals. Although these studies address each of the three questions of tax research (Do taxes matter? If not, why not? If so, how much?), they focus mostly on the second question, explaining why tax minimization might not be the optimal business strategy. Of the three themes of the framework (all parties, all taxes, and all costs), these papers rely heavily on "all costs", i.e., understanding taxes requires understanding non-tax factors. Some papers reflect "all parties", i.e., a multilateral contracting perspective, but the "all taxes" theme is generally ignored in these studies. This review of the tradeoff literature is dichotomized into papers that address the interaction of financial reporting and tax factors and papers that examine the effects of agency costs on tax minimization. The papers cover a wide range of settings including inventory, compensation, and tax shelters. Although it is difficult to summarize a large literature, common themes in these papers are:

- taxes are not a cost that taxpayers inevitably avoid;
- tax management is complex and involves many dimensions of business;
- the effects of financial reporting considerations on taxes are better understood than the effects of agency costs;
- quantification of non-tax costs has progressed slowly.

2.1. Financial reporting considerations

This section focuses on one non-tax factor of particular interest to the readership, financial reporting incentives. At the risk of oversimplification, financial reporting costs are those costs, real or perceived, related to reporting lower income or shareholders' equity. These costs are well discussed in the earnings management literature covered elsewhere in this issue. They are important to effective tax planning because tax-minimizing strategies often result in lower reported income. Many financial contracts with creditors, lenders, customers, suppliers, managers, and other stakeholders use accounting numbers to specify the terms of trade, influencing managers' willingness to report lower income. Thus, many choices in accounting, financing, marketing, production, and other business functions involve weighing the tax incentives to lower taxable income against the financial reporting incentives to increase book income. Although tax accounting and financial accounting often differ in revenue recognition and other important concerns, tax plans often result in reporting lower book income. Thus, it is not surprising that tax planning affects financial accounting choices and that financial accounting considerations affect tax plans. In fact, tax accountants have contributed to the multidisciplinary field of taxes by demonstrating the extent to which financial reporting considerations affect tax choices. Likewise, tax researchers have contributed to accounting research by demonstrating that tax considerations often affect accounting choices.
The remainder of this section reviews several research settings used to calibrate book and tax tradeoffs in an attempt to answer the question, "What is known about the relation between financial accounting considerations and tax considerations?" In short, the literature suggests that financial accounting management and tax management are not independent and neither consideration consistently dominates the other in decision-making. A key implication from these studies is that financial accounting considerations may be an important omitted correlated variable in tax studies, and tax considerations may be an important omitted correlated variable in financial accounting studies.

Finally, as detailed in Section 5, empirical tax researchers face a number of methodological issues. We briefly overview some of them here to set up our discussion of individual papers. Empirical tax researchers examining corporate behavior generally require an estimate of a firm's marginal tax rate. Unless otherwise explicitly noted, the studies discussed below proxy firms' tax status with a dummy variable equal to 1 if the firm has a net operating loss carryforward (NOL) and 0 otherwise. We argue in Section 5 that this variable measures a firm's marginal tax rate with error, and thus caution must be exercised in interpreting results based on the NOL dummy variable. We discuss an alternative approach based on repeated simulations of firms' future taxable income (Shevlin, 1990; Graham, 1996a, b); a rough sketch of that approach is given below. A second problem facing some studies is that the outcomes of choices are examined with the choice being treated as exogenous—a self-selection problem. Even in studies that model the choice, the researchers must often make assumptions about what the firm's economic balance sheet, income statement, and taxable income would be if the alternative choice were made. These are commonly known as as-if calculations. Such calculations often unavoidably bias the findings in favor of the alternative hypothesis. The papers discussed below mostly recognize this problem and conduct sensitivity analyses to determine the extent of the bias. We highlight such papers in our discussion.
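To make the simulation approach concrete, here is a minimal sketch in the spirit of Shevlin (1990) and Graham (1996a); it is our stylized reading, not their actual procedure. Future taxable income is assumed to follow a random walk with drift, an NOL carryforward shields positive income, and the marginal tax rate is the expected present value of the additional tax triggered by one extra dollar of income today. The function names (`pv_tax`, `simulated_mtr`) and all parameter values are hypothetical; real implementations also handle carrybacks and carryforward expiration.

```python
import random

def pv_tax(path, rate=0.34, r=0.10):
    """Present value of taxes paid along one future taxable-income path,
    with a simple NOL carryforward (carryback and expiration omitted)."""
    nol, pv = 0.0, 0.0
    for t, y in enumerate(path):
        taxable = y - min(nol, max(y, 0.0))              # losses shield income
        nol = max(nol - max(y, 0.0), 0.0) + max(-y, 0.0)
        pv += rate * max(taxable, 0.0) / (1.0 + r) ** t
    return pv

def simulated_mtr(income, mu, sigma, horizon=15, n_sims=500):
    """MTR = expected PV of the extra tax from one more dollar of income today."""
    extra = 0.0
    for _ in range(n_sims):
        path = [income]
        for _ in range(horizon):                         # random walk with drift
            path.append(path[-1] + random.gauss(mu, sigma))
        extra += pv_tax([path[0] + 1.0] + path[1:]) - pv_tax(path)
    return extra / n_sims
```

The point of the exercise is that a firm with current losses or large NOLs obtains a simulated rate well below the statutory rate, whereas the NOL dummy collapses all of this variation into a single binary indicator.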
2.1.1. Inventory accounting

Research addressing the tension between tax and book incentives can be traced to numerous studies evaluating the LIFO conformity requirement in the 1970s, before SW. This literature grew out of interest in two questions. First, do stock prices change in an efficient or unsophisticated manner at releases of information about LIFO adoptions? If managers are sophisticated, then a LIFO adopter would experience declines in both reported earnings and the present value of corporate taxes (Ball, 1972; Sunder, 1973, 1975; Ricks, 1986). In such a setting, it was argued that a functional fixation view of investors would predict that LIFO adopters would experience negative stock price changes when the lower LIFO-based earnings were announced. In contrast, an efficient market view of investors predicts they would disregard the lower book earnings and value the LIFO tax benefits, so that LIFO adopters would experience positive stock price changes at adoption announcements. On balance, the empirical results of investigations into LIFO adoption announcements during the 1970s and 1980s were inconclusive and puzzling. Researchers found little evidence of a positive mean excess stock return at the initial disclosure of actual or potential LIFO adoptions.

Lanen and Thompson (1988) model the stock price reaction to a voluntary accounting change, such as LIFO adoption. They show that if investors rationally anticipate voluntary accounting changes, then the sign of the association between the stock price reaction at the announcement date and firm-specific characteristics (measuring the expected cash flow effects of the change) is difficult to predict. Later, Kang (1993) argued that LIFO adoptions should be accompanied by negative stock returns because the decision to adopt LIFO is rational if a firm on FIFO sees unexpectedly higher future inflation for its input prices. In other words, the adoption of LIFO signals optimizing in the face of unexpectedly bad news about long-term input price inflation. Hand (1993) tested Kang's theory using firms that announced they were considering adopting LIFO and then resolved that uncertainty by either adopting LIFO or remaining on FIFO. Hand's results, after including controls for Lanen and Thompson's arguments on the prior probability of adoption, were broadly consistent with the major predictions of the Kang model. In particular, firms that resolved the LIFO adoption uncertainty by adopting LIFO (remaining on FIFO) experienced reliably negative (positive) mean excess returns at the resolution-of-uncertainty date. Thus, Kang and Hand appear to have provided a reasonable explanation for the earlier empirical findings of a negative stock price reaction to the announcement of LIFO adoption.

The second question in the LIFO studies concerns whether managers choose the inventory accounting method that minimizes the present value of the firm's current and expected future tax payments or avoid LIFO because its use lowers reported earnings in the short term. Many studies find that taxes are a primary consideration in inventory costing (e.g., Dopuch and Pincus, 1988; Cushing and LeClerc, 1992). After reviewing the literature, Jenkins and Pincus (1998) conclude that tax savings dominate earnings management concerns when firms adopt LIFO. Several papers have examined the role of tax and non-tax factors in inventory management by LIFO firms. Firms can increase reported earnings by liquidating LIFO layers, but at a tax cost because taxable income also increases. Firms can decrease reported earnings and taxes by additional year-end purchases at higher prices.\(^3\) Dhaliwal, Frankel, and Trezevant (DFT, 1994) find that both tax and financial reporting factors affect LIFO liquidations. Liquidations are larger and more common for low-tax firms (measured as the existence of an NOL carryforward) and more likely to occur when earnings changes are negative and firms have greater leverage. Also measuring taxes by the existence of an NOL carryforward, Frankel and Trezevant (1994) find that taxes affect LIFO firms' year-end purchasing behavior, but financial reporting considerations do not. Hunt, Moyer, and Shevlin (HMS, 1996) do not find that taxes affect inventory decisions of LIFO firms. Recognizing inventory management as one of many options LIFO firms can employ to manage taxes and earnings, they incorporate LIFO inventory management together with current and non-current accruals in a cost minimization model (based on a model developed by Beatty et al., 1995b). Although HMS's financial reporting results concur with DFT, their tax results do not. Sensitivity tests attribute the difference to HMS's use of a system of equations and a more sophisticated measure of a firm's tax status.
Using a system of equations allows for simultaneity among the three choice variables HMS study but requires the researcher to make assumptions about which exogenous variables to include in each model. It is necessary to have at least one different exogenous variable in each regression model to identify (estimate) the system. These choices are sometimes somewhat arbitrary, and results from simultaneous equations can be sensitive to which variables are included in and excluded from each regression. HMS use the simulation approach to estimate each firm's marginal tax rate. We believe that while the simulation approach is not without its own problems, it provides a superior measure of firms' marginal tax rates. Thus, when results differ between studies using an NOL dummy variable and the simulation estimate, we attach more credence to the simulation-based results.

\(^3\)Bowen and Pfeffer (1989) discuss the year-end decision facing LIFO firms and illustrate the issue with Farmer Brothers, a company that roasts and packages coffee for the restaurant industry, which faced large input price increases in 1976–1977 after a severe freeze in the coffee growing regions of Brazil.

The final LIFO choice facing a firm is LIFO abandonment. Johnson and Dhaliwal (1988) examine the tradeoff between taxes and financial statement effects in the LIFO abandonment decision. Consistent with abandonment increasing taxes and lowering financial reporting costs, they find abandonment firms are more leveraged, closer to violating working capital covenants, and have larger NOL carryforwards. Additional tests regress the disclosed tax costs of abandonment ($7.8 million on average) on financial statement variables. These tests are particularly intriguing because they use actual firm estimates of the tax costs to test the tradeoffs between tax and other factors. After analyzing 22 firms closely, Sweeney (1994) finds that despite financial reporting benefits, firms will not switch to FIFO if the change generates "significant" tax costs.\(^4\) Overall, we conclude that taxes are an important determinant (have a first-order effect) in firms' decisions to adopt LIFO, in LIFO liquidations, and in LIFO abandonment. However, we believe that the evidence in HMS suggests that taxes are far less important than financial reporting considerations for firms wishing to manage earnings through LIFO inventory management.

### 2.1.2. Compensation

Compensation is another business cost affected by both tax and financial reporting incentives. Several papers have examined the role of taxes in the choice between firms issuing incentive (or qualified, ISOs) and non-qualified employee stock options (NQOs). At the aggregate level, the relative use of ISOs and NQOs has changed over time, consistent with changes in the tax laws favoring one or the other option type. For example, Hite and Long (1982) report that firms switched from ISOs to NQOs after the top individual tax rates were lowered in the Tax Act of 1969 (making ISOs less tax favored relative to NQOs). Similarly, the Tax Reform Act of 1986 reduced the attractiveness of ISOs considerably because not only was the top individual rate set below the top corporate rate but the capital gains rate was set equal to the tax rate on ordinary income.\(^5\)

\(^4\)Another line of research has examined the value relevance of the LIFO reserve. Initial research predicted a positive association between firm value and the LIFO reserve because the LIFO reserve is the difference between the current cost and old costs of inventory (FIFO cost – LIFO cost) and is thus expected to represent an asset. Guenther and Trombley (1994) and Jennings et al. (1996) document a negative association between the LIFO reserve and firm market value of equity. These authors develop a price elasticity argument to explain the negative association: if the LIFO reserve provides information to investors about a firm's future input price increases, the negative association is consistent with investors expecting that firms cannot, on average, raise output prices by a similar amount. Both papers provide evidence consistent with this explanation. Dhaliwal et al. (2000) provide an alternative explanation. They add the LIFO reserve to FIFO inventory and tax-adjust the LIFO reserve, arguing that the tax-adjusted LIFO reserve is an estimate of the deferred tax liability arising from future LIFO liquidations. Thus, they predict and observe (both before and after controlling for the firm's ability to pass on input price increases) a negative association between the tax-adjusted LIFO reserve and the market value of equity.

\(^5\)TRA 86 lowered the maximum statutory corporate tax rate from 46 to 34 percent and the maximum statutory personal tax rate from 50 to 28 percent while increasing the maximum statutory personal long-term capital gains tax rate from 20 to 28 percent.
Balsam et al. (1997) document that NQO usage increased relative to ISOs after 1986. However, papers that examine firm-specific usage of ISOs and NQOs as a function of corporate and individual tax rates fail to find results consistent with their tax predictions. For example, Madeo and Omer (1994) report that firms that switched from ISOs to NQOs following the 1969 Tax Act tended to be firms with low tax rates, when from a purely tax viewpoint, the high-tax firms should have been the ones switching. Austin et al. (1998) report that the firm's marginal tax rate (estimated using the simulation approach) appears to have played little role in the choice of option type during the 1981–1984 period, with the choice appearing to be driven by minimizing the executives' tax burden. Thus the extant evidence is somewhat mixed on the role of taxes in the choice between ISOs and NQOs and, if we were forced to make a judgment on the current state of knowledge, we would interpret the evidence as consistent with taxes not being an important determinant of an individual firm's choice between ISOs and NQOs.

Using the framework's "all parties" approach, Matsunaga, Shevlin, and Shores (MSS, 1992) examine a setting where employers trade off the tax benefits of a corporate deduction for compensation against the financial reporting costs of lower earnings arising from transaction costs. Specifically, they investigate the response to TRA 86's tax rate changes that reduced the tax advantages of ISOs relative to NQOs. One possible response for employees holding ISOs is to exercise them and sell the stock within 12 months of exercise, resulting in a disqualifying disposition. A disqualifying disposition automatically converts ISOs into NQOs. Disqualification generates ordinary taxable income for the individual and transaction costs for both employee and employer, with the transaction costs to the employer reducing book earnings.\(^6\) These negatives must be balanced against the tax savings of a compensation deduction for the firm. MSS analyze the tradeoffs by holding employees indifferent and computing the net tax benefits for employers (using the simulation approach to estimate each firm's marginal tax rate).

\(^6\)A firm's transaction costs arise from compensating employees for their transaction costs associated with disqualifying the ISO and for the employee's incremental taxes triggered by the disqualification.
Consistent with firms coordinating taxes and financial reporting, MSS find that disqualification is more common among firms facing fewer financial reporting constraints. They estimate that firms without disqualifications avoided roughly a 2.3 percent reduction in reported earnings, on average, at a mean cost in forgone net tax benefits of $0.6 million. Because of data limitations, MSS are required to make assumptions (discussed explicitly in their paper) to estimate both the tax benefits and financial reporting consequences of a disqualifying disposition, which unavoidably bias them toward finding in favor of the alternative hypothesis. For firms that did not disqualify, as-if numbers are required. This creates a problem common to many studies, both tax and non-tax (for example, pre-managed earnings in earnings management studies), and the results and inferences must be interpreted cautiously in light of the assumptions underlying the as-if calculations.

Pensions are another form of compensation that has attracted book–tax analysis. Pension contributions reduce taxable income while pension expense reduces book income. Francis and Reiter (1987) test whether the level of pension funding varies with tax incentives to overfund and financial reporting incentives to underfund. They find funding levels are increasing in marginal tax rates and decreasing in financial reporting costs (measured by leverage). Examining similar issues, Thomas (1988) focuses on taxes while controlling for financial reporting effects via sample selection and inclusion of profitability and leverage variables. His results are generally consistent with Francis and Reiter (1987). Thomas (1989) and Clinch and Shibano (1996) explore whether taxes motivate termination of overfunded defined benefit pension plans. Thomas concludes that firms seem more motivated by cash needs than by taxes (measured by an NOL carryforward variable), whereas Clinch and Shibano, using a more sophisticated approach to estimating expected tax benefits, report results consistent with taxes playing an important role in the decision and timing of pension plan terminations.\(^7\) Both studies indicate that financial reporting considerations are a second-order motivation for plan terminations. Mittelstaedt (1989) also ignores financial reporting issues in examining pension asset reversions (either through reduced contributions or plan terminations). The results of the papers that omit financial reporting considerations must be interpreted with caution because of concerns about correlated omitted variables. Nevertheless, the evidence is consistent with taxes being an important determinant of firms' funding policy and also of pension termination decisions when more sophisticated techniques are used to estimate the tax effects of the termination. Finally, deferred compensation would appear to be a particularly useful setting for investigating both tradeoffs and agency costs.
However, to date, no empirical work has applied the SW framework to document the tax and non-tax factors that determine deferred compensation. We look forward to such an analysis.

\(^7\)Researchers should seriously consider the approach taken by Clinch and Shibano when examining decisions with large dollar effects that might invalidate an approach based on a marginal tax rate estimate.

2.1.3. Intertemporal income shifting

Passed in 1986, TRA 86 phased in tax rate reductions through 1988 (e.g., for calendar year companies the maximum regular tax rate fell from 46 percent in 1986 to 40 percent in 1987 and 34 percent in 1988). This precommitment to lower rates enabled tax managers to plan, knowing that rates were falling. This provided a powerful setting to assess firms' willingness to obtain tax savings by deferring earnings. Scholes et al. (1992) report that larger companies are more active income shifters. They acknowledge that financial reporting considerations likely impede shifting income into future periods, but their research design does not include any measures designed to capture these incentives. Guenther (1994a) extends Scholes et al. (1992) to include proxies for financial reporting costs. He confirms that large firms shift more but adds that firms with higher leverage ratios (a proxy for financial reporting costs) are less willing to report lower income. Thus, shifting income to save taxes appears coordinated with managing debt covenant violation costs. Lopez et al. (1998) extend Guenther (1994a) to report that income shifting is concentrated among firms that exhibited prior tax aggressiveness (as measured using the tax subsidy measure from Wilkie and Limberg, 1993). The rate reductions in TRA 86 also provided an incentive to maximize NOL carrybacks to years before rates fell (e.g., 1986). Maydew (1997) tests for NOL-induced income shifting using leverage to measure financial reporting costs. He estimates that firms with NOL incentives to carry back losses shifted $2.6 billion less operating income because of costs associated with increased leverage. This compares with total shifting of $27.2 billion of income, showing that the restraints from financial reporting considerations were substantial. While its rate reduction was providing incentives to shift income from 1986 to later years, TRA 86's alternative minimum tax provided incentives to shift book income back to 1986 or forward, beyond 1989. From 1987 to 1989, book income was a component of taxable income for firms subject to the AMT. This direct link between book and tax provided an unusually powerful setting for calibrating the exchange rate between book earnings and taxable income. Several studies estimate the AMT impact on reported earnings. Gramlich (1991) finds the AMT exerted downward pressure on firm earnings. He adds that firms shifted book earnings from 1987 to 1986 to avoid taxes. Using actual tax returns to identify AMT firms, Boynton et al. (1992) confirm income shifting. However, their study omits controls for financial reporting incentives. Dhaliwal and Wang (1992), Manzon (1992), and Wang (1994) concur with Gramlich (1991) that firms shifted income from 1987 to 1986. The AMT book income adjustment studies illustrate several common problems facing archival empiricists. The studies use a treatment/control group approach which, besides any possible self-selection problems discussed in Section 5.2 below, requires the researcher to identify firms likely and unlikely to be affected.
Some studies use ex-ante identification while others use ex-post identification (firms report they paid the AMT). Both approaches are problematic, as discussed by Choi et al. (1998). Further, the treatment firms are compared with control firms that have alternative income shifting incentives because of the contemporaneous change in corporate statutory tax rates. Finally, as recognized by Manzon (1992), the treatment firms vary in their incentives because the effective AMT tax rate varies cross-sectionally. Thus, Choi et al. (1998) contend on methodological grounds that little evidence supports AMT-driven income shifting, and we concur with their contention. Finally, to our knowledge, no study jointly evaluates the rate reduction incentives to realize income after 1986 and the AMT incentives to realize income in 1986.

2.1.4. Capital structure, divestitures, and asset sales

Engel, Erickson, and Maydew (EEM, 1999) analyze an unusual security, trust preferred stock (TRUPS), from the perspective of tax and financial reporting tradeoffs. GAAP does not treat TRUPS as debt even though their dividends are deductible. Thus, firms that retire outstanding debt with the proceeds from TRUPS strengthen the appearance of their balance sheet.\(^8\) EEM find that for the 44 issuers that used TRUPS to retire debt, the debt/asset ratio declined on average by 12.8 percent. EEM estimate upper and lower bounds on the costs to the firm of reducing the debt/asset ratio. The lower bound is the average actual issuance cost of the TRUPS across issuers, estimated at $10 million. The upper bound is estimated using the 15 TRUPS issuers that retired debt rather than their outstanding traditional preferred stock. By not retiring the traditional preferred stock, these issuers chose to forgo tax benefits of $43 million, on average. Thus, firms were willing to pay between $10 and $43 million to improve their balance sheet (i.e., reduce their debt/assets ratio by 12.8 percent). We find EEM's quantification of non-tax costs useful and encourage other researchers to attempt such estimations. By estimating the lower and upper bounds of what firms are willing to pay for favorable balance sheet treatment, EEM provide a model for estimating elusive non-tax costs. They demonstrate how taxes can provide a metric for the less quantifiable components in the efficient design of organizations. However, EEM did not model either the issuance choice or the choice of how the proceeds were used (these choices were taken as exogenous), and so their results could suffer from self-selection bias (a correlated omitted variables bias), discussed in more detail in Section 5. Nevertheless, we look forward to more papers that adopt their quantitative approach.

\(^8\)The income statement is largely unaffected because TRUPS dividends are included among operating expenses, similar to interest expense.

Maydew, Schipper, and Vincent (MSV, 1999) investigate book–tax tradeoffs by examining tax-disadvantaged divestitures, i.e., taxable sales that could have been avoided with a tax-free spin-off. They conclude that financial reporting incentives and cash constraints lead firms to forego a tax-free spin-off and opt for taxable asset sales. Similar to MSS (1992), in modeling the choice of divestiture, MSV must make assumptions about the effect on the firm if the alternative choice were made in order to perform as-if calculations. MSV provide a good discussion of the issues (pp.
130–132) and recognize that this problem leads to inference problems about which variables are driving the choice. In a related study, Alford and Berger (1998) find that spin-offs are more likely when the taxes associated with a sale are large; however, financial reporting considerations mitigate the importance of taxes in the divestiture decision. Finally, Bartov (1993) finds both earnings (smoothing and debt covenants) and tax incentives influence the timing of asset sales. Klassen (1997) adds that manager-owned firms are more likely to realize losses. He concludes that management ownership reduces financial reporting costs, enabling the firm to place a higher priority on tax management.

2.1.5. Regulated industries

In recent years, the most active setting for evaluating book–tax tradeoffs has been banks and insurers. Regulated industries are particularly useful settings for book–tax comparisons because their mandated disclosures are more extensive than those of other firms, and their production functions are relatively simple. Scholes, Wilson, and Wolfson (SWW, 1990) developed the model for research in this area when they analyzed bank investment portfolio management in regressions that pitted tax considerations against earnings considerations and another non-tax factor, regulatory capital. In the SWW setting, a bank can reduce taxable income by selling a security at a loss.\(^9\) Unfortunately, a realized loss for tax purposes also reduces net income and regulatory capital. Conversely, selling an appreciated security relaxes book and regulatory pressures, but increases taxes.

Collins, Shackelford, and Wahlen (CSW, 1995b) and Beatty, Chamberlain, and Magliolo (BCM, 1995b) extend SWW to recognize that portfolio management is only one means of managing taxes, earnings, and regulatory capital. CSW note that a fully specified model would capture heterogeneity across banks; non-stationarity in tax, earnings, and regulatory pressures; endogeneity among bank choices; and autocorrelation within a choice (i.e., exercising a response option now affects its future usefulness). Unfortunately, capturing all these dimensions in a single estimation is impossible. Thus, the researcher must choose among the dimensions. CSW relax SWW's assumption that banks are homogeneous. They estimate bank-specific regressions, capturing bank-specific targets for each objective, rather than cross-sectional pooled means. They examine seven choice variables: security gains and losses, loan loss provisions, loan charge-offs, and the issuance of capital notes, common stock, preferred stock, and dividends. BCM relax SWW's assumption of independence among bank decisions. They develop and solve a cost minimization model that leads to a system of equations that they subsequently estimate, subjecting themselves to the same critique of the simultaneous equations approach as HMS (1996). BCM examine loan loss provisions, loan charge-offs, pension settlement transactions, issuances of new securities, and gains and losses from sales of both securities and physical assets.

\(^9\)GAAP has changed since SWW. Financial Accounting Standard 115 now requires mark-to-market accounting for these types of securities. If they are classified as trading (available for sale) securities, then any unrealized gains and losses are included in income (equity). An interesting research question is whether this change in accounting method affects banks' willingness to realize losses to save taxes.
The different approaches employed by SWW, CSW, and BCM provide triangulation. All three studies find evidence that financial reporting and regulatory considerations affect bank decisions. SWW alone find that taxes are an important consideration.\(^{10}\) CSW's and BCM's failures to detect substantial tax effects motivated at least one additional study. Collins, Geisler, and Shackelford (CGS, 1997a) speculate that because all banks face the same US tax rates, banking studies suffer from insufficient power to detect tax effects, and so they repeat the banking analysis in a setting with more cross-firm tax variation, the life insurance industry. As in banking, conditional on taxable income, all stock life insurers face constant marginal tax rates. However, conditional on taxable income, mutual life insurers face varying marginal tax rates because of an unusual equity tax imposed on mutuals. In this more powerful setting, CGS report that taxes (as well as financial reporting costs and regulatory considerations) affect investment portfolio management.

Beatty and Harris (1999), examining banks, and Mikhail (1999), examining life insurers, extend this literature to investigate whether the relative importance of taxes, earnings, and regulation differs for public and private companies. Both studies report that taxes influence the decisions of private firms more than the decisions of public firms. Since private and public firms face the same tax system, these findings imply that private firms find financial accounting considerations less important and, consequently, find optimal tax strategies less costly. Mikhail (1999) notes that public and private firms differ for at least two reasons: (i) public firms' compensation schemes are designed to mitigate agency costs and (ii) public firms are concerned about stock market interpretations of the reduced earnings associated with tax planning. To differentiate between these two explanations, Mikhail examines mutual life insurers. Mutuals have diffuse ownership and concurrent agency costs similar to public firms. However, unlike public firms, mutuals do not face stock market pressure. Mikhail finds that mutual insurers do not manage taxes. Because mutuals' failure to manage taxes resembles public firms' actions, Mikhail concludes that public firms' incentive compensation contracts, rather than stock market pressures, account for their difference from private companies. The veracity of Mikhail's conclusion depends critically on the assumption that mutual firms face the same set of agency problems as public firms. Furthermore, while Mikhail uses a simultaneous equations approach to examine the multiple choices available to insurers to manage earnings and taxes, he does not model the initial organizational choice and thus faces self-selection bias of unknown severity. Nevertheless, this paper is a good first attempt at probing deeper into the differences between private and public firms' tax aggressiveness. We look forward to more research that attempts to differentiate between these competing explanations for observed differences between private and public firms.

\(^{10}\)SWW and BCM use the simple proxy of the existence of an NOL carryforward (and/or a tax credit carryforward) to signal low-tax status. CSW use the bank's level of municipal bond holdings to assess its appetite for tax minimization, a proxy based on a SWW finding.

Finally, the SWW structure has been used to compare taxes and regulatory capital when there are no earnings implications.
Adiel (1996) reports that regulatory capital considerations dominate tax concerns in the decision by property-casualty insurers to reinsure. Petroni and Shackelford (1995) find that both tax and regulatory concerns affect the organizational structure through which property-casualty insurers expand operations across states. Overall, except for the early study of banks by SWW (1990), the evidence from studies of public firms in regulated industries suggests that regulatory capital and financial reporting concerns dominate taxes (although assessment of cross-sectional variation in tax status has been generally limited to the existence of an NOL carryforward). Further, private firms appear to be more aggressive tax planners (either because they do not face capital market pressures or because they face fewer agency problems).

2.1.6. Other settings

Keating and Zimmerman (2000) examine accounting for depreciable assets. In this setting, book–tax tradeoffs are not expected because book depreciation is established based on the accountant's judgment of the useful life of the assets, and since 1981 tax depreciation has been set by statute. They report that the book life of depreciable assets varies with statutory lives for tax depreciation purposes. They interpret these results as evidence that the optimal holding period for depreciable assets varies with tax deductibility. In other words, taxes affect book depreciation, even though financial reporting does not affect tax depreciation. This result complements Keating and Zimmerman (1999). Examining years before tax depreciation was determined by statute, that study finds evidence consistent with the determination of depreciation for financial accounting being an important factor in justifying tax depreciation deductions to Internal Revenue Service (IRS) auditors. In other words, financial accounting used to affect tax depreciation, but no longer does. Cloyd et al. (1996) also examine the influence of tax reporting on financial reporting choices. They hypothesize and report evidence (collected by survey) consistent with the idea that management's choice to conform the financial accounting choice to the tax choice (even though the financial accounting choice reduces reported book income) is positively associated with the expected tax savings. They also find that public firms are less likely to conform than private firms, consistent with other studies discussed in this review finding that public firms exhibit less aggressive tax behavior because they face higher non-tax costs arising from capital market pressure or agency costs. Guenther et al. (1997) provide another example of tax policy affecting financial reports. TRA 86 mandates that firms use the accrual basis. Before TRA 86, firms could use either the cash basis (except for inventory) or the accrual basis to calculate taxable income. Examining 66 cash method firms, Guenther et al. (1997) find that before the mandated change, cash-basis corporate taxpayers exhibited little tradeoff in their tax planning and financial reporting. However, after the mandated change, the former cash-basis firms deferred income for financial statement purposes. That is, book–tax conformity led the firms to change their accrual behavior. By deferring income, they reduced their taxable income and saved taxes, albeit at the cost of lower reported earnings. Finally, Mills (1998) tests whether the level of book income affects IRS audits.
Using confidential tax return data from the Coordinated Examination Program from 1982 to 1992, she finds that proposed IRS tax adjustments are increasing in the amount that book income exceeds taxable income and that public firms are less aggressive in tax planning, which she attributes to their facing higher financial reporting costs. In our opinion, the most important implication from her results is that firms cannot costlessly reduce taxable income even if book income is not affected. Together the above studies suggest that tax rules influence firms' financial reporting choices and that firms are concerned with book–tax differences and thus conform book numbers to tax numbers when necessary to save taxes. This might seem to conflict with prior evidence that firms leave tax benefits on the table if the action to save taxes will reduce reported profits (or have other financial reporting consequences). The results, however, are not inconsistent. The studies in this section generally do not explicitly examine cross-sectional variation in firms' financial reporting costs.

2.2. Agency costs

Evaluations of tax and non-tax factors extend beyond the financial reporting and regulatory considerations discussed in the previous section. SW (1992, Chapter 7) assert that agency costs are another non-tax cost responsible for tax minimization not equating to effective tax planning. This section reviews papers that evaluate the effects of adverse selection and moral hazard on tax planning. Research addressing taxes and agency costs is much less well developed than the book–tax coordination literature. Because incentive problems pervade business, agency problems likely impact tax decisions. Unfortunately, the literature has largely been unable to progress beyond identifying possible areas where incentives affect tax management. We attribute the paucity of papers in this area to difficulties in quantifying incentive costs. We look forward to both theoretical and empirical advances in this area.

2.2.1. Compensation

Johnson, Nabar, and Porter (JNP, 1999) investigate firm responses to 1993 legislation that disallows a deduction for non-performance-related compensation in excess of $1 million. Affected firms can preserve full deductibility for their five most heavily compensated employees by either qualifying the compensation as performance-based or deferring the compensation until a deduction can be taken. Analyzing 297 publicly held US firms with non-qualified compensation in excess of $1 million in 1992, they find that 54 percent preserve deductibility, most (78 percent) through plan qualification. JNP find that preservation increases in tax benefits (i.e., the product of the excess compensation and the firm's marginal tax rate) and in stakeholder concern about the firm's compensation plan, and decreases in contracting costs. Examining the same legislation, Balsam and Ryan (1996) confirm that agency costs affected the preservation decision. While we commend these researchers for attempting to develop proxies for agency problems, we also note that the proxies are open to argument and interpretation and thus the conclusions based on these proxies are subject to alternative interpretations. Harris and Livingstone (1999) examine a different aspect of this legislation. They develop the hypothesis that the $1 million limit reduced the implicit contracting costs faced by firms paying less than this limit.
They find that firms below the limit actually increased cash compensation above what they predicted and those further from the limit increased their compensation the most, although this inference relies critically on the model used to predict expected compensation.

2.2.2. Tax shelters

Another setting where agency costs have been identified is tax shelters. Although shelters encompass various tax plans, historically they were distinguished by the deductibility of an investment at a rate that exceeds its economic depreciation (SW, 1992, p. 393). Shelters create tax savings by repackaging ownership rights among investors. Unfortunately, repackaging can lead to inefficient organizations fraught with incentive problems. For instance, before TRA 86 severely restricted their usefulness for tax avoidance, limited partnerships (LPs) enabled tax shelters to transfer deductions to limited partners facing high tax rates. Despite their tax effectiveness, these partnerships faced large transaction costs (e.g., sales commissions and investment banking fees commonly absorbed 10 percent of investments) and numerous incentive problems. For example, Wolfson (1985) details several agency costs, including resource allocation among related parties, proving up (i.e., the general partner extracting private information using limited partners' investments), payout allocation and measurement difficulties, overcompletion, and undercompletion. Space constraints prevent a detailed discussion of each incentive problem. We choose to illustrate one problem, undercompletion. Analyzing the oil and gas tax shelter industry in the 1970s, Wolfson shows that the tax-minimizing drilling structure encouraged undercompletion. From a tax perspective, the value of an LP interest is maximized if the limited partners fund the initial drilling operations, which can be immediately deducted. If the drilling succeeds, the general partner completes the extraction process, which cannot be immediately deducted. If not, the well is abandoned. The undercompletion problem arises because the general partner alone knows the status of the drilled hole. Because the general partner is responsible for all completion costs, but only receives part of the revenues, he will abandon the well unless it is profitable from his perspective, not the partnership's perspective. For example, if the general partner finds $2 of oil after drilling and knows that it will cost $1 to complete the well, he will only complete the well if he receives more than half the revenues. Undercompletion occurs because the tax system encourages the limited partner to invest before the general partner. Wolfson provides empirical evidence that undercompletion is mitigated by drilling wells that have a low probability of being marginal (i.e., an exploratory well, where either no oil or excessive oil is expected) and by the general partner's reputational effects. Wolfson's empirical evidence is consistent with both tax shelter organizers and the investing public impounding these incentive problems in market prices. Similarly, Shevlin (1987) examines the decision to conduct research and development (R&D) in-house versus through a limited partnership. R&D LPs enable firms with low marginal tax rates (e.g., start-ups) to transfer (or sell) tax benefits to high marginal tax rate individuals (limited partners).
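The arithmetic of such a transfer is straightforward. The minimal Python sketch below uses hypothetical figures (a tax-exhausted start-up that cannot use the deduction and a 50 percent limited partner rate, roughly the pre-TRA 86 top individual rate) to show how repackaging a deduction creates a surplus for the parties to split; it illustrates the mechanism common to Wolfson's and Shevlin's settings, not either paper's actual tests.

```python
# Illustrative sketch (hypothetical rates): the value of shifting a deduction
# from a low marginal tax rate firm to high marginal tax rate limited partners.

def deduction_value(deduction: float, marginal_rate: float) -> float:
    """Tax saved by a party that can use `deduction` at `marginal_rate`."""
    return deduction * marginal_rate

rd_spending = 1_000_000        # deductible outlay funded by the limited partners
t_low = 0.00                   # start-up with NOLs: deduction is worthless today
t_high = 0.50                  # hypothetical pre-TRA 86 top individual rate

saving_in_house = deduction_value(rd_spending, t_low)
saving_via_lp = deduction_value(rd_spending, t_high)
surplus = saving_via_lp - saving_in_house

print(f"tax saving kept in-house:   ${saving_in_house:,.0f}")
print(f"tax saving via LP transfer: ${saving_via_lp:,.0f}")
# The surplus is split between the parties, net of the syndication and
# incentive costs described above.
print(f"surplus the parties split:  ${surplus:,.0f}")
```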
LP investors can utilize the immediate deductions from R&D to reduce taxes by more than lower marginal tax rate entities could and subsequently realize appreciation at tax-favored long-term capital gains rates. In addition, in-house R&D uses traditional debt and equity funding while an R&D LP provides an opportunity for "off-balance-sheet" financing. Thus, unlike most studies, where taxes are competing with financial reporting, Shevlin examines a setting where tax and book incentives are aligned. Relying on the empirical agency literature to identify measures of financial reporting costs, Shevlin concludes that both taxes and off-balance-sheet financing motivate R&D LPs. One limitation of this study is that in conducting his tests, Shevlin must compute as-if numbers, which bias his tests toward finding results consistent with the off-balance-sheet motivation. Shevlin also acknowledges information costs between the firm and the LP investors, similar to those identified in Wolfson (1985); however, he does not incorporate them in his tests due to lack of data. Beatty et al. (1995a) extend Shevlin (1987) to jointly evaluate tax considerations, financial reporting considerations, and information costs. They report that firms facing high information and transaction costs will sacrifice both tax and financial reporting benefits.

The extant tax shelter studies examine syndicated individual structures that were severely limited by TRA 86. Recently a new form of corporate tax shelter has arisen. More complex than the earlier shelters, these corporate tax shelters typically involve flow-through entities, financial instruments, non-US entities, and aggressive interpretation of the tax law (Gergen and Schmitz, 1997; Bankman, 1998). Understanding corporate tax shelters and the extent to which they contribute to the recent decline in corporate tax receipts as a percentage of corporate profits are questions of policy and scholarly interest. Accountants are ideally positioned to unravel these complex transactions. Unfortunately, to our knowledge, data limitations have thwarted empirical attempts to analyze corporate tax shelters. We encourage accountants to think creatively about the data restrictions and initiate research in this area.

Although not examining tax shelters, Guenther (1992) presents further evidence on the costs of the partnership form. Guenther compares the tax and non-tax costs associated with C corporations and master limited partnerships (MLPs). While corporations face "double" taxation (once at the firm level and again at the shareholder level via either dividends or capital gains), partnerships are flow-through entities facing taxation only at the partner level. On non-tax dimensions, shareholders and limited partners (who do not materially participate in operations) enjoy limited liability; general partners do not. Before 1981 and after 1986, corporate taxation was levied on any publicly traded entity. During the interim, MLP limited partners enjoyed entity tax exemption, limited liability, and access to public capital markets. In 1981, changes in statutory tax rates favored MLPs relative to corporations and led many to predict a surge in MLP activity. Guenther identifies non-tax costs that may have mitigated the shift from corporate form to MLPs. Besides higher record keeping costs, partnerships face higher costs arising from indemnification insurance for managers and potentially sub-optimal investment and operating decisions.
These increased costs are predicted to result in lower rates of return for businesses organized as partnerships rather than corporations. Guenther finds that MLPs report lower accounting-based measures of performance than corporations, particularly earnings before interest and taxes. Shelley et al. (1998) discuss the tax and non-tax costs and benefits of restructuring a business as a publicly traded partnership (PTP) and examine the association between the capital market reaction to the announcement of the restructuring and proxies for the tax and non-tax factors. Among the purported benefits of a PTP formation are improved management (similar to that hypothesized with spin-offs and equity carve-outs), reduced information asymmetries about growth opportunities, and flow-through taxation. Offsetting these advantages are the problems mentioned in Wolfson (1985) and Guenther (1992). Shelley et al. (1998) find that announcement period returns are associated with proxies for these factors in the predicted direction. Finally, Omer et al. (2000) examine conversions from C corporations to S corporations in the natural resource industry following TRA 86. They discuss tax and non-tax costs and benefits similar to those above.

This completes our review of the literature investigating the factors that impact tax management. The papers in this area consistently document that firms do not minimize taxes; rather, their decisions reflect the integration of multiple factors, including taxes. The interaction of financial reporting costs and taxes is well documented; however, further documentation is needed concerning the coordination of taxes and agency costs. Less is known in both areas about the relative importance of taxes. In particular, we look forward to more studies that estimate and quantify exchange rates between taxes and other considerations.

3. Taxes and asset prices

Price formation is a fundamental issue in accounting, finance, and economics. One possible price determinant is taxes. Investigations of this possibility are the second major area of current tax research in accounting. The research asks the same questions as in the tradeoff literature (Do taxes matter? If not, why not? If so, how much?). Unlike the tradeoff literature, which focuses on the factors that offset tax minimization, the pricing literature concentrates on the first and third questions, which can be reexpressed as: To what extent do prices impound taxes? In addition, unlike the tradeoff literature, where the "all taxes" theme, i.e., the importance of considering tax-motivated price adjustments, is largely ignored, here it is the dominant theme. The multilateral contracting approach ("all parties") is also important, but consideration of non-tax factors ("all costs") is of secondary importance. Unlike the prior section, where accountants dominate the research (particularly the coordination of taxes and financial reporting), the impact of taxes on asset prices has long been an active area of research in finance and economics. Thus, it is particularly difficult to distinguish the contributions of accounting tax researchers from those of other tax researchers. Although we continue to focus primarily on the work conducted by accounting faculty and/or published in accounting journals (as stated in the introduction), we recognize the substantial contributions of our colleagues in finance and economics that go largely unmentioned in this review.
Our review begins with tax research in accounting that investigates the extent to which taxes affect the structure and prices of mergers and acquisitions. Next, we review early seminal papers in finance that attempted to determine the impact of taxes on the optimal capital structure, followed by recent accounting research in that area. The section concludes with a discussion of the early implicit tax studies that were motivated by SW and the current interest in whether shareholder taxes affect stock prices. The common theme in these studies is the extent to which prices impound taxes.

3.1. Mergers and acquisitions

Mergers and acquisitions have been studied extensively in finance. This section reviews several tax studies by accountants that examine whether merger and acquisition structure and prices reflect corporate and investor taxes. First, however, we briefly review the relevant tax code in this complex area. Acquisitions can be tax-free (no tax to the target firm shareholders) or taxable (gains taxable and losses deductible to the target firm shareholders). In either case, the acquirer can purchase the assets or the stock of the target. In a tax-free acquisition (asset or stock), the tax basis of the target's assets, its tax attributes (NOL and tax credit carryforwards), and its earnings and profits (E&P), the source of dividends, are unaffected. A taxable asset acquisition adjusts tax bases to fair market values ("step-up") and potentially creates goodwill.\(^{11}\) If the target is liquidated following sale of its assets, E&P are eliminated. In a taxable stock acquisition, the tax basis of the target's assets carries over to the acquiring firm and thus no goodwill is booked for tax purposes. However, elections permit a taxable stock acquisition to be treated for tax purposes as if it were a taxable asset acquisition. The elections are IRC Section 338 if the target is a freestanding corporation and IRC Section 338(h)(10) if the target is a subsidiary. Unlike 338(h)(10) elections, a 338 election extinguishes target E&P.

\(^{11}\)As an aside, goodwill reported on the balance sheet (prepared in accordance with GAAP) is often not deductible. In financial accounting, amortizable goodwill arises if the purchase method of accounting is used regardless of whether the acquirer buys the assets or the stock of the target. Deductible goodwill for tax purposes is more restrictive. Goodwill is only deductible if the acquirer buys assets, buys stock in a free-standing company and elects to step up the tax basis of the assets (IRC Section 338), or buys a subsidiary and the acquirer and target jointly elect asset step-up (IRC Section 338(h)(10)).

Several merger and acquisition papers address whether and to what extent the tax law governing mergers and acquisitions affects transactions. These studies address issues such as whether the benefits associated with the step-up of tax basis and deductible goodwill offset the costs of depreciation recapture and capital gains taxation of target shareholders.\(^{12}\) Although tax issues in an acquisition vary by the type of target (freestanding C corporation, subsidiary of a C corporation, S corporation or partnership), most extant research examines only acquisitions of freestanding C corporations. Examining pre-TRA 86 acquisitions, Hayn (1989) finds that target and bidder announcement period abnormal returns are associated with the tax attributes of the target firm.
Specifically, in tax-free acquisitions, potential tax benefits arising from net operating loss carryforwards and available tax credits positively affect the returns of bidder and target firms. In taxable acquisitions, target shareholder capital gains taxes and potential tax benefits of a step-up in basis affect the returns of both bidder and target firms involved. Examining the structure of acquisitions over the period 1985–1988, Erickson (1998) applies an "all parties" approach, analyzing the role of tax and non-tax factors of the acquiring firm, the target firm, and target firm shareholders. He finds that acquirers with high marginal tax rates and an ability to issue debt are more likely to undertake a debt-financed taxable transaction. He finds little evidence that potential target shareholder capital gains tax liabilities or target firm tax and non-tax characteristics influence the acquisition structure. In further analysis, he finds that the magnitude of the potential target shareholder capital gains is small and that the corporate taxes immediately triggered by the step-up often exceed the present value of the tax benefits of stepped-up target assets. Further illustrating "all parties", Henning et al. (2000) find that the acquirer bears target firm or shareholder taxes through higher purchase prices. This paper is not without controversy. Among other concerns, Erickson (2000) questions the validity of the sample partitions, detailing the difficulty of partitioning acquisitions as either stock or asset acquisitions and using publicly available disclosures to assess tax basis step-up.

\(^{12}\)A common misunderstanding is that there are always net tax benefits associated with a transaction that results in basis step-up. In fact, the immediate taxes associated with depreciation recapture and shareholder capital gains often exceed the present value of future tax benefits from increased depreciation deductions, eliminating any tax incentive for structuring a transaction to garner step-up.

Henning et al. (2000) also report that contingent payments to the seller (which allow deferral of taxes on some of the gain) are more likely when the seller faces a high marginal tax rate. Three papers investigate 1993 legislation that permits a deduction for goodwill amortization. Henning and Shaw (2000) find that tax deductibility resulted in an increase in the purchase price of goodwill-generating acquisitions, consistent with acquirers sharing the tax benefits with the selling firm, and an increase in the percentage of purchase price allocated to tax-deductible goodwill. Weaver (2000) addresses whether the frequency of taxable transactions giving rise to goodwill (e.g., tax basis step-up transactions) increased after the tax law change. She finds that the tax law change increased the probability of the taxable transaction being structured to obtain a step-up in basis and thus a deduction for goodwill. She adds that a step-up is more likely the higher the acquiring firm's marginal tax rate. In contrast, Ayers et al. (2000b) report that transactions with tax basis step-up remain a constant 17 percent of the taxable transactions despite the tax change. However, a significant increase in the purchase price premium following passage of the tax law change is detected for acquisitions qualifying for goodwill amortization deductions. They estimate that higher acquisition prices enable targets to obtain 75 percent of the tax benefits arising from goodwill deductibility.
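To convey the magnitudes involved, the tax benefit of deductible goodwill can be approximated as the present value of the tax savings from straight-line amortization over 15 years, the schedule set by the 1993 legislation. The Python sketch below uses hypothetical inputs (goodwill amount, tax rate, discount rate) and is illustrative only; it is not a reconstruction of any study's estimation.

```python
# Back-of-the-envelope sketch: PV of the tax benefit from goodwill
# deductibility (15-year straight-line amortization); inputs hypothetical.

def pv_goodwill_tax_benefit(goodwill: float, tax_rate: float,
                            discount_rate: float, life_years: int = 15) -> float:
    """Present value of tax savings from amortizing `goodwill` straight-line."""
    annual_saving = (goodwill / life_years) * tax_rate
    return sum(annual_saving / (1 + discount_rate) ** t
               for t in range(1, life_years + 1))

benefit = pv_goodwill_tax_benefit(goodwill=100e6, tax_rate=0.35,
                                  discount_rate=0.10)
print(f"PV of tax benefit: ${benefit / 1e6:.1f} million")

# Ayers et al. (2000b) estimate targets capture roughly 75 percent of this
# benefit through higher acquisition prices.
print(f"implied target share: ${0.75 * benefit / 1e6:.1f} million")
```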
To determine the role of taxes in the 338(h)(10) election, Erickson and Wang (2000) examine 200 subsidiaries that were divested in a taxable sale of stock from 1994 to 1998. As expected, they find that the election is more likely if an asset sale does not trigger too much additional tax relative to a stock sale. Consistent with the acquirer reimbursing the seller for the additional taxes, the acquisition price also is higher when the election is made. In other words, the structure of the transaction affects its price. They also report that the abnormal returns of the divesting parent are positively associated with the election's tax benefits.

Although it is improbable that acquisitions and divestitures are initiated for tax reasons, these studies indicate the transaction structure and price are influenced by acquiring firms' tax status, target firms' tax status (although the evidence is somewhat mixed), and the tax attributes of the target firm. The evidence in these papers is consistent with merger and acquisition prices incorporating complex tax conditions, which are typically ignored in valuation techniques, such as revenue, earnings and/or book multiples. In addition, though it is unclear whether goodwill tax deductibility increased the incidence of goodwill-generating transactions, the law change appears to have increased acquisition prices in these transactions. Illustrating the "all taxes" and "all parties" themes in the framework approach, these studies document that the tax treatment affects asset (transaction) prices and influences transaction structure (asset versus stock acquisition). However, less is known about the extent to which non-tax costs (e.g., concerns over target liabilities, transaction costs such as transferring asset titles) interact with tax considerations. Finally, contrary to popular belief, it is unusual for firms to trade off tax and accounting (book) considerations when structuring mergers and acquisitions. The tax treatment and the book treatment of acquisitions differ. In particular, tax factors rarely preclude the popular pooling-of-interests method of accounting, which enables firms to avoid goodwill amortization for book purposes. Most acquisitions of freestanding C corporations involve stock purchases (and consequently carryover of inside tax basis) and can be structured to qualify for pooling treatment. The accounting treatment for asset acquisitions and acquisition of a subsidiary's stock is independent of the tax treatment. Both result in purchase accounting. The tax and financial accounting issues in this area are complex and often misunderstood. We look forward to research that brings these two areas together.

3.2. Capital structure

3.2.1. Early finance studies

Perhaps the most developed area of tax research in finance involves capital structure choices. Capital structure has not been as dominant in tax research in accounting, but several studies have been conducted. This section reviews the development of some influential capital structure studies in finance and recent capital structure work in accounting. Among the most influential papers in business research are Modigliani and Miller (MM, 1958, 1963), two finance papers addressing capital structure. MM (1958) show that with no taxes (and perfect and complete capital markets), the value of the firm is independent of its capital structure (and its dividend policy).
MM (1963) add that if interest is deductible and dividends are not deductible, then the optimal capital structure is the corner solution of all debt. Since MM (1963) is clearly not descriptive of observed capital structures, finance researchers searched for non-tax costs of debt that prevented the corner solution. Some conclude that firms balance taxes against the possible bankruptcy costs associated with risky debt. Others assert that agency costs between debt and equity holders are increasing in debt (the static tradeoff theory involving taxes and agency costs). Myers and Majluf (1984) and Scott (1977), among others, report that leverage varies with the type of assets held by the firm. Ceteris paribus, firms with tangible assets can borrow more than firms with intangible assets because the property rights associated with tangible assets enable greater securitization (the debt securability hypothesis). Myers' (1977) conclusion is the same, but he claims that growth prospects pose greater agency costs to lenders.

Miller (1977) adds personal taxes to the leverage controversy (an "all parties" approach). Like MM, Miller assumes no market frictions or restrictions. In perhaps the most influential tax study of all, he predicts that investors with low marginal tax rates (e.g., tax-exempt investors) will hold tax-disadvantaged bonds, earning taxable interest that is currently taxed. Investors with high marginal tax rates will hold stocks that do not pay dividends and derive their equity returns through favorably taxed capital gains that are deferred until sale of the stock. In Miller's equilibrium, the gain from corporate leverage, \(G_L = \left[1 - \frac{(1-\tau_c)(1-\tau_{PS})}{1-\tau_{PD}}\right]D\), where \(\tau_c\), \(\tau_{PS}\), and \(\tau_{PD}\) denote the corporate tax rate and the personal tax rates on stock and debt income, nets to zero for the marginal investor. Miller's insight underlies the "all taxes" theme in the SW framework and is fundamental to the current tax research in accounting linking equity prices and taxes. Miller (1977) implies dividend clienteles, i.e., high-dividend stocks will be held by low marginal tax rate investors and vice versa. Many finance studies test for the existence of dividend clienteles (e.g., Miller and Scholes, 1978). One example in accounting is Dhaliwal et al. (1999). Consistent with Miller (1977), they document an increase in institutional ownership (a coarse measure for tax-exempt status) of firms that initiate dividend payments.

DeAngelo and Masulis (1980) relax Miller's assumption that all corporations face the top corporate tax rate. Recognizing that interest expense is only one type of tax shield, they predict that leverage is lower in firms with alternative tax shields, such as depreciation (the debt substitution hypothesis). One test of their theory by accountants is Dhaliwal, Trezevant, and Wang (DTW, 1992), who test MacKie-Mason's (1990) claim that the substitution effect increases as firms near the loss of tax shields (the tax exhaustion hypothesis). After controlling for debt securability (which predicts a positive relation between leverage and fixed assets), DTW document a negative association between non-debt and debt tax shields, consistent with tax exhaustion. Examining 1981 legislation that caused changes in tax shields, Trezevant (1992) also finds support for the debt substitution and tax exhaustion hypotheses. Together these studies document a link between taxes and capital structure that had been somewhat elusive.

3.2.2. Recent studies

Several recent studies suggest that taxes affect capital structure. Scholes et al. (1990) report that among banks, those with net operating loss carryforwards are more likely to raise capital through equity with non-deductible dividends than through capital notes with deductible interest.
Collins and Shackelford (1992) link the choice between debt and preferred stock to foreign tax credit limitations. Graham (1996a), among others, adds that a firm's marginal tax rate is positively associated with its issuance of new debt. Engel et al. (1999) conclude that the tax benefits of leverage are large (approximately 80 percent of the estimated upper bound) in their study of trust preferred stock (TRUPS). Their setting is particularly powerful because they compare securities that are nearly identical except for taxes, enabling them to exclude potentially confounding effects, such as risk, signaling, and agency costs. The weakness is that their results may not generalize to other securities. Myers (2000) provides further evidence that taxes matter. Introducing pension plans as a capital structure option, she reports that corporate tax benefits are increasing in the percentage of pension assets allocated to bonds, potentially resolving a longstanding puzzle in finance. Her findings confirm Black (1980) and Tepper (1981), who predict firms integrate their defined benefit plans to reduce overall taxes through arbitrage (e.g., a company issues debt, invests in stock, and deducts interest while its pension invests in bonds with tax-exempt returns).

3.3. Implicit taxes

3.3.1. Early studies

Besides motivating the recent capital structure studies in accounting, the seminal finance papers and SW are the foundation for the current tax research in accounting known as implicit tax or tax capitalization studies. This section reviews that literature, first looking at early studies, then transitioning to ongoing research in the area that investigates whether stock prices reflect potential dividend and capital gains taxes. Miller (1977) implies that after-tax rates of return are identical across all assets, conditional on risk and assuming no market frictions or government restrictions. SW (1992, Chapter 5) define implicit taxes as the reduced rates of return for tax-favored investments required for this equality to hold. The classic example of implicit taxes is the lower pretax returns on municipal bonds. Because the interest earned on municipal bonds is tax-exempt, taxable investors are willing to pay more for municipal bonds than for equally risky alternative investments, such as corporate bonds. Investors in the highest tax brackets will value the exclusion on municipal bond interest the most and thus a clientele of high-tax investors will hold municipal bonds.

An initial implicit tax study in accounting is Shackelford's (1991) examination of the interest rates of leveraged employee stock ownership plans (ESOPs). The Tax Reform Act of 1984 excludes half the interest income on ESOP loans from income taxation. Because the benefits of interest exclusion are uncertain, most ESOP loans provide a form of tax indemnification. Specifically, two interest rates are provided in an ESOP loan agreement. The first assumes that the exclusion is available to the lender. The second assumes that the loan's interest income is fully taxable. Because ESOP loans provide two interest rates for the same loan from the same lender to the same borrower over the same period, differing only in their tax treatment, they provide an ideal setting to test whether prices fully impound taxes. The implicit tax concept would predict that the loan's two interest rates would provide the same after-tax return to the lender. Shackelford finds after-tax rates are similar, but not equal.
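The full-capitalization prediction is easy to formalize: with half the interest income excluded, the lender's after-tax return on the ESOP rate is \(r_{ESOP}(1 - t/2)\), which should equal the after-tax return \(r(1 - t)\) on the fully taxable rate. The Python sketch below, with hypothetical rates and a hypothetical lender tax rate, computes the rate implied by full pass-through and a simple pass-through fraction; the construction is ours for illustration, not Shackelford's estimator.

```python
# Sketch of the implicit tax prediction in the ESOP setting: with half of
# interest income tax-exempt to the lender, full capitalization implies the
# two contractual rates yield the same after-tax return. All inputs are
# hypothetical.

t = 0.34                       # lender's marginal tax rate (illustrative)
r_taxable = 0.10               # rate assuming interest is fully taxable

# Full pass-through: r_esop * (1 - t/2) = r_taxable * (1 - t)
r_esop_full = r_taxable * (1 - t) / (1 - t / 2)
print(f"rate with full pass-through: {r_esop_full:.4f}")   # ~0.0795

# Partial pass-through: fraction of the maximum spread actually given up.
r_esop_observed = 0.085        # hypothetical contractual ESOP rate
pass_through = (r_taxable - r_esop_observed) / (r_taxable - r_esop_full)
print(f"fraction of tax benefit passed to borrower: {pass_through:.0%}")
```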
Approximately 75 percent of the tax benefits from the exclusion are passed through to the borrower as lower interest rates. This finding is analogous to findings in Ayers et al. (2000b) and Henning and Shaw (2000) that target shareholders extract part of the benefits of goodwill deductibility from acquirers through higher acquisition prices. Differentially taxed investments attract different clienteles. Consistent with this prediction, Shackelford finds that high tax rate lenders dominate the ESOP loan market. He concludes that ESOP interest rates reflect the tax treatment accorded their lenders and that the lenders are the financial capital suppliers who can most benefit from the favorable tax treatment.

Other early implicit tax studies include Stickney et al. (1983), Berger (1993), and Guenther (1994b). Stickney et al. (1983) estimate that in 1981 General Electric Credit Corporation paid roughly 70 cents on the dollar for tax benefits related to safe harbor leasing. Berger adds that the tax benefits accorded research and development affect its asset price. Guenther detects a small response in the interest rates of Treasury securities to changes in the taxation of individuals. More recently, Erickson and Wang (1999) document that by redeeming Seagram's shares at a below-market rate in 1995, DuPont retained 40 percent of Seagram's tax savings. On the other hand, Engel et al. (1999) show that taxes had little effect on asset prices in their TRUPS study.

3.3.2. Marginal investor

Shackelford's (1991) results imply that the marginal provider of ESOP capital has a marginal tax rate that approaches the statutory tax rate. As a result, ESOP interest rates clear at a level that reflects the relatively high tax rate of the marginal investor. In other words, the research question could be restated as, "Who is the marginal investor?" If Shackelford had found no difference between ESOP interest rates, he could not have rejected the implicit tax concept. Instead the evidence would have been consistent with (a) the marginal provider of ESOP capital being a tax-exempt organization, facing a zero marginal tax, or (b) market frictions or government restrictions impeding price adjustments. Because neither frictions nor restrictions seem likely in Shackelford's (1991) setting, his paper can be recast as an estimation of the marginal tax rate of the marginal investor. In this light, the differences in ESOP interest rates can be interpreted as providing evidence that the marginal lender is in a high-tax bracket.

Erickson and Maydew (EM, 1998) elaborate on the role of the marginal investor. They show that the existence and magnitude of implicit taxes are largely empirical questions. Building on SW, they stress that the theoretical prediction that prices adjust to reflect taxes is of limited predictive value because of diverse differentially taxed assets and investors, market imperfections, and government restrictions on tax arbitrage (SW, 1992, Chapter 6). With two differentially taxed assets (taxable corporate bonds and tax-free municipal bonds) and two differentially taxed investors (taxable individuals and tax-exempts), it is impossible to predict the implicit tax rate that equates the two asset values. If the marginal investor is an individual, the yield on a tax-free municipal bond should be reduced by the individual's tax rate. If the marginal investor is a tax-exempt, the pretax yield on a corporate bond should equal the pretax yield on a tax-free municipal bond.
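A minimal sketch of this identification problem, using hypothetical yields: under the convention that prices equate after-tax returns for whoever the marginal investor is, observed yields back out that investor's implied tax rate.

```python
# Backing out the marginal investor's tax rate from observed yields.
# Yields below are hypothetical.

r_corporate = 0.08   # pretax yield on a taxable corporate bond
r_municipal = 0.06   # yield on an equally risky tax-exempt municipal bond

# If prices equate after-tax returns for the marginal investor:
#   r_municipal = r_corporate * (1 - t_marginal)
t_marginal = 1 - r_municipal / r_corporate
print(f"implied marginal investor tax rate: {t_marginal:.0%}")  # 25%

# Polar cases: a tax-exempt marginal investor implies t = 0 (equal pretax
# yields); a top-bracket individual implies t near the statutory maximum.
```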
EM report that a 1995 proposed decrease in the dividends-received deduction (thus increasing the dividend taxes paid by corporate investors) resulted in a price decline for preferred stock, but not common stock. They conclude that the marginal investor for preferred stock is a corporation that enjoys the dividends-received deduction while the marginal investor for common stock is not a corporation affected by the dividends-received deduction. Alternatively stated, the implicit taxes associated with the corporate dividends-received deduction are greater for preferred stock than for common stock.

3.4. Equity prices and investor taxes

3.4.1. Motivation

One of the most active areas in tax research currently is whether investor taxes (dividend and capital gains taxes) affect share prices, or, alternatively stated, whether the marginal equity investor pays taxes. Tax research in accounting contributes to this literature with Dhaliwal and Trezevant (1993), Landsman and Shackelford (1995), Erickson (1998), Erickson and Maydew (1998), Guenther and Willenborg (1999), Harris and Kemsley (1999), Ayers et al. (2000a), Blouin et al. (2000a–e), Collins et al. (2000), Collins and Kemsley (2000), Gentry et al. (2000), Guenther (2000), Harris et al. (2001), Lang and Shackelford (2000), Seida and Wempe (2000), and Lang et al. (2001), among others. The implicit null hypothesis throughout this literature is that the marginal investor does not pay taxes.\(^{13}\) This null is no straw man. Miller and Scholes (1978), among others, conclude that investor taxes do not affect stock prices. Unlike the presumption that municipal bonds impound investors' tax exemption, theoretical and empirical studies in accounting, finance, and economics implicitly assume that prices are set by pensions, not-for-profit organizations, or other shareholders that do not pay investor taxes. For example, in accounting, leading theoretical work (e.g., Ohlson, 1995) implicitly assumes that the marginal equity investor is a tax-exempt organization. Similarly, by generally ignoring investor-level taxes in their valuations, popular MBA courses, such as financial statement analysis, and current valuation texts (e.g., Palepu et al., 1996) implicitly assume that the marginal equity investor is a tax-exempt organization.

\(^{13}\)Few papers in this field explicitly state the null hypothesis of tax irrelevance. We believe that this reliance on an implicit null hypothesis has contributed to some misunderstanding about the purpose of these studies.

Tax capitalization studies challenge the widely held assumption that investor taxes are value-irrelevant. If the marginal investor pays taxes (e.g., individuals, mutual funds held in personal accounts, corporations, trusts, estates, etc.), then an important determinant of stock prices may be missing from many analytical and empirical models. Moreover, any measurement error associated with ignoring investor-level taxes, particularly capital gains taxes, may have increased dramatically in recent years because of the long-running US bull market.
The implications of overturning investor-tax irrelevance are non-trivial, including:

- share prices impound the expected after-tax returns to investors;
- share prices vary with changes in the expected tax treatment of dividends and capital gains;
- share prices vary with changes in the tax status of their investors;
- information affects share prices differently depending on investors' tax attributes (e.g., whether investors are taxable or tax-exempt and whether they have appreciated or depreciated positions in the stock).

In the following sections, we review several recent studies and ongoing research that estimate relations between equity values and investor-level taxes, attempting to assess the importance of shareholder taxes. Readers should approach these studies skeptically. Many are unpublished, and few have undergone close scrutiny and numerous replications. However, we believe that these studies potentially have important implications for accounting, finance, and economics.

3.4.2. Dividend tax capitalization

Early tax studies in economics and finance focus on whether dividend taxes affect share prices. The evidence is mixed and remains controversial. These studies come under various names, including Tobin q studies, new view vs. traditional view of dividends, and ex-dividend date studies. The dividend tax capitalization studies produced at least three schools of thought (see Harris and Kemsley (1999) for additional discussion). The traditional view of dividends assumes the non-tax benefits of dividends (e.g., reduced agency costs) offset the tax cost of dividends. As noted above, the irrelevance view (e.g., Miller and Scholes, 1978) assumes the marginal equity investor is a tax-exempt entity. The "new view" of dividends is less intuitive. It claims that share prices fully capitalize the future taxes associated with dividends. This implies that growth is funded first with internal resources. Thus, firms are not expected to pay dividends and issue new shares simultaneously. Furthermore, the cost of capital does not depend on the "permanent" component of the dividend tax rate. Mature firms can pay dividends anytime at no incremental tax cost because shareholders have already bid down share prices to reflect the inevitable dividend taxes, assuming constant tax rates and inevitable distribution of all earnings and profits as dividends.\(^{14}\)

\(^{14}\)In practice, dividends are not inevitable. That is, E&P, the source of dividend taxation, do not have to be distributed to shareholders in a form that triggers dividends. Besides dividends, E&P are reduced by share repurchases, liquidations following taxable asset acquisitions, and 338 elections following stock acquisitions (Lang and Shackelford, 2000). The evidence is conflicting about the extent to which acquisitions eliminate E&P through non-dividend means. In their analysis of 83 going-private management buyouts from 1982 to 1986, Schipper and Smith (1991) report that 11 buyouts were share redemptions and 28 other acquirers announced that they would step up the tax basis of the acquired company. Conversely, Erickson finds little evidence of E&P elimination at acquisition among publicly traded companies. Analyzing 340 acquisitions from 1985 to 1988 involving publicly traded acquirers and targets, Erickson reports only seven acquirers disclosed their intention to step up the tax basis of the target's assets. On the other hand, Bagwell and Shoven (1989) report that 1987 redemptions totaled $53 billion, up 824 percent from 1977.
They show that from 1985 to 1987 total repurchases were 60 percent of total dividends. Auerbach and Hassett (2000) counter that redemptions have become less important. They report that by the mid-1990s, only 5–10 percent of companies repurchased shares. Regarding taxable asset acquisitions followed by corporate liquidations, Henning et al. (2000) identify 49 acquisitions of the assets of an entire company from 1990 to 1994. Presumably targets were subsequently liquidated, eliminating E&P. They also report that 338 elections followed 154 stock acquisitions during the same period.

A series of recent accounting studies (Harris and Kemsley, 1999; Harris et al., 2001; Collins and Kemsley, 2000) investigate dividend tax capitalization using Ohlson's (1995) residual-income valuation model. They concur that the marginal equity investor is an individual. All three papers infer that equity is discounted for dividend taxes because the coefficient on retained earnings (their proxy for future dividends) in their valuation model is less than the coefficient on other book value. Collins and Kemsley (CK, 2000) extend the original model to incorporate the capital gains taxes arising from secondary trading. Examining 68,283 observations from 1975 to 1997, they regress firm-level stock prices on stockholders' equity, earnings, and dividends and interactions with dividend and capital gains tax rates. Consistent with investors treating dividends as an inevitable distribution of E&P, the magnitudes of CK's estimated coefficients imply that share prices fully capitalize dividend taxes at the top individual statutory federal tax rate. They also estimate that prices further capitalize approximately 60 percent of capital gains taxes at the top individual long-term capital gains tax rate. Both dividend and capital gains results imply that individuals are the marginal equity investors. CK conclude that capital gains tax capitalization in stock prices is in addition to, rather than in lieu of, dividend tax capitalization. This produces the counterintuitive conclusion that paying dividends provides an incremental tax benefit for shareholders, rather than the commonly assumed incremental tax penalty associated with dividends. Dividend payments benefit shareholders because they reduce the value of the firm and thus avoid "redundant" capital gains taxes when investors sell their stock.

CK's findings are controversial for at least three reasons. First, most companies do not pay dividends, and among those that do pay, dividend yields are low.\(^{15}\) Thus, for CK's findings to hold, investors must price companies, such as Microsoft, which has never paid any dividends, as if they will eventually distribute all of their earnings and profits as taxable dividends to investors facing the current top personal rate. Given the changes in dividend tax rates over the last few decades, if dividends are not anticipated until far in the future, it seems unlikely that market prices would be sensitive to current dividend tax rates. Second, CK's results conflict with dividend tax clienteles. The findings of Dhaliwal et al. (1999) imply that if non-dividend-paying companies (e.g., Microsoft) begin paying dividends, individuals will sell their shares to investors who can receive dividends at a lower cost, such as tax-exempt entities. The new shareholders would be taxed on the dividends at less than the highest personal income tax rate. The selling shareholders would pay tax on the appreciation in the company at the capital gains tax rates.
In other words, dividend tax clienteles imply that Microsoft's stock might impound capital gains taxes at the highest individual rate, but not dividend taxes. We look forward to a study that reconciles dividend tax capitalization and dividend tax clienteles. Third, there is little variation in the maximum statutory capital gains tax rates. While the highest dividend tax rates ranged from 31 to 70 percent from 1975 to 1997, capital gains tax rates were 28 percent in all years, except 1975–1978, when they were 35 percent, and 1982–1986, when they were 20 percent. Thus, the capital gains tax results are driven solely by differences between the study's first four years and the five years following the 1981 rate reduction and rely critically on controls for other sources of variation between these two periods. Furthermore, in years of legislative change in the rates (i.e., 1978, 1981, 1986, and 1997), investors presumably impounded the capital gains tax rate before it became effective. Finally, to the extent prices are set by the expected capital gains tax rate, rather than the current statutory rate, it becomes difficult to identify the relevant rate in several non-change years that were filled with speculation about possible changes in the capital gains tax rate. For these reasons, we find these results implausible and will require additional tests employing various methodologies before we accept the implications of these studies. Nevertheless, we readily acknowledge that this current set of dividend tax capitalization papers in accounting has renewed interest in dividend tax capitalization and, at a minimum, caused scholars to revisit the longstanding dividend puzzle. If the results hold under further scrutiny, it will be no overstatement to term these studies revolutionary.

\(^{15}\)Fama and French (1999) report only 20.7 percent of US firms paid cash dividends in 1998. Lang and Shackelford (2000) report that the dividend-paying firms among the nation's largest 2000 companies had a mean dividend yield of 2.8 percent in 1997.

3.4.3. Capital gains tax capitalization studies of equilibrium prices

Compared with dividend tax capitalization, capital gains tax capitalization is a relatively unexplored area. Capital gains taxation differs from dividend taxation in at least three critical areas. First, shareholders, not firms, generally determine when capital gains taxes are generated. In fact, capital gains taxes can be avoided completely by holding shares until death. Second, unlike dividends, which are paid quarterly by some firms, every stock price movement creates capital gains and losses for all taxable shareholders. Third, the applicable capital gains tax rate has historically been less than the dividend tax rate for property held for an extended period. For example, under current law, individuals who hold investments for more than one year face a maximum 20 percent capital gains tax rate on gains. Gains on investments held for shorter periods (and dividends) are taxed at the ordinary tax rate, which caps at 39.6 percent. Empirical papers in this area generally exploit changes in tax policy or economic conditions to increase the power of the tests to detect a relation between stock prices and capital gains taxes. In brief, these studies generally find equity values impound the capital gains taxes that shareholders anticipate paying when they sell, a finding that conflicts with prior conclusions that shareholder taxes are irrelevant for share prices (e.g., Miller and Scholes, 1978, 1982).
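The stakes of the holding period are easily quantified with the statutory rates just cited (20 percent long-term, 39.6 percent short-term); the purchase and sale prices in the sketch below are hypothetical.

```python
# After-tax proceeds under short-term versus long-term treatment,
# using the statutory rates cited in the text; prices hypothetical.

basis, price = 100.0, 150.0
gain = price - basis

after_tax_short = price - 0.396 * gain   # held one year or less
after_tax_long = price - 0.20 * gain     # held more than one year

print(f"after-tax proceeds, short-term: {after_tax_short:.2f}")  # 130.20
print(f"after-tax proceeds, long-term:  {after_tax_long:.2f}")   # 140.00
# The 9.80 difference per 50.00 of gain is the wedge that the equilibrium
# pricing and price pressure studies discussed below exploit.
```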
Our discussion of the extant capital gains tax capitalization literature is split into two sections. This section reviews equilibrium pricing studies, which test whether stock prices impound the tax-favored long-term capital gains tax rate (currently 20 percent). The next section discusses price pressure studies, which test whether trading volume and share prices respond temporarily to shifts in the capital gains tax.

Equilibrium pricing studies address issues similar to the dividend tax capitalization papers reviewed above. The intuition is as follows: When an individual considers incorporation, he values the business venture after all taxes, including any investor-level taxes. If he is the sole shareholder, he ignores dividend taxes because he will not pay himself tax-disfavored dividends. Instead, he anticipates capital gains taxes at liquidation or sale of the business. If shareholders of widely held, public companies value the returns on their stock investments similarly, i.e., after investor-level capital gains taxes, then equity prices should reflect capital gains tax capitalization, rather than dividend tax capitalization. For those companies that pay dividends, the calculus is slightly altered, but current dividend payout ratios are so small, as discussed above, that investors likely anticipate the bulk of their returns will be subject to investor-level capital gains taxes, not dividend taxes. Because most firms pay no dividends and few firms pay large dividends, capital gains tax capitalization arguably dominates dividend tax capitalization if the marginal equity investor pays taxes. Examples of "equilibrium pricing" studies include Erickson (1998), Guenther and Willenborg (1999), and Lang and Shackelford (2000), among others. CK jointly evaluate dividend and long-term capital gains tax capitalization.

Despite its intuitive appeal, researchers have been slow to consider the possibility of long-term capital gains tax capitalization for at least two reasons. First, as discussed above, the evidence from the dividend studies is mixed. Since dividends are more predictable than sales, it seems reasonable that documenting capital gains tax capitalization may be a difficult task. Second, researchers have generally assumed (perhaps erroneously) that the necessary conditions do not hold for long-term capital gains to affect stock prices. The conditions include the marginal investor being a compliant taxable individual who intends to sell in a taxable transaction after holding the stock more than one year, the current long-term holding period (Shackelford, 2000). If his investment horizon is shorter, all gains and losses will be subject to short-term rates and thus the long-term rate will not be capitalized. Because all conditions must hold simultaneously for share prices to vary with the long-term capital gains tax rate, tax scholars historically assumed that long-term capital gains taxes had little effect on equilibrium pricing. Current studies challenge this assumption by designing tests of hypotheses that follow from the conditions holding.

Lang and Shackelford (LS, 2000) model an initial structure for considering how capital gains taxes might affect equilibrium pricing. They show that secondary trading and share repurchases accelerate the recognition of taxable income or losses that otherwise would be deferred until firm liquidation.
They predict that if the necessary conditions hold, then capitalization of the capital gains tax in a firm's share price will be greater to the extent the firm's stock is traded in the secondary market and/or repurchased by the company, two events that trigger capital gains taxes. Thus, it becomes an empirical issue whether market behaviors are consistent with these predictions. Employing a conventional event study methodology, LS report that the raw returns of non-dividend-paying firms were 6.8 percentage points greater than the raw returns of other firms during the May 1997 week when Congress and the White House agreed to reduce the long-term capital gains tax rate.\(^{16}\) They interpret these findings as evidence that investors discriminated among companies based on the probability that shareholder returns would be affected by the new capital gains tax rates.

\(^{16}\)There is some controversy over the permanence of the price shift. LS find no evidence that the price change is temporary. As detailed below, Guenther (2000), however, attributes part of the price shift to temporary price pressure, the subject of discussion in the next section.

Guenther and Willenborg (1999) find that IPO prices increased following implementation of a special 50 percent capital gains tax exclusion for small offerings. Initial public offerings are popular for both equilibrium pricing and price pressure tests (e.g., Reese, 1998; Blouin et al., 2000a) in the capital gains tax capitalization literature because individuals hold disproportionate shares of these companies and the IPO provides a start date for computing long-term capital gains holding periods. Lang et al. (2001) find some evidence of capitalization of the capital gains taxes levied on corporate shareholders. They analyze stock price responses to the 2000 elimination of the German capital gains tax on crossholdings, i.e., German stock held by other German companies. They find that investee stock prices increase upon elimination of the crossholdings tax. However, the price increase is limited to non-strategic holdings (less than 20 percent) by the largest German banks and insurers in manufacturing firms. Moreover, the investing banks and insurers enjoyed even larger price surges than the investee corporations.

These studies provide preliminary evidence consistent with capital gains tax capitalization. At worst, these findings conflict sufficiently with prior assumptions (that share prices do not impound potential capital gains taxes) that they demand further attention. At best, they may be seminal studies, documenting that the many necessary conditions simultaneously hold (at least in certain situations) and providing evidence that the marginal investor is an individual discounting equity values for an anticipated long-term capital gains tax.

3.4.4. Price pressure arising from capital gains taxes

The price pressure studies in the capital gains tax capitalization literature build on the findings in the equilibrium pricing papers, using a structure developed in finance for non-tax price pressure (e.g., Harris and Gurel (1986), Shleifer (1986), and Lynch and Mendenhall (1997), among many others). These studies generally investigate short windows and test whether capital gains tax incentives affect trading volume and, if so, whether the volume surge is large enough to move prices. For example, as noted above, Guenther (2000) examines the same legislative change as LS.
He fails to detect the normal price movements for ex-dividend date firms (price decline before the ex-dividend date, price rebound the following day) during the 1997 long-term capital gains tax rate reduction. He attributes this departure to an unwillingness by individual investors (who had held shares for more than one year) to sell until the lower long-term capital gains tax rate took effect. This seller's strike temporarily boosted prices, implying that some of the LS price response may be temporary. Unfortunately, the generalizability of Guenther's findings is hampered by the study's focus on a small set of ex-dividend date firms.

Landsman and Shackelford (1995) examine a setting where shareholders demand compensation to accelerate long-term capital gains taxes. Examining the confidential records of individual shareholders, they report that when RJR Nabisco shareholders were forced to liquidate their shares in the firm's leveraged buyout, stock prices rose to compensate shareholders for long-term capital gains taxes, which they had intended to defer or avoid fully by holding shares until death. Shareholders facing smaller capital gains taxes generally sold for less than shareholders facing larger capital gains taxes did.

A particularly active area in the price pressure literature tests whether buyers compensate sellers to sell earlier and pay tax-disfavored short-term capital gains taxes (or, conversely, whether sellers forgo compensation on sales of depreciated securities to ensure tax-favored short-term capital losses). Shackelford and Verrecchia (1999) model the potential price pressure, showing that, if individuals purchase stock assuming the long-term capital gains tax rate will apply to their gains, then they will demand compensation through higher prices to sell before long-term qualification (and pay the higher short-term capital gains tax). In other words, a seller's strike will force prices to increase temporarily. Conversely, holders of depreciated property prefer short-term capital loss treatment to long-term capital loss treatment. Therefore, they will flood the market with shares immediately preceding long-term qualification, increasing volume and driving prices down. Empirical papers in this area analyze trading volume around the long-term qualification date and test whether the volume reactions are sufficient to move prices. In other words, the empirical tests assess whether the market is liquid enough to absorb a seller's strike with appreciated property or sell-offs with depreciated property.

Several studies provide empirical support for capital gains tax-motivated price pressure around the qualification date. For example, analyzing several years of data, Reese (1998) reports that trading volume increases and prices fall for appreciated firms when their initial public shareholders qualify for long-term capital gains tax treatment, consistent with a sell-off when lower long-term capital gains tax rates first apply. Also analyzing initial public shareholders first qualifying for long-term capital gains tax rates, Blouin et al. (2000a) examine volume and price responses to the 1998 Congressional committee report that shortened the long-term capital gains holding period. They find that trading volume increased for appreciated shares. Moreover, on the announcement date, volume surged enough that share prices fell and then rebounded the next day, consistent with price pressure created by differences in long- and short-term rates.
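The compensation a taxable seller demands in these episodes can be sketched numerically. The indifference condition below is our simplified rendering of the Shackelford and Verrecchia intuition (it ignores discounting and price risk over the short deferral period); all inputs are hypothetical.

```python
# A holder of appreciated stock just short of long-term qualification sells
# early only at a price that matches the after-tax payoff from waiting.

t_short, t_long = 0.396, 0.20
basis = 100.0
p_at_qualification = 150.0   # price the holder expects at qualification

# Indifference: P - t_short*(P - basis) = P_q - t_long*(P_q - basis)
after_tax_wait = p_at_qualification - t_long * (p_at_qualification - basis)
p_sell_now = (after_tax_wait - t_short * basis) / (1 - t_short)
print(f"reservation price to sell early: {p_sell_now:.2f}")  # ~166.23

premium = p_sell_now - p_at_qualification
print(f"temporary price premium demanded: {premium:.2f}")    # ~16.23
```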
Similarly, Poterba and Weisbenner (2001) revisit the January effect and show that from 1970 to 1978, the prices of equities that had declined during the capital gains holding period (six months at that time) rebounded following year-end. This is consistent with temporary price reversal following a tax-induced, year-end sell-off intended to ensure short-term capital loss treatment. Blouin, Raedy, and Shackelford (BRS, 2000b, c) attempt to determine whether these price pressures can be detected under more general conditions (i.e., when tax considerations are less prominent). They note that most prior capital gains tax studies are conducted under conditions that bias in favor of finding that taxes matter, e.g., changes in tax policy, transactions where taxes are important considerations (e.g., mergers and acquisitions), companies held disproportionately by individuals (e.g., IPOs), and periods when tax planning is prevalent (e.g., year-end). They attempt to determine whether the findings in support of price pressure reflect exceptions to the rule (i.e., only occur under special tax conditions) or whether they illustrate a more general pricing role for capital gains taxes. Another distinction of these papers is that they explicitly state the trading strategy that they presume individual investors use. Specifically, their empirical tests presume investors trade in the fashion described by Constantinides (1984), i.e., investors sell losses before the qualification date and sell gains immediately subsequent to the qualification date, reestablishing the option to realize future capital losses at the short-term rate. BRS (2000c) examine the change in stock returns when the Standard & Poor’s Corporation announces the addition of a firm to its 500 stock index. They link price increases to capital gains taxes, concluding that index funds compensate individual investors holding appreciated stock to entice them to sell before long-term capital gains qualification. This compensation provides temporary price pressure around the index announcement. BRS (2000b) examine an even more improbable setting for capital gains tax effects, price responses to quarterly earnings announcements (probably the most investigated setting in accounting research). They find trading volume temporarily increased when individual investors faced incremental taxes (tax savings) created by selling appreciated (depreciated) shares before they qualify for long-term treatment. Furthermore, they find that the surge in volume is sufficient to cause shares to trade temporarily at higher (lower) prices, consistent with shareholders receiving (forgoing) compensation for unanticipated capital gains (losses). In other words, it appears that around earnings releases, the equity markets are insufficiently liquid to counter the tax-driven trading without moving prices. We find this result particularly surprising and anticipate extensions that will test the robustness of this finding. To summarize, unlike prior studies that focus on price reactions in settings where shareholder taxes are unusually salient, the BRS papers find the imprint of capital gains taxes in more general settings, devoid of any obvious biases toward finding that taxes matter. To find that personal capital gains taxes affect security trading in these settings is surprising and suggests that capital gains tax effects are pervasive and matter more than previously thought.
A weakness of many capitalization studies (Landsman and Shackelford (1995) notwithstanding) is their inability to test directly the impact of shareholder taxes on stock prices. Better data are needed to construct direct tests. For example, BRS (2000c) could be nicely extended with detailed records of selling and buying shareholders (and their tax status) around the announcement that a firm is joining the S&P 500. Instead of inferring from capital markets tests (as they do) that mutual funds are compensating taxable individuals for their capital gains taxes, such data could enable direct tests of questions such as: Are the shareholders who sell to mutual funds when firms join the S&P 500 index taxable individuals holding appreciated stock for less than one year? Unfortunately, these ideal data tend to be confidential and difficult to obtain; however, we look forward to creative research that employs these richer data.

3.4.5. Summary

In summary, an active area in tax research in accounting addresses whether prices impound taxes. These studies trace their lineage to seminal finance papers in capital structure. Besides capital structure, accountants have explored debt securities and mergers and acquisitions. In general, these studies have combined extensive institutional knowledge with sound econometric analysis to contribute to our understanding of the importance of taxes in corporate finance. More recently, a flurry of papers question whether equity prices reflect investor-level taxes, both dividend taxes and capital gains taxes. Exploiting accountants’ comparative advantage of understanding the nuances of the tax law, these papers challenge the assumption of shareholder tax irrelevance. Conducted in a variety of settings, most provide empirical evidence that dividend and/or capital gains taxes affect share prices. Although many studies are unpublished and important questions remain, we infer from this increasingly large body of empirical evidence that at least in some settings, prices are set by taxable individual investors and that investor tax irrelevance (while providing analytical simplification) is less descriptive than previously thought. In short, the contributions and caveats of dividend tax and capital gains tax capitalization studies are similar. Both produce surprising results and have the potential to overturn some longstanding positions (e.g., shareholder tax irrelevance). However, additional research is warranted to assess the robustness of these studies and their implications for share prices.

4. Multijurisdictional research

Another area in which complex tax provisions serve as barriers to entry for many researchers is the taxation of multijurisdictional commerce. Multinational and multistate research has been among the most active areas of tax research in accounting in recent years. However, the motivation for the work in this area differs somewhat from the tradeoff and the capitalization literatures. Tax researchers have repeatedly applied the SW framework to multinational settings for at least four reasons.\(^{17}\) First, from a pragmatic empirical perspective, transjurisdictional settings enhance a tax researcher’s power because multiple jurisdictions introduce additional tax rate and base variation. The fundamental questions (Do taxes matter? If not, why not? If so, how much?), which are difficult to test in a single jurisdiction with constant tax rates and bases, can become tractable in transjurisdictional settings with variable rates and bases.
Second, from a theoretical perspective, the impact of jurisdictional variation in tax burdens on commerce is an inherently interesting scholarly question that relates closely to cost accounting. Markets ignore political borders; taxes vary with them. For example, telecommunications link consumers from different governments. Which government has jurisdiction over which part of a communication? If a New Yorker calls a Texan and the call is routed through satellites and other telecommunications equipment across the country, where are profits earned, i.e., which state has tax jurisdiction over the taxable income arising from the call? How are revenues and expenses allocated across multiple states? Accountants have a comparative advantage in addressing these questions of profit and cost allocation. An example of one particularly important current issue is Internet taxes (Goolsbee, 2000). Third, from a policy perspective, as business has expanded in recent years, policymakers and tax practitioners have demanded documentation and understanding in the previously arcane multinational and multistate areas. Finally, recent construction of international databases that provide computer-readable data from publicly available financial disclosures (e.g., Global Vantage) has significantly lowered the costs of some types of international tax research.

4.1. Multinational

As an initial multinational study, Collins and Shackelford (CS, 1992) exploit another contributing factor to the growing interest in multinational studies, the shift by US multinationals from domestic tax planning to global tax planning following the 1986 reduction in US corporate tax rates and concurrent limitation of foreign tax credits. Applying both “all parties” and “all taxes”, CS show that the tax considerations of a US multinational, its lenders, and its shareholders must be jointly evaluated to determine the least costly source of financial capital. TRA 86 strengthened the provisions that require firms to allocate domestic interest expense against foreign source income. Because foreign source income is the base on which foreign tax credits are computed, interest allocation reduces foreign tax credits. More specifically, foreign tax credits shrink when an American company opts for domestic debt financing. Moreover, because the interest is allocated according to the percentage of the firm’s operations outside the US, the shrinkage increases with the firm’s foreign operations. Thus, the benefits of interest deductions for a US company are diminishing in the firm’s foreign operations. Consequently, after TRA 86, equity financing became less costly, relative to debt financing, for profitable US multinationals with extensive foreign activities. To operationalize the multilateral perspective, CS hold the suppliers of debt and equity capital indifferent after-tax, recognizing that corporations are taxed advantageously on dividend income. They then compute the level of foreign operations that would leave firms indifferent between debt and equity. They show that if a firm has 22 percent of its operations abroad, it is indifferent between debt and equity. If its foreign operations are greater, then equity is a less costly form of capital.

\(^{17}\)Economists, particularly those with access to confidential US tax returns, also have been active in the international tax research area (see Hines (1997) for a review). Accountants, however, dominate the international income shifting field.
Consistent with this prediction, CS find evidence consistent with taxpaying companies with large international operations (e.g., Coca-Cola and Exxon) substituting adjustable-rate preferred stock for commercial paper. CS argue that both products are short-term sources of capital, differing largely on their tax treatment; however, they do not incorporate any other differences (e.g., agency costs) in their tests. The preference for equity by companies facing high marginal tax rates illustrates the counterintuitive conclusions that are common when the multilateral perspective is employed. Newberry (1998) extends Collins and Shackelford to examine incremental financing choices (see Section 5.4 for discussion of the advantage of studying incremental or new issues). She finds that the foreign tax credit (FTC) limitations influenced firms to decrease their domestic debt by substituting both common and preferred stock (the latter predominantly by large firms, consistent with CS, who mostly evaluated large firms). Besides substituting equity for debt, US multinational firms could respond by locating more of their debt in foreign subsidiaries. Smith (1997) and Newberry and Dhaliwal (2000) document such a response. Newberry and Dhaliwal examine international bond issuances and find that the bond issuance is more likely to be placed in a foreign subsidiary than in the US parent if the US firm has a US NOL carryforward and if the FTC limit is binding. They add that bonds are more likely to be placed in foreign subsidiaries located in high-tax countries than in moderate tax rate countries. Newberry and Dhaliwal illustrate the income shifting studies—the largest area of international tax research in accounting. Two initial income shifting studies in accounting were Harris (1993) and Klassen et al. (1993). Both examine publicly available data of a cross section of US multinationals. They attempt to determine whether patterns in reported income and taxes are consistent with incentives to shift taxable income to the US following TRA 86. Their findings are mixed. In his discussion of these papers, Shackelford (1993) recognizes their originality but concludes that more powerful tests are needed to determine whether multinationals shift income to minimize their global tax burdens. More recent income shifting studies reflect at least three advancements in the research technology. First, at least some of the empirical analyses adopt a theoretical structure that enables them to move beyond the descriptive nature of the earlier studies and develop more powerful tests. For example, Harris (1993) and Jacob (1996) recognize that multinationals vary in their ability to shift income. Olhoft (1999) formally incorporates economies of scale to predict that international tax avoidance is increasing in the size of the multinational. Second, several studies access confidential tax return and other proprietary information to construct more powerful tests. For example, Collins et al. (1995a, 1997b) and Collins and Shackelford (1997) examine transactions within global enterprises that would be unobservable without their access to actual US corporate tax returns. Third, alternative tests are being conducted. For example, Collins et al. (1998) use capital markets methodology to test whether reported earnings reflect income shifting. These technological improvements have raised the bar for quality tax research in the international area.
For example, Mills and Newberry (2000) combine confidential IRS data on a select group of the largest foreign-controlled US companies with publicly available financial information on foreign corporations to conduct detailed firm-level tests of income shifting and the country location of debt. They find that the amount of tax paid to the US by a foreign corporation varies with numerous factors, including how the US tax rate compares with other countries’ rates, the financial performance and reliance on intangible assets by the global enterprise, and the financial performance and leverage of its US operations. Despite these advances, Mills and Newberry (2000) remains largely documentation, not unlike the prior studies. We look forward to studies that use the technological advances to move beyond documentation. Besides income shifting, several papers examine the role of taxes in the location of production facilities. Kemsley (1998) reports results consistent with firms locating production in response to foreign tax credit incentives and US and foreign country tax rates. Wilson (1993) conducts a field-based study while Single (1999) uses the responses of tax executives to a case study to analyze the relative importance of taxes in the location decision. Both approaches offer ways for researchers to supplement archival data and provide insights not available from archival analysis. Wilson suggests that the tax costs of locating in a country are negatively associated with the costs arising from non-tax factors such as the quality of the workforce, infrastructure, and political stability, i.e., tax incentives offset the other costs arising from locating in that country. Single’s results indicate that tax holidays (no foreign taxes are due for the first $n$ years of the firm’s operations) are positive incentives, but rank relatively low in a list of 29 factors. Finally, consistent with firms coordinating their inter-affiliate transfers to mitigate worldwide taxes, Collins and Shackelford (1997) find that dividends, royalties, and sometimes interest payments, but not management fees, between foreign affiliates of US multinationals are negatively associated with the net tax levied on cross-border transfers. Although data limitations prevent explicit testing, they acknowledge that agency costs likely mitigate more extensive worldwide tax minimization. These costs include impaired performance evaluation, resulting from profit reallocation within the organization, and erosion of the firm’s non-tax relations with both home and host governments.

4.2. Multistate

Although heterogeneity across tax systems is a major attraction of international settings, other forms of cross-country variation (e.g., currency, legal system, financial markets, and economic development) potentially introduce correlated omitted variables and measurement error that affect inferences. In an attempt to retain the variation in tax systems while controlling for many sources of heterogeneity, researchers have recently turned to multistate tax research, another area of increased tax planning. Besides reducing measurement error, multistate research is also attractive because states have unique provisions that permit alternative tests of whether taxes affect business activity.
For example, unlike countries that rely on separate accounting to determine the tax base, states and provinces allocate total firm income (from all states) across states according to a predetermined formula that varies across states but relies on the percentage of total sales, property, and payroll in a particular state. Several recent studies address these unique features of state tax provisions. Paralleling many international tax shifting papers, Klassen and Shackelford (1998) find an inverse relation between the income reported in US states and Canadian provinces and their corporate income tax rates. They also link shipping locations to state provisions concerning the taxation of goods shipped out-of-state (so-called “throwback” rules). Goolsbee and Maydew (2000) estimate that double-weighting the sales apportionment factor increases manufacturing employment in the state by 1.1 percent, albeit by imposing negative externalities on other states. Lightner (1999) finds that low corporate tax rates spur employment development more than favorable apportionment formulae or throwback rules. Gupta and Mills (1999) report high returns to firms that invest in state tax avoidance. A series of papers address issues unique to property-casualty insurers, an industry where state taxes are unusually burdensome. These papers conclude that state premium taxes affect insurers’ cross-state expansion (Petroni and Shackelford, 1995) and their statutory filings with regulators (Petroni and Shackelford, 1999). Ke et al. (2000) add that less insurance is purchased in states that tax insurers more heavily, consistent with insurance prices capitalizing the effects of state taxes. In conclusion, multijurisdictional research likely will continue as a major focus of tax research in accounting, if for no other reason than its variation in tax rates and bases provides a powerful setting for testing tax effects. However, documenting that taxes matter likely will be insufficient for publication in the leading journals. The proliferation of multinational (and increasingly multistate) studies has significantly raised the hurdle for incremental contribution in this area. As a mature specialization in tax research in accounting, international tax may not have the growth potential of some areas, but the quality of its published research likely will be high.

5. Methodological issues

The remainder of the paper addresses six methodological issues: estimating marginal tax rates, self-selection bias, specifying tradeoff models, changes vs. levels specifications, implicit taxes in tax burden studies, and using confidential data. Although these issues are not unique to tax research, each is prominent in the extant literature. To date, tax research has not been noted for many methodological advancements. Perhaps the issues raised in this section will initiate evaluation of the appropriate tools for undertaking empirical tax research in accounting.

5.1. Estimating marginal tax rates

Most tax research in accounting requires a marginal tax rate estimate or proxy. In addition, many studies outside the tax area need marginal tax rate measures to control for possible tax effects. A major contribution of tax research in accounting to non-tax research has been the development and assessment of various marginal tax rate estimates.
SW define the marginal tax rate as the change in the present value of the cash flow paid to (or recovered from) the tax authorities as a result of earning one extra dollar of taxable income in the current tax period. This definition incorporates both the asymmetry and multiperiod nature of US corporate tax law. Taxable income is taxed in the current period. Taxable losses are carried back (currently two years) and forward (currently 20 years) to offset taxable income arising in other years. Thus, managers make decisions using tax rates that reflect the firm’s past tax status and anticipated future tax status. To illustrate, suppose a corporate taxpayer has generated more tax deductions than taxable income in the past. The result is $20 of NOL carryforwards, which can shelter future taxable income. Suppose investment and financing plans are fixed and the firm anticipates annual taxable income of $8 beginning one year from today. The current and expected statutory corporate tax rate is 40 percent. Without NOLs, an extra dollar of taxable income would trigger an immediate tax of 40 cents, leaving a marginal tax rate of 40 percent. With $20 of NOLs, the firm faces no immediate tax liability on an extra dollar of income. However, its marginal tax rate is not zero. Instead, $8 per year of taxable income means the firm will pay taxes in three years. Therefore, an extra dollar of taxable income today triggers a tax payment of 40 cents in three years. Discounting after-tax cash flow at 8 percent per year leaves a present value of the incremental tax of 31.75 cents \((40/1.08^3)\) or a corporate marginal tax rate of 31.75 percent. More formally stated, for this scenario

\[ \text{mtr} = \frac{(\$1 \times \text{str}_s)}{(1 + r)^s}, \]

where mtr denotes the marginal tax rate, str$_s$ denotes the expected statutory tax rate in period $s$, the period in which the firm is eventually taxed on the extra dollar of income earned in the current period, and $r$ is the firm’s after-tax discount rate. Therefore, if the current statutory rate is scheduled to fall in one year to 25 percent, then the current marginal tax rate for the NOL firm would be 19.84 percent (or \(0.25/1.08^3\)), even though the rate for a firm without NOLs would remain 40 percent. Analogously, if the statutory rate is expected to increase to 55 percent in one year, then the current marginal tax rate for the NOL firm would be 43.66 percent (or \(0.55/1.08^3\)). In other words, if tax rates are rising, the current marginal tax rate of NOL firms could exceed that of non-NOL firms currently paying taxes at the full statutory rate! Marginal tax rate proxies in the extant literature include a categorical variable for the existence of an NOL carryforward, a categorical variable for the sign of (estimated) taxable income, the effective or average tax rate, and the top statutory tax rate. Each measure has weaknesses. Shevlin (1990) summarizes the limitations of the NOL dummy variable and the dummy variable for the sign of taxable income.\textsuperscript{18} Because it is an average tax rate, the effective tax rate is a flawed measure for assessing the role of taxes in incremental decisions. The top statutory tax rate ignores cross-sectional variation in firms’ marginal tax rates.\textsuperscript{19} If the study includes NOL carryforward firms, precision is added to the marginal tax rate estimate by incorporating the recovery of future taxes through utilization of the NOL.
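The discounting mechanics above are simple enough to verify mechanically. The following is a minimal sketch (ours, not from SW or the papers reviewed; the function name is illustrative) that reproduces the NOL example under the stated assumptions: constant expected annual income, a constant expected statutory rate, year-end tax payments, and no carryback.

```python
import math

# Illustrative sketch (ours): present-value marginal tax rate for a firm
# with an NOL carryforward, assuming constant annual taxable income, a
# constant expected statutory rate, year-end tax payments, and no carryback.
def marginal_tax_rate(nol, annual_income, statutory_rate, discount_rate):
    if nol <= 0:
        return statutory_rate  # the extra dollar is taxed immediately
    # The extra dollar of income is taxed in the year the carryforward runs out.
    s = math.ceil(nol / annual_income)
    return statutory_rate / (1.0 + discount_rate) ** s

# The $20 NOL example above: $8/year income, 40 percent rate, 8 percent discounting.
print(marginal_tax_rate(20, 8, 0.40, 0.08))  # ~0.3175, i.e., 31.75 percent
```

A time-varying expected statutory rate (the str$_s$ of the formula) would simply replace `statutory_rate` with the rate expected to prevail in year $s$.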
Forecasts of future taxable income are needed to estimate the number of years before the NOL is exhausted. Manzon (1994) forecasts future taxable income with a simple valuation model:

$$V = E/r,$$

where $V$ is the market value of the firm’s common equity, $E$ is the expected future earnings or taxable income, and $r$ is the after-tax discount rate. Rearranging,

$$E = Vr.$$

Solving for $s$, the number of periods before the NOL carryforward is exhausted, gives

$$s = \frac{\text{NOL}}{E}.$$

To illustrate, suppose a firm has an NOL carryforward of $6, a market value of equity of $15.625, and $r$ equals 8 percent. These data imply an expected annual future taxable income of $1.25, implying $s$ equals five years. If the statutory tax rate is expected to remain at 35 percent over the foreseeable future and taxes are paid at the end of the year, the marginal tax rate equals 23.8 percent \((0.35/1.08^5)\).

\textsuperscript{18}Two studies examine the accuracy of the NOL data reported by Compustat. Kinney and Swanson (1993) compare the Compustat data with the firms’ financial statement footnote disclosures. They report that when a categorical variable is created from Compustat data item #52 indicating the existence of an NOL carryforward, 10 percent are coded as zero when a carryforward exists, and 2 percent are coded as one when a carryforward is not mentioned in the footnotes. Mills et al. (2000) construct tax NOLs from confidential tax return data and find 9 percent of their sample report a Compustat NOL when the tax return reports no NOL (often when the firm reports a foreign NOL in their footnotes). They also find that 3 percent of their sample report no Compustat NOL when there is a US NOL (often relatively small NOLs). Mills et al. (2000) provide some classification rules to reduce measurement error in Compustat reported NOLs.

\textsuperscript{19}See Graham (1996a, b) for evidence of cross-sectional variation in estimated marginal tax rates. His findings are consistent with several financial accounting papers that document an increase in the frequency and number of firms reporting losses. Moreover, marginal tax rates can vary among firms currently paying tax at the top statutory rate if taxable losses are anticipated in the next two years (under current law). The loss can be carried back and taxes paid in the current year recovered. In that case, the marginal tax rate is the current period statutory tax rate minus the present value of the tax rate in the loss period. If a currently profitable firm does not expect to incur taxable losses within the next two years, the statutory tax rate likely is a reasonable approximation for its marginal tax rate.
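Continuing the sketch from above (again ours, with illustrative names), the Manzon-style forecast plugs directly into the same present-value calculation:

```python
# Manzon (1994)-style forecast: E = V * r, then s = NOL / E (rounded up),
# reusing marginal_tax_rate() from the earlier sketch.
V, r = 15.625, 0.08
E = V * r                                # = $1.25 expected annual taxable income
print(marginal_tax_rate(6, E, 0.35, r))  # ~0.238, i.e., 23.8 percent
```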
Shevlin (1987, 1990) and Graham (1996b) develop more complex simulations that forecast future taxable income based on the firm’s historical taxable income series. Shevlin incorporates the NOL carryback and carryforward rules, and Graham extends the approach to include tax credits and the corporate alternative minimum tax. The interested reader should refer to the original papers because the simulations are too complex to review fully in this paper. They require several assumptions to implement, and estimates vary with the assumptions. Nevertheless, simulated rates have become increasingly popular (e.g., Keating and Zimmerman, 2000; Myers, 2000). Graham’s (1996b) evaluation of marginal tax rate proxies makes a compelling case for the simulated rates, and simulated rates for a large sample of publicly listed firms can be easily accessed at Graham’s website, http://www.duke.edu/~jgraham/, under the “tax rates” option. Do these proxies actually capture the marginal tax rates that managers use to make decisions? Unfortunately, as with discretionary accruals, this question is difficult to answer because “true” marginal tax rates are unobservable. Using confidential tax return data, Plesko (1999) attempts an evaluation of the marginal tax rate proxies. Unfortunately, Plesko’s data are limited to one period, preventing him from incorporating multiperiod effects of the asymmetric treatment of gains and losses. He calculates each firm’s taxable income from tax return data and then uses the statutory tax rate for that level of taxable income as the firm’s “true” marginal tax rate. He assigns a marginal tax rate of zero if the firm reports taxable losses even though the loss may be utilized in a future year or carried back to a prior year. Plesko concludes that two binary variables capture most of the variation in the marginal tax rates. However, given the single-period nature of Plesko’s calculation, this conclusion is premature as a guide for estimating corporate marginal tax rates.\(^{20}\) Access to a time series of firm tax data would strengthen Plesko’s analysis by enabling incorporation of NOL carrybacks and carryforwards. However, such data enhancements are of limited value if future taxable income realizations are a function of current and past actions taken by the firm in response to its tax status (Shevlin, 1990, note 8). If the endogeneity of future period taxable income realizations to current marginal tax rates is of second order magnitude, then future taxable income realizations could be used to calculate a present value measure of marginal tax rates. Regardless, the relevant marginal tax rate is the one used by managers, and a worthwhile endeavor would be to document (possibly by field study) how firms incorporate their tax status into their decisions. Determining whether managers use a simple binary measure based on the sign of taxable income or more complex measures, as assumed by the simulation approaches, would be an important finding.

\(^{20}\)See Shevlin (1999) for further (critical) discussion of Plesko’s paper.

5.2. Self-selection bias

Tax studies commonly estimate models taking the following form:

\[ y_i = \beta' X_i + \delta I_i + \varepsilon_i, \] (1)

where \( I \) is a categorical variable indicating group membership. For example, in their tests of tax, earnings, and regulatory management, Beatty and Harris (1999) and Mikhail (1999) compare two groups, publicly- and privately-held firms. In another setting, Henning and Shaw (2000) investigate the extent to which 1993 legislation, which provided deductibility for goodwill amortization, affected the allocation of acquisition purchase prices across assets. Among various tests, they compare allocations between two groups, targets that stepped up tax basis and targets that did not. Examining the same event, Ayers et al. (2000b) compare acquisition premiums between two groups, firms likely qualifying for deductible goodwill amortization and those not likely qualifying. Each of these papers uses ordinary least squares to estimate regression models that are similar in structure to Eq. (1).
Consequently, each faces a self-selection problem that may result in biased estimates of \( \delta \). Interested readers are referred to Maddala (1991) and Greene (1990). Intuitively, two conditions must hold for ordinary least squares to produce biased estimates of \( \delta \). One, non-random selection determines group membership (i.e., firms self-select into groups). Two, group determinants are correlated with the \( X \) variables. If both conditions hold, one solution is to include the inverse Mills ratio as an additional regressor to correct this omitted correlated variables problem.\(^{21}\) Practically, if results are unaltered by inclusion of the Mills ratio, erroneous inferences from self-selection bias can be ruled out. For example, Guenther et al. (1997) recognize the potential self-selection bias in their study, report that their OLS results are similar to their two-stage results, and dismiss self-selection as a material problem in their setting. Including the Mills ratio effectively transforms the estimation into two regressions. The first stage estimates a model explaining the group membership. The second stage estimates the original relation between group membership and the dependent variable with the inclusion of the inverse Mills ratio. Thus, studies examining the choice of group membership (e.g., ISO disqualification, organization form, domestic vs. foreign location, LIFO inventory choice, and acquisition or divestiture structure) are unaffected by self-selection problems because these studies are modeling the choice itself. Self-selection becomes a problem when the researcher is interested in the effects of the selection on some other decision variable, i.e., when group membership is an explanatory variable rather than a dependent variable or only one group is examined. For example, in the latter case, Hunt et al. (1996), by examining the earnings management behavior of LIFO firms as a function of taxes and financial reporting factors, ignore the self-selection issue: firms that select LIFO likely do so because of the opportunities it offers to reduce taxes and manage reported earnings.\(^{22}\) A second approach to the self-selection problem is offered by Himmelberg et al. (1999) and implemented in an accounting tax paper by Ke (2000). Modeling the group choice in a first-stage regression assumes that observable variables are available (some of which are not already in the second-stage regression).\(^{23}\) To the extent variables are not observable or available (e.g., the group choice and dependent variable in the second stage are jointly determined by firm-specific unobservable characteristics), a firm fixed effects model can control for (or mitigate) the effects of any self-selection biases.

\(^{21}\)It is not clear that this solution is implementable if the group membership variable is to be interacted with other explanatory variables.

\(^{22}\)Note that the argument is that LIFO choice is likely correlated with some of the explanatory variables examined by Hunt et al. (1996) and thus a check for self-selection biases would require inclusion of the Mills ratio from a first-stage selection model. The argument is not that the LIFO choice is correlated with the other dependent variables examined in Hunt et al. (1996). If that were the case, the LIFO choice would be endogenous (that is, dependent on the other dependent variables) and LIFO choice would need to be modeled as part of a simultaneous equations system, which differs from the self-selection issue discussed here.

\(^{23}\)This comment also suggests that the validity (or strength of the control offered) of the inverse Mills approach depends upon how well the researcher models the group choice in the first-stage regression. To the extent the researcher does a poor job, the more likely it is that including the inverse Mills ratio in the second stage will not change results, leading the researcher to falsely conclude that self-selection does not appear to be an important issue in their setting. This comment applies to all instrumental variable approaches.
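As a concrete illustration of the two-stage check, here is a minimal simulation sketch (ours; the names and data-generating process are illustrative, not from any of the studies cited). A probit models group membership, the inverse Mills ratio is formed from its linear index, and the second stage adds it to Eq. (1). In this simulated data the selection is in fact exogenous, so the Mills term should leave the coefficient on \( I \) essentially unchanged, which is precisely the robustness check described above.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1000
W = sm.add_constant(rng.normal(size=(n, 2)))     # first-stage determinants of group choice
I = (W @ np.array([0.2, 0.8, -0.5]) + rng.normal(size=n) > 0).astype(float)
X = sm.add_constant(rng.normal(size=n))          # second-stage controls
y = X @ np.array([1.0, 0.5]) + 0.3 * I + rng.normal(size=n)

# Stage 1: probit for group membership; form the inverse Mills ratio.
probit = sm.Probit(I, W).fit(disp=0)
xb = probit.fittedvalues                         # linear index W'gamma
mills = np.where(I == 1, norm.pdf(xb) / norm.cdf(xb),
                 -norm.pdf(xb) / (1.0 - norm.cdf(xb)))

# Stage 2: the original regression, Eq. (1), with the Mills ratio appended.
stage2 = sm.OLS(y, np.column_stack([X, I, mills])).fit()
print(stage2.params)  # compare the coefficient on I with and without the Mills term
```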
Finally, self-selection can be a problem even if firms find themselves grouped by a seemingly exogenous change, e.g., a change in tax policy. For example, as discussed above, several studies attempt to assess whether firms managed book accruals to reduce taxes triggered by the TRA 86’s book income adjustment (BIA) to the alternative minimum tax (e.g., Gramlich, 1991; Boynton et al., 1992; Dhaliwal and Wang, 1992; Manzon, 1992). Suppose a sample is drawn including both treatment firms (those likely affected by the provision) and control firms (those not likely affected by the provision). A measure of accrual management is then regressed on a variable that segregates treatment and control firms. Does this structure constitute a potential self-selection bias? On the surface, it appears that the firms did not self-select. However, the BIA was targeted at firms reporting high book income to shareholders and low taxable income to the tax authorities. To the extent determinants of these reporting choices correlate with other determinants of accrual management (the dependent variable), the AMT studies suffer from self-selection. The implication of this example is that researchers should carefully consider the process through which groups are produced. In summary, the seriousness of self-selection is unresolved. At a minimum, researchers should consider a robustness check that compares single-stage OLS results to two-stage tests including the Mills ratio as an additional regressor.

5.3. Specifying tradeoff models

Many studies reviewed in the tax and non-tax tradeoff section can be characterized as using the following design (e.g., Scholes et al., 1990; Matsunaga et al., 1992):

\[ Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \varepsilon, \] (2)

where \( Y \) denotes the choice under study, for discussion purposes here assumed to be a categorical variable, 0, 1, with firms undertaking the choice coded 1. \( X_1 \) is a variable measuring a firm’s tax benefits/costs, again assumed to be 0, 1 with 0 (1) being low (high) tax firms, and \( X_2 \) is a variable measuring non-tax costs/benefits, again coded 0, 1 with 0 (1) being firms with low (high) non-tax costs. Suppose the non-tax costs are financial reporting considerations. A significant coefficient on \( X_1 \) (\( X_2 \)) provides evidence that taxes (financial reporting) affect the choice. However, significant coefficients on both variables also have been interpreted as evidence that firms trade off taxes and financial reporting in the choice. We question this stronger interpretation. In a regression model such as Eq. (2), the correct interpretation of a significant positive coefficient on \( X_1 \) is that after controlling for the effects of the other variables in the model, the firm’s tax status has a positive effect on the choice. A similar interpretation attaches to the other coefficient(s).
In other words, the regression coefficient captures the incremental effect of the firm’s tax status on the firm’s choice. If the researcher wishes to make the stronger interpretation that firms trade off taxes with other non-tax costs and benefits, then a different model specification is necessary. Tradeoffs should mean that the effect of taxes on the firm’s choice depends on the level of the non-tax costs, or conversely, the effect of non-tax costs on the firm’s choice depends on the firm’s marginal tax rate. To capture this effect, we suggest a model specification which includes an interaction between tax and non-tax effects. For example

\[ Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 (X_1 \times X_2) + \varepsilon. \] (3)

A significant coefficient on the interaction term is consistent with firms considering the level of the other variable and hence trading off tax and non-tax costs. For purposes of developing the discussion, we present the following $2 \times 2$, where FRC denotes financial reporting costs. Assume the choice reduces taxable income, saving taxes, but also reduces reported accounting earnings.\footnote{The reasoning is unaltered if the choice (i) increases reported income, but also increases tax costs for high-tax firms, or (ii) more generally, potentially increases or reduces taxable and accounting income, e.g., sale of securities to realize gains and losses, disposal of assets, and LIFO inventory management.}

\begin{center}
\begin{tabular}{llcc}
 & & \multicolumn{2}{c}{$X_1$} \\
 & & 0 (low tax) & 1 (high tax) \\
\hline
$X_2$ & 0 (low FRC) & a & b \\
 & 1 (high FRC) & c & d \\
\end{tabular}
\end{center}

We discuss each cell in turn. In cell a, the firm faces both a low tax rate and low financial reporting costs. Thus, there is no tax incentive to undertake the transaction (and no real incentive to do the transaction for financial reporting), and $Y$ is predicted to equal zero. In cell b, the firm is high-tax and faces a low financial reporting cost. The firm is expected to undertake the income reducing action so $Y = 1$. In cell c, the firm is low-tax and faces high financial reporting costs. There is little incentive to undertake the action; thus $Y = 0$. Finally, in cell d, the firm is high-tax but faces high financial reporting costs. Here the firm must weigh both taxes and financial reporting costs. The probability of the firm taking the action lies between 0 and 1. This analysis shows that the extent to which taxes matter depends on the financial reporting costs faced by the firm. In this simple example, all high-tax firms have an incentive to reduce income and save taxes. However, only those firms in cell d trade off taxes and financial reporting costs. They likely engage in less tax reducing behavior than high-tax firms in cell b that are less encumbered by financial reporting costs. Finally, studies that include an interactive term on taxes for ownership structure (e.g., Klassen’s (1997) insider ownership measure) essentially are estimating the interaction model described above. For example, the categorical variable for ownership structure may denote firms that are manager-controlled or closely held and thus less concerned with financial reporting costs. Other papers, e.g., Beatty and Harris (1999) and Mikhail (1999), include an indicator variable for ownership interacted on each of the tax and non-tax costs to examine whether their effects vary with firm ownership (public versus private).
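To make the specification concrete, here is a minimal simulated sketch (ours; the data-generating process and magnitudes are purely illustrative) of estimating Eq. (3) for a binary choice with a logit. The negative interaction coefficient is the tradeoff evidence: high financial reporting costs damp the tax effect, as in cell d of the $2 \times 2$ above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
x1 = rng.integers(0, 2, n)   # 1 = high-tax firm
x2 = rng.integers(0, 2, n)   # 1 = high financial reporting costs (FRC)
# Taxes encourage the income-reducing action; high FRC damps the tax effect.
latent = -2.0 + 3.0 * x1 - 1.5 * (x1 * x2) + rng.logistic(size=n)
y = (latent > 0).astype(float)

X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.params)  # beta_3 < 0: the tax effect depends on the level of FRC
```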
In summary, the appropriate model specification in a tax and non-tax tradeoff study depends on the research question, and any resulting inferences should be based on the model estimated.

5.4. Changes vs. levels

In their investigation of the relation between a firm’s marginal tax rate and the issuance of new debt, both MacKie-Mason (1990) and Graham (1996a) illustrate how a “changes” (rather than “levels”) approach allows a more powerful test of debt and taxes. Examining the issuance of new debt rather than total outstanding debt avoids two problems that plague many “levels” studies. First, a firm’s capital structure (as well as other accounts) reflects past decisions that were based on expectations that may not have been fulfilled because of unexpected outcomes (e.g., a change in product markets, competition, the economy, or tax policy). Thus, even if decisions are tax-motivated when undertaken, later they may appear contrary to predicted tax responses. Because it is costly to restructure capital (e.g., debt–equity swaps), cross-section “levels” studies may erroneously conclude that taxes do not affect capital structure decisions. In other words, recontracting costs inhibit firms from immediately restructuring their economic balance sheets when their tax status unexpectedly changes. Thus, cross-sectional tests of debt levels can fail to find a tax effect when it actually exists. The second “levels” problem that the changes approach avoids is the downward bias on the regression coefficient that occurs when researchers compare ex-post choices and ex-post marginal tax rates when the choice affects the rate. For example, theory predicts that high-tax firms will use debt to lower their tax bills. By increasing debt, however, firms increase the interest deduction and lower their marginal tax rate. Thus, in equilibrium, all firms may appear to face similar marginal tax rates. If so, tests can fail to detect a relation between ex-post debt levels and ex-post marginal tax rates when, in fact, high-tax firms increased their debt levels to garner the tax shield offered by debt. An alternative to the changes specification is to use marginal tax rates (and, where necessary, other variables) estimated on a but-for approach (also referred to as pre- or as-if measures). An example of this approach is Graham et al. (1998), in which they show that debt levels and the usual after-financing tax rates are negatively correlated but that debt levels and before-financing tax rates (but-for marginal tax rates) are positively associated, as predicted by theory.\(^{25}\)

\(^{25}\)As noted in Section 2, in calculating but-for or as-if variables, the researcher has to be careful not to induce biases towards the alternative hypothesis. See discussions of this issue in Shevlin (1987) and Maydew et al. (1999).
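The second “levels” problem is easy to see in a simulation. The sketch below (ours; the data-generating process and magnitudes are purely illustrative) lets high before-financing tax rate firms issue more debt, with interest deductions then lowering the ex-post rate: regressing debt on the ex-post rate yields a spurious negative slope, while the but-for (before-financing) rate recovers the positive tax effect, mirroring the Graham et al. (1998) pattern described above.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
mtr_before = rng.uniform(0.0, 0.4, n)                # before-financing ("but-for") MTR
new_debt = 5.0 * mtr_before + rng.normal(0, 0.5, n)  # high-tax firms issue more debt
mtr_after = mtr_before - 0.15 * new_debt             # interest deductions lower the ex-post MTR

levels = sm.OLS(new_debt, sm.add_constant(mtr_after)).fit()
but_for = sm.OLS(new_debt, sm.add_constant(mtr_before)).fit()
print(levels.params[1], but_for.params[1])  # spurious negative slope vs. ~5.0
```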
5.5. Tax burdens and implicit taxes

The theoretical and empirical evidence suggests that implicit taxes are pervasive. Besides the discussions above, a few sources of implicit taxes include rapid depreciation, tax credits, expensing of certain investments (e.g., advertising and research and development), and special tax treatment for industries, such as oil and gas, timber, and real estate. If implicit taxes are as pervasive as they appear, it is important that they be incorporated in measures of the total tax burden levied on the economy. Unfortunately, to our knowledge, studies that assess corporate tax burdens (e.g., Zimmerman, 1983; Porcano, 1986; Wilkie and Limberg, 1990, 1993; Wang, 1991; Kern and Morris, 1992; Shevlin and Porter, 1992; Collins and Shackelford, 1995, 2000; Gupta and Newberry, 1997) and individual tax burdens (e.g., Seetharaman and Iyer, 1995; Dunbar, 1996; Iyer and Seetharaman, 2000) ignore implicit taxes. These important tax policy studies typically compute effective (or average) tax rates as a measure of taxes payable (current tax expense or total tax expense) divided by a measure of firm earnings.\footnote{See Omer et al. (1991) and Callihan (1994) for reviews of the effective tax rate literature and methodology. Plesko (1999) attempts an evaluation of ETR studies using actual tax return data. He argues and attempts to document that financial statement based ETRs are measured with error. We agree that financial statement based ETRs contain measurement error when compared to a benchmark of tax return tax burdens. However, depending on the research question, financial statement based ETRs are the appropriate measure to study and tax-based ETRs then contain measurement error. See Shevlin (1999) for further discussion of this issue.} Tax burden studies usually acknowledge that implicit taxes are ignored because they are difficult to measure. Unfortunately, if implicit taxes are material (or alternatively stated, prices are set by taxpaying investors), omitting them from distribution analyses potentially leads to erroneous inferences and flawed policy recommendations. Advances in the technology for estimating implicit taxes would be an important contribution to the tax burden literature. To illustrate the shortcoming in the current studies, suppose A invests $100 of capital in fully taxable investments, earning a pretax rate of return of 10 percent per annum. B invests $100 of capital in a tax-exempt activity (e.g., municipal bonds), earning a pretax rate of return of 7 percent per annum. If the statutory tax rate is a flat 30 percent on all taxable income, both firms earn $7 after-tax, but A has an effective tax rate of 30 percent and B has an effective tax rate of 0 percent using current tax burden methodology. If instead implicit taxes could be incorporated in the analysis, the average tax rate for both firms would be 30 percent. A’s 30 percent would be all explicit. B’s 30 percent would be all implicit. Unfortunately, measuring implicit taxes is rarely as simple as in the above example. Callihan and White (1999) attempt to derive an estimate of implicit taxes using publicly available financial statement data. Briefly, they estimate the implicit taxes as

\[(PTI - CTE)/(1 - str) - PTI,\]

where PTI is the firm’s pretax income, CTE is the current tax expense, and str is the top statutory tax rate. The first term represents an estimate of the pretax return the firm would have earned had it invested in fully taxable assets while the second term represents the pretax return on actual investments. We can define $CTE = (PTI - X)str$, where $X$ is the difference between taxable and accounting income arising from temporary and permanent differences and tax credits. Substituting, implicit taxes equal $Xstr/(1 - str)$. Thus, implicit taxes are estimated as the amount of tax preferences times the top statutory tax rate grossed up to a pretax value or, equivalently stated, the pretax value of the tax savings arising from the use of tax preferences.
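A numerical check of this estimate (a sketch of ours; the function name is illustrative) ties it back to the A/B example above: firm B reports PTI of $7 and no current tax expense, and the formula recovers the $3 of pretax return that B forgoes, i.e., a 30 percent implicit tax on the $10 benchmark pretax return.

```python
# Callihan and White (1999)-style implicit tax estimate described above:
# the benchmark fully-taxable pretax return minus the actual pretax return.
def implicit_tax(pti, cte, str_rate):
    return (pti - cte) / (1.0 - str_rate) - pti

print(implicit_tax(7.0, 0.0, 0.30))   # firm B: 3.0, all implicit
print(implicit_tax(10.0, 3.0, 0.30))  # firm A: 0.0, all explicit
```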
When deflated by shareholders’ equity, the Callihan and White measure is equivalent to the tax subsidy measure derived by Wilkie and Limberg (1993). The measure can also be restated as $(str - etr)/(1 - etr)$, where etr is the firm’s effective tax rate (total tax expense/pretax book income), indicating that the measure is really only capturing variations in firms’ effective tax rates and thus is not directly estimating firms’ implicit taxes. Callihan and White’s approach may be a start toward developing useful estimates of implicit taxes at the firm level, but obviously more work is needed. Similarly, Mazur et al. (1986) may aid researchers in assessing individuals’ tax burdens.

5.6. Confidential data

A distinguishing feature of several international tax papers (e.g., Collins et al., 1997b; Collins and Shackelford, 1997; Mills and Newberry, 2000) and some papers outside the international area (e.g., Boynton et al., 1992; Plesko, 1999; Landsman et al., 2001) is the use of data that are not publicly available, such as confidential tax returns. Access to confidential tax returns typically arises from employment (e.g., Plesko, 1999), consulting (e.g., Mills and Newberry, 2000), or special arrangements with the IRS (e.g., Collins et al., 1995a). Access to confidential firm data typically is gained through personal contacts with firm officials (e.g., Landsman and Shackelford, 1995) or financial consultants (e.g., Myers, 2000) or by solicitation through mailings (e.g., Shackelford, 1991; Phillips, 1999; Yetman, 2000). Because the scientific method relies on the ability of researchers to replicate studies, should the research community rely on knowledge gained from using confidential data?\footnote{Although replications per se are not commonly published in leading accounting journals, we would argue that replication occurs nonetheless. First, it is not unusual for doctoral students as part of their coursework to replicate prior research. Inability to complete such replications attracts the attention of students and their advisors and can lead to publications. Second, many publications are extensions that began by replicating the prior findings. Third, it is not unusual for lower tier journals to publish replications of papers in leading journals.} Our opinion (note one co-author has used confidential data extensively) is that such research should not only be published, but also encouraged. There are at least four reasons for this view. First, even studies using confidential data can be replicated. Researchers within the Treasury can replicate studies using confidential tax return data at relatively low cost. Other researchers can follow the lead of the initial researchers and obtain access to confidential data. (To do so for replication alone, however, likely is a poor use of a valuable resource.) In many ways research based on confidential data is similar to much accounting research that relies on costly, privately (researcher) collected data (field research, experimental economics, judgment and decision-making research). Second, many research questions that are investigated with confidential data could be addressed using publicly available data, albeit imperfectly. Access to confidential data often is motivated by an attempt to reduce measurement error in a key variable. For example, several papers use publicly available financial statement data to examine the effects of the book income adjustment for the alternative minimum tax. Boynton et al. (1992) triangulate those studies using tax return data.
Third, occasionally confidential data enable researchers to address questions that could not be addressed with publicly available data. For example, Collins and Shackelford (1997) examine cash transfers between commonly owned foreign subsidiaries of US companies. This study could not be undertaken with publicly available data, such as financial statements. Fourth, in the same way that Fama (1980) argues for ex-post settling up in the managerial labor market as a disciplining device, reputation effects in academe dampen abuse of confidential data. Despite many reasons for using confidential data, the experience of one co-author is that confidential data can be “fools’ gold”. Access can be slow, e.g., gaining permission through the IRS can take months or even years. Confidential data may not be computer readable. Sample sizes may be small and sampling non-random. Even if accessible, no data (even tax returns and private firm information) are complete, and none can transform an uninteresting research question into an interesting one. Thus, before investing in costly confidential data, we would encourage researchers to ensure that the confidential data will significantly enhance the quality of the research.

6. Closing remarks

This paper provides a historical record of the scholarly journey that has led to the current state of empirical accounting research on taxation. This review reflects the struggles of empirical tax research in accounting to apply an initial structure. We are encouraged by the rapid progress of the field in the last few years and look forward to further research enhancing our understanding of the role of taxes in organizations. As the area enters its adolescence, we envision five developments. First, the better research in the future will move beyond simply documenting that taxes matter. It will more precisely quantify the extent to which taxes matter and the impediments to tax minimization. Second, additional theoretical guidance is needed to move the literature beyond SW and longstanding finance papers. Notwithstanding some theoretical work in transfer pricing (e.g., Halperin and Srinidhi (1987, 1996), Harris and Sansing (1998), and Sansing (1999, 2000), among others), the theoretical tax work in accounting generally addresses issues of secondary interest to tax accounting empiricists, e.g., tax compliance. Without more structure, the literature covered in this paper will stagnate at the documentation stage. By developing theory or importing theories from related fields, hypothesis testing of competing theories will enable the field to mature. Guenther and Sansing (2000) illustrate how modeling provides insights and guides the development of hypotheses and empirical research. They examine the firm valuation effects of the accounting for deferred taxes. Their model, in contrast to conventional wisdom, shows that the timing of expected deferred tax reversals should not affect the value of the firm. This result has implications both for empirical research examining how the market values deferred tax assets and liabilities and for standard setters who propose requiring firms to report a present value estimate of the deferred tax assets and liabilities (i.e., a function of the timing of reversals). Theoretical structure also is improving the capitalization literature (e.g., Shackelford and Verrecchia, 1999; Collins and Kemsley, 2000; Lang and Shackelford, 2000). Similarly, Olhoft (1999) formally introduces economies of scale to international tax avoidance.
Research is needed that incorporates taxes and other organizational choices, such as vertical integration, outsourcing, and decentralization. Third, the methodological concerns raised in this paper imply that more rigorous econometrics may be needed. To date, this area has imported its methodology from other areas, particularly financial accounting. Researchers should consider whether econometric procedures that have not been needed in financial accounting would advance the tax field. Fourth, we anticipate tax research in accounting to better incorporate knowledge from other areas, particularly finance and public economics. Because SW caused a paradigm shift among tax accountants, we have a tendency to ignore the long history of tax analysis in finance and economics. For example, the relation between stock prices and investor-level taxes has been investigated extensively in both economics and finance. Accountants should be careful to avoid redundancy. Fifth and closely related to the last development, tax research in accounting should increasingly impact the tax research being undertaken in finance and economics as the common interest across disciplines is better recognized. Recent contributions by accountants into the capitalization of capital gains taxes in equity prices may be a harbinger of future cross-pollination that benefits accounting and related fields. We encourage accountants to engage in joint research with tax researchers in economics and finance (e.g., Shackelford and Slemrod (1998), Goolsbee and Maydew (2000), and Harris et al. (2001), among others). We close with a few thoughts about potentially new areas of research. Because advances in knowledge are inherently unpredictable and we do not pretend to have perfect foresight, these might be viewed as questions that we would like answered. First, strong links have been developed between financial accounting and taxes. Many studies reviewed here involve research jointly conducted by tax and financial accounting scholars. Some accounting scholars (including one co-author of this paper) are members of both camps. Surprisingly, similar bridges have not developed between tax and managerial accounting. The empirical focus of most current tax research may partially account for its affinity with empirical financial accounting. However, arguably tax, as an internal function of the organization, fits more naturally with the questions that interest managerial accounting than with the questions from financial accounting. Income shifting among commonly owned firms (as observed in international tax settings), compensation, and the effects of incentive costs are a few topics closely related to managerial accounting. For example, transfer prices for taxes are derived from cost allocations. A recent example of potential links between managerial and tax research is Phillips (1999), who examines the link between management compensation schemes and aggressive tax planning. One likely outcome of a managerial accounting emphasis would be enhanced interest by accountants in non-income taxes, such as sales, use, Internet, property, and compensation taxes. We look forward to more papers that span tax and managerial accounting research. Second, a potentially understudied topic is accounting for income taxes, which neither tax research nor financial accounting research has closely evaluated.
In recent years, a few papers have begun to analyze accounting for income taxes (e.g., Givoly and Hayn, 1992; Gupta, 1995; Amir et al., 1997; Ayers, 1998; Miller and Skinner, 1998; Sansing, 1998; Collins et al., 2000). However, none, to our knowledge, directly addresses the extent to which accounting for income taxes affects income tax planning. Anecdotal evidence suggests publicly traded firms manage book effective tax rates. Collaboration between tax and financial accounting researchers could address how firms coordinate reducing tax payments and managing book effective tax rates.\textsuperscript{28} Finally, little is known about the potential cross-sectional differences in the willingness of firms to avoid taxes. Extant studies show that financial reporting costs and agency considerations constrain tax aggressiveness. Anecdotal evidence, however, suggests that firms (like individuals) vary in their tax aggressiveness. Questions that we find interesting include: What are the determinants of tax aggressiveness? Are growth firms, decentralized firms, and firms led by non-financial CEOs less tax aggressive? Why do some firms compensate on pretax measures and others use after-tax measures? One determinant that has attracted attention is the extent to which managers or other insiders control the firm. Scholes et al. (1992) suggest that closely held firms face lower financial reporting costs. Klassen (1997), among others, conjectures that higher managerial ownership lowers market pressures to report higher income, thus lowering financial reporting costs and enhancing tax aggressiveness. The evidence, however, is mixed. Matsunaga et al. (1992) find no evidence that managerial ownership influenced disqualifying dispositions of incentive stock options. Neither Gramlich (1991) nor Guenther (1994a) finds manager-owned firms more willing to shift income around TRA 86. On the other hand, Klassen (1997) finds that managerial ownership matters and concludes that high-tax manager-owned firms were more willing to save taxes than other firms were. An extension of this research is the comparison of private versus public firms in the banking and insurance industries that we discussed earlier. We find these types of analysis interesting and useful in better understanding the organizational factors that affect tax aggressiveness. We look forward to future studies that will further explain the determinants of tax planning.

\textsuperscript{28}Recall that effective tax planning is not the equivalent of minimizing taxes, which is often the implied objective when the researcher studies the financial statement effective tax rate. Effective tax planning has the objective of maximizing the after-tax rate of return, while tax minimization has the objective of lowering taxes. Further, by studying the effective tax rate (defined as total tax expense as a percent of pretax book income), the researcher is only capturing the extent to which the firm avails itself of permanent differences and tax credits in its tax planning activities. Accelerating deductions and delaying income recognition, to the extent they give rise to temporary differences, have no effect on the effective tax rate, yet these income-shifting actions can increase the after-tax rate of return by saving taxes. However, if the researcher is interested in determining how aggressively the firm pursues tax minimization, then current tax expense (as a proxy for taxes paid) as a percent of pretax book income may be a reasonable measure.

\textbf{References}

Adiel, R., 1996. Reinsurance and the management of regulatory ratios and taxes in the property-casualty insurance industry. Journal of Accounting and Economics 22 (1–3), 207–240.
Alford, A., Berger, P., 1998. The role of taxes, financial reporting, and other market imperfections in structuring divisive reorganizations. Working paper, Wharton School, University of Pennsylvania, Philadelphia, PA.
Amir, E., Kirschenheiter, M., Willard, K., 1997. The valuation of deferred taxes. Contemporary Accounting Research 14 (4), 597–622.
Auerbach, A., Hassett, K., 2000. On the marginal source of investment funds. National Bureau of Economic Research working paper 7821.
Austin, J., Gaver, J., Gaver, K., 1998. The choice of incentive stock options vs. nonqualified options: a marginal tax rate perspective. Journal of the American Taxation Association 20, 1–21.
Ayers, B., 1998. Deferred tax accounting under SFAS No. 109: an empirical investigation of its incremental value-relevance relative to APB No. 11. The Accounting Review 73 (2), 195–212.
Ayers, B., Cloyd, B., Robinson, J., 2000a. Capitalization of shareholder taxes in stock prices: evidence from the Revenue Reconciliation Act of 1993. Working paper, University of Georgia, Athens, GA.
Ayers, B., Lefanowicz, C., Robinson, J., 2000b. The effects of goodwill tax deductions on the market for corporate acquisitions. Journal of the American Taxation Association 22 (Suppl.), 34–50.
Bagwell, L., Shoven, J., 1989. Cash distribution to shareholders. Journal of Economic Perspectives 3, 129–140.
Ball, R., 1972. Changes in accounting techniques and stock prices. Journal of Accounting Research 10 (Suppl.), 1–38.
Balsam, S., Ryan, D., 1996. Response to tax law changes involving the deductibility of executive compensation: a model explaining corporate behavior. Journal of the American Taxation Association 18 (Suppl.), 1–12.
Balsam, S., Halperin, R., Mozes, H., 1997. Tax costs and nontax benefits: the case of incentive stock options. Journal of the American Taxation Association 19, 19–37.
Bankman, J., 1998. The new market in corporate tax shelters. Working paper, Stanford University, Stanford, CA.
Bartov, E., 1993. The timing of asset sales and earnings manipulation. Accounting Review 68 (4), 840–855.
Beatty, A., Harris, D., 1999. The effects of taxes, agency costs and information asymmetry on earnings management: a comparison of public and private firms. Review of Accounting Studies 4, 299–326.
Beatty, A., Berger, P., Magliolo, J., 1995a. Motives for forming research & development financing organizations. Journal of Accounting and Economics 19 (2&3), 411–442.
Beatty, A., Chamberlain, S., Magliolo, J., 1995b. Managing financial reports of commercial banks: the influence of taxes. Journal of Accounting Research 33 (2), 231–261.
Berger, P., 1993. Explicit and implicit tax effects of the R&D tax credit. Journal of Accounting Research 31 (2), 131–171.
Black, F., 1980. The tax consequences of long-run pension policy. Financial Analysts Journal 36, 1–28.
Blouin, J., Raedy, J., Shackelford, D., 2000a. Capital gains holding periods and equity trading: evidence from the 1998 tax act. NBER working paper 7827.
Blouin, J., Raedy, J., Shackelford, D., 2000b. Capital gains taxes and price and volume responses to quarterly earnings announcements. Working paper, University of North Carolina, Chapel Hill, NC.
Blouin, J., Raedy, J., Shackelford, D., 2000c. The impact of capital gains taxes on stock price reactions to S&P 500 inclusion. NBER working paper W8011.
Bowen, R., Pfeiffer, G., 1989. The year-end LIFO purchase decision: the case of Farmer Brothers Company. The Accounting Review 64, 152–171.
Boynton, C., Dobbins, P., Plesko, G., 1992. Earnings management and the corporate alternative minimum tax. Journal of Accounting Research 30 (Suppl.), 131–153.
Callihan, D., 1994. Corporate effective tax rates: a synthesis of the literature. Journal of Accounting Literature 13, 1–43.
Callihan, D., White, R., 1999. An application of the Scholes and Wolfson model to examine the relation between implicit and explicit taxes and firm market structure. Journal of the American Taxation Association 21 (1), 1–19.
Choi, W., Gramlich, J., Thomas, J., 1998. Potential errors in detection of earnings management: reexamining the studies of the AMT of 1986. Working paper, Columbia University, New York, NY.
Clinch, G., Shibano, T., 1996. Differential tax benefits and the pension reversion decision. Journal of Accounting and Economics 21 (1), 69–106.
Cloyd, B., Pratt, J., Stock, T., 1996. The use of financial accounting choice to support aggressive tax positions: public and private firms. Journal of Accounting Research 34 (1), 23–43.
Collins, J., Kemsley, D., 2000. Capital gains and dividend capitalization in firm valuation: evidence of triple taxation. Accounting Review 75, 405–427.
Collins, J., Shackelford, D., 1992. Foreign tax credit limitations and preferred stock issuances. Journal of Accounting Research 30 (Suppl.), 103–124.
Collins, J., Shackelford, D., 1995. Corporate domicile and average effective tax rates: the cases of Canada, Japan, the United Kingdom, and the United States. International Tax and Public Finance 2 (1), 55–83.
Collins, J., Shackelford, D., 1997. Global organizations and taxes: an analysis of the dividend, interest, royalty, and management fee payments between U.S. multinationals’ foreign affiliates. Journal of Accounting and Economics 24 (2), 151–173.
Collins, J., Shackelford, D., 2000. Did the tax cost of corporate domicile change in the 1990s? A multinational analysis. Working paper, University of North Carolina, Chapel Hill, NC.
Collins, J., Kemsley, D., Shackelford, D., 1995a. Tax reform and foreign acquisitions: a microanalysis. National Tax Journal 48 (1), 1–21.
Collins, J., Shackelford, D., Wahlen, J., 1995b. Bank differences in the coordination of regulatory capital, earnings and taxes. Journal of Accounting Research 33 (2), 263–291.
Collins, J., Geisler, G., Shackelford, D., 1997a. The effect of taxes, regulation, earnings, and organizational form on life insurers’ investment portfolio realizations. Journal of Accounting and Economics 24 (3), 337–361.
Collins, J., Kemsley, D., Shackelford, D., 1997b. Transfer pricing and the persistent zero taxable income of foreign-controlled U.S. corporations. Journal of the American Taxation Association 19 (Suppl.), 68–83.
Collins, J., Kemsley, D., Lang, M., 1998. Cross-jurisdictional income shifting and earnings valuation. Journal of Accounting Research 36 (2), 209–229.
Collins, J., Hand, J., Shackelford, D., 2000. Valuing deferral: the effect of permanently reinvested foreign earnings on stock prices. In: Hines, J. (Ed.), International Taxation and Multinational Activity. University of Chicago Press, Chicago.
Cushing, B., LeClere, M., 1992. Evidence on the determinants of inventory accounting policy choice. Accounting Review 67 (2), 355–366.
DeAngelo, H., Masulis, R., 1980. Optimal capital structure under corporate and personal taxation. Journal of Financial Economics 8, 3–29.
Dhaliwal, D., Trezevant, R., 1993. Capital gains and turn-of-the-year stock price pressures. Advances in Quantitative Analysis of Finance and Accounting 2, 139–154.
Dhaliwal, D., Wang, S., 1992. The effect of book income adjustment in the 1986 alternative minimum tax on corporate financial reporting. Journal of Accounting and Economics 15 (1), 7–26.
Dhaliwal, D., Trezevant, R., Wang, S., 1992. Taxes, investment related tax shields and capital structure. Journal of the American Taxation Association 14 (1), 1–21.
Dhaliwal, D., Frankel, M., Trezevant, R., 1994. The taxable and book income motivations for a LIFO layer liquidation. Journal of Accounting Research 32 (2), 278–289.
Dhaliwal, D., Erickson, M., Trezevant, R., 1999. A test of the theory of tax clienteles for dividend policies. National Tax Journal 52 (2), 179–194.
Dhaliwal, D., Trezevant, R., Wilkins, M., 2000. Tests of a deferred tax explanation of the negative association between the LIFO reserve and firm value. Contemporary Accounting Research 17, 41–59.
Dopuch, N., Pincus, M., 1988. Evidence on the choice of inventory accounting methods: LIFO versus FIFO. Journal of Accounting Research 26 (1), 28–59.
Dopuch, N., Ronen, J., 1973. The effects of alternative inventory accounting methods: LIFO versus FIFO. Journal of Accounting Research 11, 191–211.
Dunbar, A., 1996. The impact of personal credits on the progressivity of the individual income tax. Journal of the American Taxation Association 18 (1), 1–30.
Engel, E., Erickson, M., Maydew, E., 1999. Debt–equity hybrid securities. Journal of Accounting Research 37 (2), 249–274.
Erickson, M., 1998. The effect of taxes on the structure of corporate acquisitions. Journal of Accounting Research 36 (2), 279–298.
Erickson, M., 2000. Discussion of “The effect of taxes on acquisition price and transaction structure.” Journal of the American Taxation Association 22 (Suppl.), 18–33.
Erickson, M., Maydew, E., 1998. Implicit taxes in high dividend yield stocks. Accounting Review 73 (4), 435–458.
Erickson, M., Wang, S., 1999. Exploiting and sharing tax benefits: Seagram and DuPont. Journal of the American Taxation Association 21, 35–54.
Erickson, M., Wang, S., 2000. The effect of transaction structure on price: evidence from subsidiary sales. Journal of Accounting and Economics 30, 59–97.
Fama, E., 1980. Agency problems and the theory of the firm. Journal of Political Economy 88 (2), 288–307.
Fama, E., French, K., 1999. Disappearing dividends: changing firm characteristics or lower propensity to pay. Working paper, University of Chicago, Chicago, IL.
Francis, J., Reiter, S., 1987. Determinants of corporate pension funding strategy. Journal of Accounting and Economics 9 (1), 35–59.
Frankel, M., Trezevant, R., 1994. The year-end LIFO inventory purchasing decision: an empirical test. Accounting Review 69 (2), 382–398.
Gentry, W., Kemsley, D., Mayer, C., 2000. Are dividend taxes capitalized into share prices? Evidence from real estate investment trusts. Working paper, Columbia University, New York, NY.
Gergen, M., Schmitz, P., 1997. The influence of tax law on securities innovation in the United States, 1981–1997. Tax Law Review 52 (2), 119–197.
Givoly, D., Hayn, C., 1992. The valuation of the deferred tax liability: evidence from the stock market. Accounting Review 67, 394–410.
Goolsbee, A., 2000. In a world without borders: the impact of taxes on Internet commerce. Quarterly Journal of Economics 115 (2), 561–576.
Goolsbee, A., Maydew, E., 2000. Coveting thy neighbor’s manufacturing: the dilemma of state income apportionment. Journal of Public Economics 75, 125–143.
Graham, J., 1996a. Debt and the marginal tax rate. Journal of Financial Economics 41, 41–74.
Graham, J., 1996b. Proxies for the marginal tax rate. Journal of Financial Economics 42, 187–221.
Graham, J., Lemmon, M., Schallheim, J., 1998. Debt, leases, taxes, and the endogeneity of corporate tax status. Journal of Finance 53 (1), 131–162.
Gramlich, J., 1991. The effect of the alternative minimum tax book income adjustment on accrual decisions. Journal of the American Taxation Association 13 (1), 36–56.
Greene, W., 1990. Econometric Analysis. Macmillan Publishing Company, New York, NY.
Guenther, D., 1992. Taxes and organizational form: a comparison of corporations and master limited partnerships. Accounting Review 67 (1), 17–45.
Guenther, D., 1994a. Earnings management in response to corporate tax rate changes: evidence from the 1986 Tax Reform Act. Accounting Review 69 (1), 230–243.
Guenther, D., 1994b. The relation between tax rates and pretax returns: direct evidence from the 1981 and 1986 tax rate reductions. Journal of Accounting and Economics 18 (3), 379–393.
Guenther, D., 2000. Investor reaction to anticipated 1997 capital gains tax rate reduction. Working paper, University of Colorado, Boulder, CO.
Guenther, D., Sansing, R., 2000. Valuation of the firm in the presence of temporary book-tax differences: the role of deferred tax assets and liabilities. Accounting Review 75 (1), 1–12.
Guenther, D., Trombley, M., 1994. The “LIFO Reserve” and the value of the firm: theory and evidence. Contemporary Accounting Research 10, 433–452.
Guenther, D., Willenborg, M., 1999. Capital gains tax rates and the cost of capital for small business: evidence from the IPO market. Journal of Financial Economics 53, 385–408.
Guenther, D., Maydew, E., Nutter, S., 1997. Financial reporting, tax costs, and book-tax conformity. Journal of Accounting and Economics 23 (3), 225–248.
Gupta, S., 1995. Determinants of the choice between partial and comprehensive income tax allocation: the case of the domestic international sales corporation. Accounting Review 70 (3), 489–511.
Gupta, S., Mills, L., 1999. Multistate tax planning: benefits of multiple jurisdictions and tax planning assistance. Working paper, University of Arizona, Tucson, AZ.
Gupta, S., Newberry, K., 1997. Determinants of the variability in corporate effective tax rates: evidence from longitudinal data. Journal of Accounting and Public Policy 16 (1), 1–34.
Halperin, R., Srinidhi, B., 1987. The effects of the U.S. income tax regulations’ transfer pricing rules on allocative efficiency. Accounting Review 62, 686–706.
Halperin, R., Srinidhi, B., 1996. U.S. income tax transfer pricing rules for intangibles as approximations of arm’s length pricing. Accounting Review 71, 61–80.
Hand, J., 1993. Resolving LIFO uncertainty: a theoretical and empirical reexamination of 1974–75 LIFO adoptions and nonadoptions. Journal of Accounting Research 31 (1), 21–49.
Harris, D., 1993. The impact of U.S. tax law revisions on multinational corporations’ capital location and income-shifting decisions. Journal of Accounting Research 31 (Suppl.), 111–140.
Harris, D., Livingstone, J., 1999. Federal tax legislation as an implicit contracting cost benchmark: the definition of excessive executive compensation. Working paper, Syracuse University, Syracuse, NY.
Harris, D., Sansing, R., 1998. Distortions caused by the use of arm’s length transfer prices. Journal of the American Taxation Association 20 (Suppl.), 40–50.
Harris, L., Gurel, E., 1986. Price and volume effects associated with changes in the S&P 500 list: new evidence for the existence of price pressures. Journal of Finance 41, 815–829.
Harris, T., Kemsley, D., 1999. Dividend taxation in firm valuation: new evidence. Journal of Accounting Research 37 (2), 275–291.
Harris, T., Hubbard, R., Kemsley, D., 2001. The share price effects of dividend taxes and tax imputation credits. Journal of Public Economics 79, 569–596.
Hayn, C., 1989. Tax attributes as determinants of shareholder gains in corporate acquisitions. Journal of Financial Economics 23, 121–153.
Henning, S., Shaw, W., 2000. The effect of the tax deductibility of goodwill on purchase price allocations. Journal of the American Taxation Association 22 (1), 18–37.
Henning, S., Shaw, W., Stock, T., 2000. The effect of taxes on acquisition prices and transaction structure. Journal of the American Taxation Association 22 (Suppl.), 1–17.
Himmelberg, C., Hubbard, G., Palia, D., 1999. Understanding the determinants of managerial ownership and the link between ownership and performance. Journal of Financial Economics 53, 353–384.
Hines, J., 1997. Tax policy and the activities of multinational corporations. In: Auerbach, A. (Ed.), Fiscal Policy: Lessons from Economic Research. MIT Press, Cambridge, MA, pp. 401–445.
Hite, G., Long, M., 1982. Taxes and executive stock options. Journal of Accounting and Economics 4, 3–14.
Hunt, A., Moyer, S., Shevlin, T., 1996. Managing interacting accounting measures to meet multiple objectives: a study of LIFO firms. Journal of Accounting and Economics 21 (3), 339–374.
Iyer, G., Seetharaman, A., 2000. An evaluation of alternative procedures for measuring horizontal equity. Journal of the American Taxation Association 22 (1), 89–110.
Jacob, J., 1996. Taxes and transfer pricing: income shifting and the volume of intrafirm transfers. Journal of Accounting Research 34 (2), 301–312.
Jenkins, N., Pincus, M., 1998. LIFO versus FIFO: updating what we have learned. Working paper, University of Iowa, Iowa City, IA.
Jennings, R., Simko, P., Thompson, R., 1996. Does LIFO inventory accounting improve the income statement at the expense of the balance sheet? Journal of Accounting Research 34, 573–608.
Johnson, M., Nabar, S., Porter, S., 1999. Determinants of corporate response to Section 162(m). Working paper, University of Michigan, Ann Arbor, MI.
Johnson, W., Dhaliwal, D., 1988. LIFO abandonment. Journal of Accounting Research 26 (2), 236–272.
Kang, S., 1993. A conceptual framework for the stock price effects of LIFO tax benefits. Journal of Accounting Research 31 (1), 50–61.
Ke, B., 2000. Using deductible compensation to shift income between corporate and shareholder tax bases: evidence from privately-held property-liability insurance companies. Working paper, Pennsylvania State University, University Park, PA.
Ke, B., Petroni, K., Shackelford, D., 2000. The impact of state taxes on self-insurance. Journal of Accounting and Economics 30 (1), 99–122.
Keating, S., Zimmerman, J., 1999. Depreciation policy changes: tax, earnings management, and investment opportunity incentives. Journal of Accounting and Economics 28, 359–389.
Keating, S., Zimmerman, J., 2000. Asset lives for financial reporting purposes: capital budgeting, tax and discretionary factors. Working paper, University of Rochester, Rochester, NY.
Kemsley, D., 1998. The effect of taxes on production location. Journal of Accounting Research 36 (2), 921–941.
Kern, B., Morris, M., 1992. Taxes and firm size: the effect of tax legislation during the 1980s. Journal of the American Taxation Association 14 (1), 80–96.
Kinney, M., Swanson, E., 1993. The accuracy and adequacy of tax data in Compustat. Journal of the American Taxation Association 15 (2), 121–135.
Klassen, K., 1997. The impact of inside ownership concentration on the tradeoff between financial and tax reporting. Accounting Review 72 (3), 455–474.
Klassen, K., Shackelford, D., 1998. State and provincial corporate tax planning: income shifting and sales apportionment factor management. Journal of Accounting and Economics 25 (3), 385–406.
Klassen, K., Lang, M., Wolfson, M., 1993. Geographic income shifting by multinational corporations in response to tax rate changes. Journal of Accounting Research 31 (Suppl.), 141–173.
Landsman, W., Shackelford, D., 1995. The lock-in effect of capital gains taxes: evidence from the RJR Nabisco leveraged buyout. National Tax Journal 48, 245–259.
Landsman, W., Shackelford, D., Yetman, R., 2001. The determinants of capital gains tax compliance: evidence from the RJR Nabisco leveraged buyout. Journal of Public Economics, forthcoming.
Lanen, W., Thompson, R., 1988. Stock price reactions as surrogates for the net cash flow effects of corporate policy decisions. Journal of Accounting and Economics 10, 311–334.
Lang, M., Shackelford, D., 2000. Capitalization of capital gains taxes: evidence from stock price reactions to the 1997 rate reductions. Journal of Public Economics 76, 69–85.
Lang, M., Maydew, E., Shackelford, D., 2001. Bringing down the other Berlin wall: Germany’s repeal of the corporate capital gains tax. Working paper, University of North Carolina, Chapel Hill, NC.
Lightner, T., 1999. The effect of the formulary apportionment system on state-level economic development and multijurisdictional tax planning. Journal of the American Taxation Association 21 (Suppl.), 42–57.
Lopez, T., Regier, P., Lee, T., 1998. Identifying tax-induced earnings management around TRA 86 as a function of prior tax-aggressive behavior. Journal of the American Taxation Association 20 (2), 37–56.
Lynch, A., Mendenhall, R., 1997. New evidence on stock price effects associated with changes in the S&P 500 index. Journal of Business 70, 351–383.
Mackie-Mason, J., 1990. Do taxes affect corporate financing decisions? Journal of Finance 45, 1471–1493.
Maddala, G.S., 1991. A perspective on the use of limited-dependent variables models in accounting research. Accounting Review 66 (4), 788–807.
Madeo, S., Omer, T., 1994. The effect of taxes on switching stock option plans: evidence from the Tax Reform Act of 1969. Journal of the American Taxation Association 16, 24–42.
Manzon, G., 1992. Earnings management of firms subject to the alternative minimum tax. Journal of the American Taxation Association 14 (2), 88–111.
Manzon, G., 1994. The role of taxes in early debt retirement. Journal of the American Taxation Association 16 (1), 87–100.
Matsunaga, S., Shevlin, T., Shores, D., 1992. Disqualifying dispositions of incentive stock options: tax benefits versus financial reporting costs. Journal of Accounting Research 30 (Suppl.), 37–76.
Maydew, E., 1997. Tax-induced earnings management by firms with net operating losses. Journal of Accounting Research 35 (1), 83–96.
Maydew, E., Schipper, K., Vincent, L., 1999. The impact of taxes on the choice of divestiture method. Journal of Accounting and Economics 28, 117–150.
Mazur, M., Scholes, M., Wolfson, M., 1986. Implicit taxes and effective tax burdens. Working paper, Stanford University, Stanford, CA.
Mikhail, M., 1999. Coordination of earnings, regulatory capital and taxes in private and public companies. Working paper, MIT, Cambridge, MA.
Miller, G., Skinner, D., 1998. Determinants of the valuation allowance for deferred tax assets under SFAS No. 109. Accounting Review 73 (2), 213–233.
Miller, M., 1977. Debt and taxes. Journal of Finance 32, 261–276.
Miller, M., Scholes, M., 1978. Dividends and taxes. Journal of Financial Economics 6, 333–364.
Mills, L., 1998. Book–tax differences and Internal Revenue Service adjustments. Journal of Accounting Research 36 (2), 343–356.
Mills, L., Newberry, K., 2000. Cross-jurisdictional income shifting by foreign-controlled U.S. corporations. Working paper, University of Arizona, Tucson, AZ.
Mills, L., Newberry, K., Novack, G., 2000. Reducing classification errors in Compustat net operating loss data: insights from U.S. tax return data. Working paper, University of Arizona, Tucson, AZ.
Mittelstaedt, F., 1989. An empirical analysis of factors underlying the decision to remove excess assets from overfunded pension plans. Journal of Accounting and Economics 11 (4), 369–418.
Modigliani, F., Miller, M., 1958. The cost of capital, corporation finance, and the theory of investment. American Economic Review 48, 261–297.
Modigliani, F., Miller, M., 1963. Corporate income taxes and the cost of capital: a correction. American Economic Review 53, 433–443.
Myers, M., 2000. The impact of taxes on corporate defined benefit plan asset allocation. Working paper, University of Chicago, Chicago, IL.
Myers, S., 1977. Determinants of corporate borrowing. Journal of Financial Economics 5, 147–175.
Myers, S., 1984. The capital structure puzzle. Journal of Finance 39 (3), 575–592.
Myers, S., Majluf, N., 1984. Corporate financing and investment decisions when firms have information that investors do not have. Journal of Financial Economics 13, 187–221.
Newberry, K., 1998. Foreign tax credit limitations and capital structure decisions. Journal of Accounting Research 36 (1), 157–166.
Newberry, K., Dhaliwal, D., 2000. Cross-jurisdictional income shifting by U.S. multinationals: evidence from international bond offerings. Working paper, University of Arizona, Tucson, AZ.
Ohlson, J., 1995. Earnings, book values and dividends in security valuation. Contemporary Accounting Research 11, 661–687.
Olhoff, S., 1999. The tax avoidance activities of U.S. multinational corporations. Working paper, University of Iowa, Iowa City, IA.
Omer, T., Molloy, K., Ziebart, D., 1991. Using financial statement information in the measurement of effective corporate tax rates. Journal of the American Taxation Association 13 (1), 57–72.
Omer, T., Plesko, G., Shelley, M., 2000. The influence of tax costs on organizational choice in the natural resource industry. Journal of the American Taxation Association 22, 38–55.
Palepu, K., Bernard, V., Healy, P., 1996. Business analysis and valuation. Southwestern Publishing, Cincinnati, OH.
Petroni, K., Shackelford, D., 1995. Taxation, regulation, and the organizational structure of property-casualty insurers. Journal of Accounting and Economics 20 (3), 229–253.
Petroni, K., Shackelford, D., 1999. Managing annual accounting reports to avoid state taxes: an analysis of property-casualty insurers. Accounting Review 74 (3), 371–393.
Phillips, J., 1999. Corporate tax planning effectiveness: the role of incentives. Working paper, University of Connecticut, Storrs, CT.
Plesko, G., 1999. An evaluation of alternative measures of corporate tax rates. Working paper, MIT, Cambridge, MA.
Porcano, T., 1986. Corporate tax rates: progressive, proportional, or regressive. Journal of the American Taxation Association 8 (1), 17–31.
Poterba, J., Weisbenner, S., 2001. Capital gains tax rules, tax loss trading, and turn-of-the-year returns. Journal of Finance 56, 353–368.
Reese, W., 1998. Capital gains taxation and stock market activity: evidence from IPOs. Journal of Finance 53, 1799–1820.
Ricks, W., 1986. Firm size effects and the association between excess returns and LIFO tax savings. Journal of Accounting Research 24 (1), 206–216.
Sansing, R., 1998. Valuing the deferred tax liability. Journal of Accounting Research 36 (2).
Sansing, R., 1999. Relationship-specific investments and the transfer pricing paradox. Review of Accounting Studies 4, 119–134.
Sansing, R., 2000. Joint ventures between non-profit and for-profit organizations. Journal of the American Taxation Association 22 (Suppl.), 76–88.
Schipper, K., Smith, A., 1991. Effects of management buyouts on corporate interest and depreciation tax deductions. Journal of Law and Economics 34, 295–341.
Scholes, M., Wolfson, M., 1987. Taxes and organization theory. Working paper, Stanford University, Stanford, CA.
Scholes, M., Wolfson, M., 1992. Taxes and Business Strategy: A Planning Approach. Prentice-Hall, Inc., Englewood Cliffs, NJ.
Scholes, M., Wilson, P., Wolfson, M., 1990. Tax planning, regulatory capital planning, and financial reporting strategy for commercial banks. Review of Financial Studies 3, 625–650.
Scholes, M., Wilson, P., Wolfson, M., 1992. Firms’ responses to anticipated reductions in tax rates: the Tax Reform Act of 1986. Journal of Accounting Research 30 (Suppl.), 161–191.
Scholes, M., Wolfson, M., Erickson, M., Maydew, E., Shevlin, T., 2001. Taxes and Business Strategy: A Planning Approach, 2nd Edition. Prentice-Hall, Inc., Upper Saddle River, NJ.
Scott, J., 1977. Bankruptcy, secured debt, and optimal capital structure. Journal of Finance 32, 1–19.
Seetharaman, A., Iyer, G., 1995. A comparison of alternative measures of tax progressivity: the case of the child and dependent care credit. Journal of the American Taxation Association 17 (1), 42–70.
Seida, J., Wempe, W., 2000. Do capital gain tax rate increases affect individual investors’ trading decisions? Journal of Accounting and Economics 30, 33–57.
Shackelford, D., 1991. The market for tax benefits: evidence from leveraged ESOPs. Journal of Accounting and Economics 14 (2), 117–145.
Shackelford, D., 1993. Discussion of ‘The impact of U.S. tax law revision on multinational corporations’ capital location and income shifting decisions’ and ‘Geographic income shifting by multinational corporations in response to tax rate changes’. Journal of Accounting Research 31 (Suppl.), 174–182.
Shackelford, D., 2000. Stock market reaction to capital gains tax changes: empirical evidence from the 1997 and 1998 tax acts. In: Poterba, J. (Ed.), Tax Policy and the Economy, Vol. 14. National Bureau of Economic Research, MIT Press, Cambridge, MA, pp. 67–92.
Shackelford, D., Slemrod, J., 1998. The revenue consequences of using formula apportionment to calculate U.S. and foreign-source income: a firm-level analysis. International Tax and Public Finance 5 (1), 41–59.
Shackelford, D., Verrecchia, R., 1999. Intertemporal tax discontinuities. NBER working paper 7451, Cambridge, MA.
Shelley, M., Omer, T., Atwood, T., 1998. Capital restructuring and accounting compliance costs: the case of publicly traded partnerships. Journal of Accounting Research 36 (2), 365–378.
Shevlin, T., 1987. Taxes and off-balance-sheet financing: research and development limited partnerships. Accounting Review 62 (3), 480–509.
Shevlin, T., 1990. Estimating corporate marginal tax rates with asymmetric tax treatment of gains and losses. Journal of the American Taxation Association 11 (1), 51–67.
Shevlin, T., 1999. A critique of Plesko’s “An evaluation of alternative measures of corporate tax rates.” Working paper, University of Washington, Seattle, WA.
Shevlin, T., Porter, S., 1992. The corporate tax comeback in 1987: some further evidence. Journal of the American Taxation Association 14 (1), 58–79.
Shleifer, A., 1986. Do demand curves for stocks slope down? Journal of Finance 41, 579–590.
Single, L., 1999. Tax holidays and firms’ subsidiary location decisions. Journal of the American Taxation Association 21 (2), 21–34.
Smith, J., 1997. The effect of the Tax Reform Act of 1986 on the capital structure of foreign subsidiaries. Journal of the American Taxation Association 19 (2), 1–18.
Stickney, C., Weil, R., Wolfson, M., 1983. Income taxes and tax-transfer leases: General Electric’s accounting for a Molotov cocktail. Accounting Review 58, 439–459.
Sunder, S., 1973. Relationship between accounting changes and stock prices: problems of measurement and some empirical evidence. Journal of Accounting Research 11 (Suppl.), 1–45.
Sunder, S., 1975. Stock price and risk related to accounting changes in inventory valuation. Accounting Review 50, 305–316.
Sweeney, A., 1994. Debt-covenant violations and managers’ accounting responses. Journal of Accounting and Economics 17 (3), 281–308.
Tepper, I., 1981. Taxation and corporate pension policy. Journal of Finance 36, 1–14.
Thomas, J., 1988. Corporate taxes and defined benefit pension plans. Journal of Accounting and Economics 10 (3), 199–237.
Thomas, J., 1989. Why do firms terminate their overfunded pension plans? Journal of Accounting and Economics 11 (4), 361–398.
Trezevant, R., 1992. Debt financing and tax status: tests of the substitution effect and the tax exhaustion hypothesis using firms’ responses to the Economic Recovery Tax Act of 1981. Journal of Finance 47, 1557–1568.
Wang, S., 1991. The relation between firm size and effective tax rates: a test of firms’ political success. Accounting Review 66 (1), 158–169.
Wang, S., 1994. The relationship between financial reporting practices and the 1986 alternative minimum tax. Accounting Review 69 (3), 495–506.
Weaver, C., 2000. Divestiture structure and tax attributes: evidence from the Omnibus Budget Reconciliation Act of 1993. Journal of the American Taxation Association 22 (Suppl.), 54–71.
Wilkie, P., Limberg, S., 1990. The relationship between firm size and effective tax rate: a reconciliation of Zimmerman (1983) and Porcano (1986). Journal of the American Taxation Association 11 (1), 76–91.
Wilkie, P., Limberg, S., 1993. Measuring explicit tax (dis)advantage for corporate taxpayers: an alternative to average effective tax rates. Journal of the American Taxation Association 15 (1), 46–71.
Wilson, P., 1993. The role of taxes in location and sourcing decisions. In: Giovannini, A., Hubbard, G., Slemrod, J. (Eds.), Studies in International Taxation. University of Chicago Press, Chicago, pp. 195–231.
Wolfson, M., 1985. Empirical evidence of incentive problems and their mitigation in oil and gas tax shelter programs. In: Pratt, J., Zeckhauser, R. (Eds.), Principals and Agents: The Structure of Business. HBS Press, Boston, pp. 101–125.
Yetman, R., 2000. Tax planning by not-for-profit organizations. Working paper, University of Iowa, Iowa City, IA.
Zimmerman, J., 1983. Taxes and firm size. Journal of Accounting and Economics 5, 119–149.
Hindsight is 2020? Lessons in Global Health Governance One Year into the Pandemic

Published as: Hassan, I., Mukaigawara, M., King, L., Fernandes, G. & Sridhar, D., 2021. Hindsight is 2020? Lessons in global health governance one year into the pandemic. Nature Medicine 27 (3), 396–400. https://doi.org/10.1038/s41591-021-01272-2

Dr. Ines Hassan\textsuperscript{1}, Dr. Mitsuru Mukaigawara\textsuperscript{2,3}, Ms. Lois King\textsuperscript{1}, Dr. Genevie Fernandes\textsuperscript{1}, Prof. Devi Sridhar\textsuperscript{1}. email@example.com

1. Global Health Governance Programme, Edinburgh Medical School, University of Edinburgh, Edinburgh, UK
2. Harvard Kennedy School, Cambridge, Massachusetts
3. Division of Infectious Diseases, Department of Medicine, Okinawa Chubu Hospital, Okinawa, Japan

**Abstract**

Fourteen months into the SARS-CoV-2 pandemic, we identify key lessons in the global and national responses to the pandemic. The World Health Organization has played a pivotal technical, normative and coordinating role, but has been constrained by its lack of authority over sovereign member states. Many governments also mistakenly attempted to manage COVID-19 like influenza, resulting in repeated lockdowns, high excess morbidity and mortality, and poor economic recovery. Despite the incredible speed of development and approval of effective and safe vaccines, the emergence of new SARS-CoV-2 variants means that, for several years, all countries will rely on a globally coordinated public health effort to defeat this pandemic.

**Introduction**

It has now been just over one year since the first two cases of coronavirus disease 2019 (COVID-19) in the United Kingdom were confirmed in two Chinese nationals staying at a hotel in York, England on 31st January 2020\textsuperscript{1}. On 26th January 2021, the death toll from COVID-19 in the United Kingdom surpassed 100,000 and there were reportedly over 30,000 daily cases of the disease, with an estimated 1 in 10 people going on to experience the enduring effects of “long COVID”\textsuperscript{2}. The global death toll has just reached 2.1 million\textsuperscript{3}. However, around the world, a varied picture has emerged\textsuperscript{3–5}. Countries like China, Taiwan, New Zealand and Australia have managed to eliminate, or come close to eliminating, their epidemics caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)\textsuperscript{3–5}. Others, such as Hong Kong, South Korea, Singapore, Finland and Norway, have managed to control it at low levels\textsuperscript{3}.
Sadly, both the US and the UK are still battling high numbers of daily cases, tens of thousands of deaths, an exhausted health workforce and overstretched health services\textsuperscript{3,6,7}. As the virus proliferated across the globe, it also revealed critical vulnerabilities in our global and national health governance systems that have resulted in these inadequate outbreak responses\textsuperscript{8,9}. In this paper, we explore what we now know about the virus and identify key lessons learned about WHO and national governance, and how these have shaped pandemic preparedness and response.

**What do we know scientifically?**

Since January 2020, a massive surge of research into COVID-19 has enabled the scientific and medical community to better understand how to manage and ultimately eliminate the virus through pharmaceutical and public health interventions\textsuperscript{10}. Some of the key findings a year on are that transmission occurs through droplets and aerosols spread through breathing, coughing, speaking and sneezing\textsuperscript{11}. Stopping the spread of COVID-19 requires people to avoid mixing, through restrictions on social and economic life\textsuperscript{12,13}. We have learned that COVID-19 causes more severe symptoms and death in those who are older\textsuperscript{14}, have underlying health issues (such as cardiovascular disease and obesity) or are immunocompromised (for example, owing to malignancy or diabetes mellitus)\textsuperscript{15}. We have learned that certain genetic markers can identify those more susceptible to respiratory failure\textsuperscript{16}. We also have been learning about the long-term effects of COVID-19, so-called “long COVID”, and the morbidity attached to having this virus\textsuperscript{17}. Even after recovery from the acute illness caused by COVID-19, some patients continue to experience symptoms such as dyspnea and fatigue for weeks or months\textsuperscript{17}. Also, the emergence of hyperinflammatory symptoms in children (multisystem inflammatory syndrome, or MIS-C) was reported to coincide with regional COVID-19 epidemics\textsuperscript{18}. We have learned that immunity lasts at least 8 months\textsuperscript{19}. We also have three licensed vaccines in the UK, which are already being rolled out and are effective at reducing severe COVID-19, although we don’t know how long vaccine-induced immunity will last or whether the vaccines stop people from being infectious\textsuperscript{20}. We have learned that the virus can mutate into variants that can be more transmissible, more severe in health outcomes and possibly able to evade our natural or vaccine-induced immunity to the original SARS-CoV-2, requiring governments to plan for a cat-and-mouse game between vaccines and variants\textsuperscript{21}.

**Role of WHO**

This pandemic has highlighted the interdependence of countries like never before and, most importantly, the need for a globally coordinated governance response\textsuperscript{22}. As countries attempted to respond to their COVID-19 outbreaks, the World Health Organization (WHO) was thrust into the spotlight as many countries looked to it for leadership and guidance\textsuperscript{23}. In the process, it has faced inevitable criticism from various stakeholders. This criticism has unveiled, not for the first time, some misinterpretation of WHO’s mandate, its authority (or lack thereof) over its Member States, and a number of organisational and legal-instrument constraints that have impacted pandemic preparedness and response\textsuperscript{8,24–26}. WHO has three key roles in addressing health emergencies: coordination, normative and technical steering\textsuperscript{27}.
As the United Nations’ only organization focused on health, it has a mandate to be “the directing and coordinating authority in international health work”\textsuperscript{27}. During the COVID-19 outbreak, it convened the 73rd World Health Assembly, which adopted a resolution to bring the world together to fight the pandemic and called for equitable access to all essential health products, such as vaccines, tests and treatments, through the Access to COVID-19 Tools (ACT) Accelerator\textsuperscript{28}. The resolution also enabled WHO to assemble, with other global actors, the COVAX Facility as the vaccine pillar of the ACT Accelerator, a mechanism designed to ensure timely access to a diverse set of vaccines for at least 20% of countries’ populations, and the COVID-19 Technology Access Pool (C-TAP), a platform to share patent-protected trial data on emerging treatments\textsuperscript{29}. There has been some success: to date, two billion doses of approved and pipeline vaccines have been pledged by wealthy nations, the European Commission and the Bill and Melinda Gates Foundation, among others\textsuperscript{30}. However, as of January 2021, while vaccine roll-out is fully underway in many wealthy nations like the UK and the US, no COVID-19 vaccines have been administered on the continent of Africa or in other low- and middle-income countries (LMICs)\textsuperscript{31}. This highlights the limited accountability of COVAX participants and perhaps inefficient incentives for wealthy nations, which have in some cases secured more doses than required to protect their populations\textsuperscript{32–34}. Furthermore, by January 2021, nine months after its launch, C-TAP had attracted zero contributions\textsuperscript{33}.

Through the International Health Regulations (IHR) (2005), WHO also has a “central and historic responsibility” to manage the “global regime for the control of the international spread of disease”\textsuperscript{35}. In its normative role, it has the “power to shape or influence global rules and norms and monitor compliance”\textsuperscript{36}. It has arguably fulfilled a large part of this role by providing State-endorsed guidance and by setting norms and standards on outbreak preparedness and response, which include making use of measures such as border controls, finding cases, prioritising testing, contact tracing, and isolating carriers of the virus and their contacts, among other interventions\textsuperscript{35}. Critically, this guidance ensured that China reported the presence of a novel pathogen on 30th December 2019, and enabled WHO to declare a Public Health Emergency of International Concern (PHEIC), the highest level of alert, one month later on 30th January 2020, notably 111 days before the UN Security Council adopted a resolution stating that the COVID-19 pandemic threatened international peace and security\textsuperscript{29,37}. Four days later, it published a global strategy to tackle the pandemic, much of which remains valid today\textsuperscript{29}. Moreover, within its technical capacity, it was able to send an international team on mission to China in February 2020 to collect key data on how the virus was spreading and the emerging disease profile, and to understand lessons learned from policy responses in China up until that point\textsuperscript{38}, invaluable knowledge that was shared with the rest of the world in the same month.
Furthermore, through its technical role, WHO has provided daily press briefings on a variety of scientific and policy topics, including up-to-date epidemiology data, the nature of SARS-CoV-2 transmission and appropriate non-pharmaceutical intervention guidance, since the PHEIC was declared\textsuperscript{39}. However, there was some criticism that the PHEIC should have been called earlier, and that WHO’s diplomatic but perhaps opaque approach in working with China to investigate the source of the outbreak and rapidly share information demonstrated a lack of authority over Member States\textsuperscript{8}. This was further publicised as a result of the Trump administration’s threat to withdraw from WHO\textsuperscript{40}. However, the IHR only affords WHO normative power, a “soft” power that relies on Member States’ cooperation and cannot be legally enforced\textsuperscript{36}. Throughout the pandemic, WHO has struggled with country cooperation, largely because it does not have an official operational role in outbreak response\textsuperscript{41}. This is also demonstrated in the failure of notable countries such as the UK and the US to implement some of WHO’s key public health guidance, such as ‘testing, testing, testing’, the provision of personal protective equipment and the importance of ramping up hospital capacity\textsuperscript{42}.

Furthermore, although WHO’s technical capabilities during the pandemic are mostly to be lauded, it was slow to offer some key recommendations, namely on the potential risk of airborne transmission of SARS-CoV-2 under special circumstances (enclosed spaces, prolonged exposure and inadequate ventilation\textsuperscript{43}), the important role that masks\textsuperscript{44} have in preventing transmission, and the use of border controls. History has shown us that the risk of doing nothing while waiting for perfect data outweighs the risk of acting quickly with imperfect data. As Dr Mike Ryan, the executive director of WHO’s Health Emergencies Programme, has said, it is pertinent that everybody acts fast during an infectious disease outbreak and that we do not wait for “perfect data”\textsuperscript{45}. Another technical area where WHO fell short is that its preparedness metrics\textsuperscript{46} seemingly did not account for variations in country leadership and political will, which have clearly had a big impact on the way countries have responded to the pandemic. It also did not sufficiently focus on policies to minimise the impact that outbreaks have in widening social, racial and health inequalities\textsuperscript{35}. One major factor underlying all of these coordination, normative and technical shortcomings is the limited funding available for WHO to operate optimally\textsuperscript{47}. Critically, it has been suggested that the health and economic fallout of this unprecedented pandemic may spur new opportunities for more stable funding that might result in transformational change\textsuperscript{48}.

**National governance: best practice**

By the end of March 2020, almost all countries around the world had introduced nationwide public health measures aimed at containing the spread of SARS-CoV-2\textsuperscript{49}. However, the measures used and, subsequently, the health and economic outcomes of the response varied drastically\textsuperscript{50}.
This variation in response seems to reflect past experience in managing infectious disease outbreaks, societal values, long-term investment in healthcare and, critically, the political will of the government in power.

**Overall strategic differences**

In Europe and the US, a combination of mitigation and suppression strategies has largely been used at various points in time. This is despite WHO advising countries to follow the model of elimination from February 2020\textsuperscript{51}. The UK’s initial strategy was based largely on a response to pandemic flu, and government communications made several mentions of mild flu- and cold-like symptoms as the result of COVID-19 for the majority of the population\textsuperscript{52}. Elimination of the virus was dismissed as an impossible notion; the best course of action, it was argued, was to shield the vulnerable as the virus made its way through the population, so as to avoid overwhelming health services while achieving so-called ‘herd immunity’\textsuperscript{53}. While the successful use of measures such as social distancing and home isolation in China was noted by government advisors, these measures were perceived as merely postponing the inevitable\textsuperscript{54}. This over-reliance on the flu model painted an inaccurate picture of how COVID-19 is transmitted: as COVID-19 is more contagious than the flu, it leads to super-spreading events in crowded places. The initial approach evolved into a suppression strategy, in which targeted health interventions have been used to reduce COVID-19 cases to “acceptable” levels, for example by implementing mass testing, lockdowns and the use of masks in indoor public spaces\textsuperscript{55}.

In contrast, in New Zealand, Taiwan, Vietnam, South Korea, Australia and China, effort was made to rapidly end community transmission of the virus using an elimination strategy. As Jacinda Ardern, the prime minister of New Zealand, recently said, even if elimination is not achieved, the approach “will result in a reduction of lives lost in the process”\textsuperscript{56}. As the world has witnessed a close return to normalcy (at least within national borders) in countries that sought an elimination approach, there appears to be greater enthusiasm to pursue this approach among academics and politicians\textsuperscript{4}. In contrast, those that did not have succumbed to repeated national lockdowns throughout the year, high mortality rates, long-term health consequences in survivors (up to 10% in the UK), indirect health impacts, long-term economic loss, and an increase in social and health inequalities\textsuperscript{57}.

One factor that has influenced the strategies employed by governments is the relatively low case fatality rate (CFR) of COVID-19, at around 2%\textsuperscript{58}. The CFRs of severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) are much higher, at 9–10% and 36%, respectively\textsuperscript{58}. Based on past experiences, most countries would have adopted an elimination strategy if the CFR for COVID-19 were higher, because it would have been impossible to let SARS-CoV-2 spread within communities\textsuperscript{51}. However, CFR is a deceptive metric: because SARS-CoV-2 spreads far more easily between people, it produces many more cases and therefore a far greater absolute death toll. Hospitalisation rates are a better measure of COVID-19 prevalence because they reveal the level of community spread and also give insight into hospital capacity\textsuperscript{59}.
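A back-of-the-envelope comparison makes this point concrete. The CFRs are those quoted above; the cumulative case counts (roughly 100 million COVID-19 cases worldwide by late January 2021, versus roughly 8,000 SARS cases in total in 2003) are approximate figures added here purely for illustration:

\[
\text{deaths} \approx \text{CFR} \times \text{cases}: \qquad 0.02 \times 100{,}000{,}000 \approx 2{,}000{,}000 \ \text{(COVID-19)} \quad \text{versus} \quad 0.10 \times 8{,}000 \approx 800 \ \text{(SARS)}.
\]

Despite a CFR roughly five times lower, SARS-CoV-2 has caused on the order of 2,500 times more deaths than SARS, consistent with the global death toll of 2.1 million noted above: transmissibility, rather than lethality per case, dominates the total burden.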
**Public health measures**

We also now know that the effective use of test, trace and isolate (TTI) programmes (in which infected people and their contacts are rapidly identified and given financial support to isolate during the incubation period of the virus), along with border controls and, now, an efficient and equitable roll-out of emerging vaccines, is key to controlling this virus. In East Asian and Pacific countries, TTI, the use of strict border measures and good voluntary public health guidance were central to elimination strategies, allowing these countries to rapidly manage local flare-ups. It also resulted in relatively few lockdowns\textsuperscript{50}. In Hong Kong, uptake of testing was encouraged by paying people to get tested. Germany also had a relatively low CFR compared with its European counterparts like Italy and the UK, in part because of its early and broad testing strategy\textsuperscript{5}.

The development of new vaccines has provided governments with an additional tool to protect their populations. Governments in high-income countries, in particular, have embarked on mass efforts to roll out the vaccine, starting with their most vulnerable groups. By mid-January 2021, Israel had administered the first dose of the vaccine to over 25% of its population, including 75% of those over the age of 60 years. There are early indications that this is having a positive impact, with a reduction from 30% to 7% in the proportion of critically ill patients in this age bracket two weeks post-vaccination\textsuperscript{60}. However, questions about the protection provided until the second dose is administered remain. Additionally, inequitable access, both globally and nationally, is an issue; in Israel, cities of lower socio-economic status had administered fewer vaccinations than their wealthier counterparts\textsuperscript{61}. What is clear is that a fast roll-out is essential to stopping community transmission, ultimately reducing the likelihood that new variants of the SARS-CoV-2 virus will emerge.

**Social inequalities**

The disproportionate impact that this pandemic has had on vulnerable populations and ethnic minority groups around the world must also not be overlooked\textsuperscript{62}. This is typically a result of riskier work and living conditions, limited access to protective wear (and, in some countries, treatments) and the limited availability of financial protection to ensure that key public health measures such as isolation and distancing can be implemented\textsuperscript{62}. Governments have learned, often as a result of public outcry, that identifying these vulnerable groups quickly and implementing tailored interventions to reduce the risk of infection in these groups is critical. For example, in Hong Kong people were paid to encourage testing, and in the UK mass testing was eventually introduced into care homes to try to identify and isolate cases quickly\textsuperscript{63,64}. Other key lessons are that elimination is achievable if swift political commitment is made early in an outbreak, and that accepting short-term stringent public health measures reduces viral community transmission, results in fewer COVID-19 cases and minimises economic loss\textsuperscript{4}. At the global level, however, we should also recognise that not every country is able to implement the same public health measures. Countries like Japan could not legally enforce strict containment measures because of their infringement on human rights\textsuperscript{65}.
Furthermore, political disorder and the aggressive use of force by the police in Nigeria intensified when strict public health interventions were enforced to limit protests\textsuperscript{66}.

**Leadership and communication**

Clear and evidence-based communication during an outbreak is critical to build trust with the public and to ensure adherence to public health measures and successful containment. Most importantly, a government’s definition of a successful outcome, and the strategy employed to achieve it, need to be well defined\textsuperscript{67}. Some leaders seem to have got the balance right, for example in New Zealand, South Korea, Taiwan and Senegal, while others, for example in the US and the UK, have struggled. As the pandemic has unfolded, knowledge about the virus, how to manage it and the interventions available to us has rapidly evolved. Some governments have been good at communicating uncertainty and necessary changes in strategy when better options have become clear. For instance, in New Zealand, after the PHEIC was declared by WHO, the government communicated that an elimination strategy was being adopted\textsuperscript{68}. In the US and the UK, it has at times been unclear what success would look like, how it is measured, and what approach is being adopted: exclusion, elimination, suppression or containment of the virus\textsuperscript{4}. In the US, the Trump administration regularly ignored scientific evidence and the federal government “abandoned disease control to the states”\textsuperscript{69}, resulting in a massive failure in handling COVID-19. In the British context, questions about changes in strategy were often met with protestations of having “world-beating” approaches, a symptom of British exceptionalism that underestimated the virus in the first place\textsuperscript{70}. Moreover, some government ministers in the UK recently announced that NHS hospitals were full because the public was not adhering to public health measures\textsuperscript{71}. Shifting responsibility to individuals alone through such disparaging messaging can lead to a lack of compliance with government rules.

**Economy v. health**

Throughout the pandemic, a false dichotomous argument pitting public health against economic success has emerged\textsuperscript{72}. In fact, one common argument against stringent public health measures like lockdown was the potential damage they inflict on the national economy. It is incorrect to attribute the loss of economic growth and jobs primarily to social-distancing measures rather than to the virus itself\textsuperscript{72}. Forgoing strict public health measures in order to protect the national economy during the pandemic is a short-sighted policy; in the long run, a brief closure and temporary subsidisation have proven to be more cost-beneficial than keeping the economy open during the pandemic. Although New Zealand experienced an annual contraction in real gross domestic product (GDP) of 6.1%, this is much smaller than in other comparable countries, and Taiwan sustained a net GDP change of 0%\textsuperscript{73}. Furthermore, economists estimate that the economic cost of the pandemic in the United States is 16 trillion USD\textsuperscript{74}. Effective public health measures, if implemented, can reduce these financial costs significantly. Contrary to the false yet common dichotomy, protecting the health of the people is equivalent to protecting the wealth of the people.
Similar analyses have shown that this was also the case in the 1918 influenza pandemic\textsuperscript{75}.

**Conclusion:** Looking ahead to year two of the pandemic, our collective progress will depend on a coordinated global effort that leaves no one behind. Although the mass vaccination roll-out will dominate COVID-19 policy this year, the emergence of new SARS-CoV-2 variants that may escape the body's neutralizing antibody response and continued inequitable access to vaccines indicate that the COVID-19 pandemic will continue. This may well turn out to be the year of variants and vaccines. However, we are now armed with the knowledge of what works, what does not, and the range of interventions needed to keep case numbers low. Let's fix our fragmented global health system and follow the elimination playbook together: because if we've learned anything this past year, it's that globally, we are only as strong as our weakest link.

**References:**

1. Coronavirus: Two cases confirmed in UK. *BBC News* (2020).
2. Official UK Coronavirus Dashboard. *GOV.UK* https://coronavirus.data.gov.uk/details/deaths.
3. WHO Coronavirus Disease (COVID-19) Dashboard. *World Health Organization* https://covid19.who.int.
4. Baker, M. G., Wilson, N. & Blakely, T. Elimination could be the optimal response strategy for covid-19 and other emerging pandemic diseases. *BMJ* 371, m4907 (2020).
5. Lu, G. *et al.* COVID-19 in Germany and China: mitigation versus elimination strategy. *Glob. Health Action* 14, 1875601 (2021).
6. UK Data on Hospital Capacity and Occupancy. *Office for Statistics Regulation* https://osr.statisticsauthority.gov.uk/news/uk-data-on-hospital-capacity-and-occupancy/ (2020).
7. Department of Health & Human Services. COVID-19 Reported Patient Impact and Hospital Capacity by Facility. *HealthData.gov* https://healthdata.gov/dataset/covid-19-reported-patient-impact-and-hospital-capacity-facility (2021).
8. Independent Panel for Pandemic Preparedness and Response. Second Report on Progress. (2021).
9. Legge, D. G. COVID-19 response exposes deep flaws in global health governance. *Glob. Soc. Policy* 20, 383–387 (2020).
10. Kupferschmidt, K. A divisive disease. *Science* 370, 1395–1397 (2020).
11. Ma, J. *et al.* COVID-19 patients in earlier stages exhaled millions of SARS-CoV-2 per hour. *Clin. Infect. Dis. Off. Publ. Infect. Dis. Soc. Am.* (2020) doi:10.1093/cid/ciaa1283.
12. How to Protect Yourself & Others. *Centers for Disease Control and Prevention* https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/prevention.html (2020).
13. Transmission of SARS-CoV-2: implications for infection prevention precautions. *World Health Organization* https://www.who.int/news-room/commentaries/detail/transmission-of-sars-cov-2-implications-for-infection-prevention-precautions (2020).
14. Williamson, E. J. *et al.* Factors associated with COVID-19-related death using OpenSAFELY. *Nature* 584, 430–436 (2020).
15. Berlin, D. A., Gulick, R. M. & Martinez, F. J. Severe Covid-19. *N. Engl. J. Med.* 383, 2451–2460 (2020).
16. Genomewide Association Study of Severe Covid-19 with Respiratory Failure - PubMed. https://pubmed.ncbi.nlm.nih.gov/32558485/.
17. CDC. Long-Term Effects of COVID-19 | CDC. *Centers for Disease Control and Prevention* https://www.cdc.gov/coronavirus/2019-ncov/long-term-effects.html (2020).
18. Feldstein, L. R. *et al.* Multisystem Inflammatory Syndrome in U.S. Children and Adolescents. *N. Engl. J. Med.* 383, 334–346 (2020).
19. Dan, J. M.
*et al.* Immunological memory to SARS-CoV-2 assessed for up to 8 months after infection. *Science* (2021) doi:10.1126/science.abf4063. 20. Callaway, E. Could new COVID variants undermine vaccines? Labs scramble to find out. *Nature* 589, 177–178 (2021). 21. Joseph, A. Scientists monitor a coronavirus mutation that could affect vaccine strength. *STAT* https://www.statnews.com/2021/01/07/coronavirus-mutation-vaccine-strength/ (2021). 22. Kayo, T. Global Solidarity is Necessary to End the COVID-19 Pandemic. *Asia-Pac. Rev.* 27, 46–56 (2020). 23. Kuznetsova, L. COVID-19: The World Community Expects the World Health Organization to Play a Stronger Leadership and Coordination Role in Pandemics Control. *Front. Public Health* 8, (2020). 24. Center for Global Health Science & Security, Georgetown University. Governance Preparedness: Initial Lessons from COVID-19. (2020). 25. Richardson, J., Wildman, J. & Robertson, I. K. A critique of the World Health Organisation’s evaluation of health system performance. *Health Econ.* 12, 355–366 (2003). 26. Gilsinan, K. How China Deceived the WHO. *The Atlantic* https://www.theatlantic.com/politics/archive/2020/04/world-health-organization-blame-pandemic-coronavirus/609820/ (2020). 27. World Health Organization. Constitution of the World Health Organization. (1946). 28. The Access to COVID-19 Tools (ACT) Accelerator. *World Health Organization* https://www.who.int/initiatives/act-accelerator. 29. World Health Organization. Listings of WHO’s response to COVID-19. *World Health Organization* https://www.who.int/news/item/29-06-2020-covidtimeline (2020). 30. COVAX Announces Additional Deals To Access Promising COVID-19 Vaccine Candidates; Plans Global Rollout Starting Q1 2021. *World Health Organization* 31. Africa’s long wait for the Covid-19 vaccine. *BBC News* (2021). 32. OXFAM International. Campaigners warn that 9 out of 10 people in poor countries are set to miss out on COVID-19 vaccine next year | Oxfam International. *OXFAM International* https://www.oxfam.org/en/press-releases/campaigners-warn-9-out-10-people-poor-countries-are-set-miss-out-covid-19-vaccine (2020). 33. Safi, M. WHO platform for pharmaceutical firms unused since pandemic began. *the Guardian* http://www.theguardian.com/world/2021/jan/22/who-platform-for-pharmaceutical-firms-unused-since-pandemic-began (2021). 34. Our World in Data. Coronavirus (COVID-19) Vaccinations - Statistics and Research - Our World in Data. *Our World in Data* https://ourworldindata.org/covid-vaccinations. 35. World Health Organization. *International Health Regulations*. (World Health Organization, 2008). 36. Gostin, L. O., Sridhar, D. & Hougendobler, D. The normative authority of the World Health Organization. *Public Health* 129, 854–863 (2015). 37. Security Council Underlines Support for Secretary-General’s Global Ceasefire Appeal, Fight against COVID-19, Unanimously Adopting Resolution 2532 (2020) | Meetings Coverage and Press Releases. *United Nations* https://www.un.org/press/en/2020/sc14238.doc.htm. 38. World Health Organization. Report of the WHO-China Joint Mission on Coronavirus Disease 2019 (COVID-19). (2020). 39. Press briefings. *World Health Organization* https://www.who.int/emergencies/diseases/novel-coronavirus-2019/media-resources/press-briefings. 40. Remarks by President Trump on Actions Against China, May 29, 2020 | US-China Institute. USC US-China Institute https://china.usc.edu/remarks-president-trump-actions-against-china-may-29-2020. 41. Wenham, C. 
What we have learnt about the World Health Organization from the Ebola outbreak. *Philos. Trans. R. Soc. B Biol. Sci.* 372, 20160307 (2017).
42. Scally, G., Jacobson, B. & Abbasi, K. The UK's public health response to covid-19. *BMJ* 369, m1932 (2020).
43. CDC. Scientific Brief: SARS-CoV-2 and Potential Airborne Transmission | CDC. *Centers for Disease Control and Prevention* https://www.cdc.gov/coronavirus/2019-ncov/more/scientific-brief-sars-cov-2.html (2020).
44. Universal Masking to Prevent SARS-CoV-2 Transmission-The Time Is Now - PubMed. https://pubmed.ncbi.nlm.nih.gov/32663243/.
45. Sky News. 'Be fast, have no regrets.' Dr Michael J Ryan says 'the greatest error is not to move' and 'speed trumps perfection' when it comes to dealing with an outbreak such as #coronavirus. Get the latest on COVID-19 https://t.co/HMPNwaVk37 https://t.co/wDa7XOMw8Q. @SkyNews https://twitter.com/SkyNews/status/1238504143104421888 (2020).
46. Joint External Evaluation (JEE) mission reports. *World Health Organization* http://www.who.int/ihr/procedures/mission-reports/en/.
47. Who pays for cooperation in global health? A comparative analysis of WHO, the World Bank, the Global Fund to Fight HIV/AIDS, Tuberculosis and Malaria, and Gavi, the Vaccine Alliance - PubMed. https://pubmed.ncbi.nlm.nih.gov/28139255/.
48. Gostin, L. O. COVID-19 Reveals Urgent Need to Strengthen the World Health Organization. *JAMA* 323, 2361–2362 (2020).
49. Coronavirus Government Response Tracker | Blavatnik School of Government. *University of Oxford Blavatnik School of Government* https://www.bsg.ox.ac.uk/research/research-projects/coronavirus-government-response-tracker.
50. Lessons learnt from easing COVID-19 restrictions: an analysis of countries and regions in Asia Pacific and Europe - PubMed. https://pubmed.ncbi.nlm.nih.gov/32979936/.
51. Sridhar, D. COVID-19: what health experts could and could not predict. *Nat. Med.* 26, 1812–1812 (2020).
52. Coronavirus: Action Plan. A Guide to What You Can Expect Across the UK. (2020).
53. Horton, R. Offline: COVID-19—a reckoning. *The Lancet* 395, 935 (2020).
54. Kaminska, I. Making sense of nonsensical Covid-19 strategy. *Financial Times* https://www.ft.com/content/662a0033-61eb-4b1f-b95c-855a9ef8061f (2020).
55. Hale, T. *et al.* Variation in the Response to COVID-19 across the Four Nations of the United Kingdom. (2020).
56. DW News. Jacinda Ardern: Flattening curve wasn't enough for New Zealand.
57. Greenhalgh, T., Knight, M., A'Court, C., Buxton, M. & Husain, L. Management of post-acute covid-19 in primary care. *BMJ* 370, m3026 (2020).
58. Fauci, A. S., Lane, H. C. & Redfield, R. R. Covid-19 - Navigating the Uncharted. *N. Engl. J. Med.* 382, 1268–1269 (2020).
59. Lehmann, C. Many Metrics to Measure COVID-19, Which Are Best? *WebMD* https://www.webmd.com/lung/news/20200922/many-metrics-to-measure-covid-19-which-are-best (2020).
60. Covid-19 vaccines - How fast can vaccination against covid-19 make a difference? | Science & technology | The Economist. *The Economist* https://www.economist.com/science-and-technology/2021/01/23/how-fast-can-vaccination-against-covid-19-make-a-difference (2021).
61. Vaccination Need Ratio in Israel Cities. *COVID-19 Maps* https://vaccinations.covid19maps.org/ (2021).
62. Trout, L. J. & Kleinman, A. Covid-19 Requires a Social Medicine Response. *Front. Sociol.* 5, (2020).
63. Albeck-Ripka, L. Hong Kong seeks to encourage testing with cash payments. *The New York Times* (2020).
64. Government launches new portal for care homes to arrange coronavirus testing - GOV.UK. *GOV.UK* https://www.gov.uk/government/news/government-launches-new-portal-for-care-homes-to-arrange-coronavirus-testing (2020).
65. Yamaguchi, M. Japan's state of emergency is no lockdown. What's in it? *AP NEWS* https://apnews.com/article/eb73f1170268ec2cdcf03e697365acb2 (2020).
66. Pavlik, M. A Great and Sudden Change: The Global Political Violence Landscape Before and After the COVID-19 Pandemic | ACLED. *ACLED* https://acleddata.com/2020/08/04/a-great-and-sudden-change-the-global-political-violence-landscape-before-and-after-the-covid-19-pandemic/ (2020).
67. Tworek, H., Beacock, I. & Ojo, E. Democratic Health Communications during Covid-19: A RAPID Response. *Centre for the Study of Democratic Institutions, University of British Columbia* https://democracy.arts.ubc.ca/2020/09/14/covid-19/.
68. Jefferies, S. *et al.* COVID-19 in New Zealand and the impact of the national response: a descriptive epidemiological study. *Lancet Public Health* 5, e612–e623 (2020).
69. Editors. Dying in a Leadership Vacuum. *N. Engl. J. Med.* 383, 1479–1480 (2020).
70. Paton, C. World-beating? Testing Britain's Covid response and tracing the explanation. *Health Econ. Policy Law* 1–8 doi:10.1017/S174413312000033X.
71. Priti Patel blames minority of Covid-19 rule breakers for putting 'health of nation at risk'. *MSN* https://www.msn.com/en-gb/news/world/priti-patel-blames-minority-of-covid-19-rule-breakers-for-putting-health-of-nation-at-risk/ar-BB1cHaOB.
72. Summers, L. H. Opinion | Trump is missing the big picture on the economy. *Washington Post*.
73. International Monetary Fund. *World Economic Outlook: A Long and Difficult Ascent*. (International Monetary Fund, 2020).
74. Cutler, D. M. & Summers, L. H. The COVID-19 Pandemic and the $16 Trillion Virus. *JAMA* 324, 1495–1496 (2020).
75. Correia, S., Luck, S. & Verner, E. *Pandemics Depress the Economy, Public Health Interventions Do Not: Evidence from the 1918 Flu*. https://papers.ssrn.com/abstract=3561560 (2020) doi:10.2139/ssrn.3561560.

Conflict of Interest Disclosure: No CoI.

Authors' contributions: DS and IH conceptualized the piece. DS, IH and MM drafted the first version of the manuscript. LK and GF commented on the draft and inserted edits. All authors agreed on the final version.
"The hand of the Lord was with him" Reading 1: Isaiah 49:1-6 Responsorial Psalm: Psalm 138:1-3, 13-15 Reading 2: Acts 13:22-26 Gospel reading: Luke 1:57-66, 80 Meditation: Birthdays are a special time to remember and give thanks for the blessings that have come our way. In many churches of the East and West the birth of John the Baptist is remembered on this day. his child was destined by God for an important mission. The last verses in the last book of the Old Testament, taken from the prophet Malachi, speak of the Lord's messenger, the prophet Elijah who will return to "turn the hearts of fathers to their children and the hearts of children to their fathers" (Malachi 4:6). **Birth and mission of John the Baptist:** We see the beginning of the fulfilment of this word when the Angel Gabriel announced to Zechariah the marvellous birth and mission of John the Baptist (Luke 1:17). In the birth of John and in the birth of Jesus the Messiah we see the grace of God breaking forth into a world broken by sin and without hope. John's miraculous birth shows the mercy and favour of God in preparing his people for the coming of their Lord and Saviour, Jesus Christ. John the Baptist's life was fuelled by one burning passion - to point others to Jesus Christ and to the coming of God's kingdom. Scripture tells us that John was filled with the Holy Spirit even from his mother's womb (Luke 1:15, 41) by Christ himself, whom Mary had just conceived by the Holy Spirit. When Mary visited her cousin Elizabeth, John leapt in the womb of Elizabeth as they were filled with the Holy Spirit (Luke 1:41). The fire of the Spirit dwelt in John and made him the forerunner of the coming Messiah. John was led by the Spirit into the wilderness prior to his ministry where he was tested and grew in the word of God. John's clothing was reminiscent of the prophet Elijah (see Kings 1:8). Among a people unconcerned with the things of God, it was his work to awaken their interest, unsettle them from their complacency, and arouse in them enough good will to recognize and receive Christ when he came. **God's gracious gift to us:** When God acts to save us he graciously fills us with his Holy Spirit and makes our faith come "alive" to his promises. Like John the Baptist, the Lord invites each of us to make our life a free-will offering to God. God wants to fill us with his glory all the days of our lives, from birth through death. The elderly Elizabeth gave birth to the last of the prophets, and Mary, a young girl, to the Lord of the angels. The daughter of Aaron gave birth to the voice in the desert (Isaiah 63:9), but the daughter of David to the strong God of the earth. The barren one gave birth to him who remits sins, but the Virgin gave birth to him who takes them away (John 1:29). Elizabeth gave birth to him who reconciled people through repentance, but Mary gave birth to him who purified the lands of uncleanness. The elder one lit a lamp in the house of Jacob, his father, for this lamp itself was John (John 5:35); while the younger one lit the Sun of Justice (Malachi 4:2) for all the nations. The angel announced to Zechariah, so that the slain one would proclaim the crucified one and that the hated one would proclaim the envied one. He who was to baptize with water would proclaim him who would baptize with fire and with the Holy Spirit (Matthew 3:11). The priest calling with the trumpet would proclaim concerning the one who is to come at the sound of the trumpet at the end. 
The voice would proclaim concerning the Word, and the one who saw the dove would proclaim concerning him upon whom the dove rested, like the lightning before the thunder."

Questions for Community Faith Sharing:
1. Are you grateful for the ways that God has worked in your life, even from your birth? How have you shown him that you are grateful?
2. What is the mission in your life? How much of that mission is guided by and embedded within the teachings of our Church?
3. How can I be more like John the Baptist – point others to Jesus Christ and to eternal life in our God's Kingdom?

"Lord Jesus, you bring hope and salvation to a world lost in sin, despair, and suffering. Let your grace refresh and restore your people today in the hope and joy of your great victory over sin and death."

Source: www.dailyscripture.net, author Don Schwager

The Nativity of St John the Baptist
24 JUNE 2018
A PARTICIPATIVE CHRIST-CENTERED COMMUNITY OF DISCIPLES BUILDING THE KINGDOM OF GOD

| CHURCH MAINTENANCE FUND | ST ANTHONY MEDICAL CLINIC | THE SAINT ANTHONY BREAD PROJECT |
|-------------------------|---------------------------|---------------------------------|
| There is a 2nd collection for the Church Maintenance Fund this week, Sat 23 & Sun 24 June 2018. We thank you for your generous support! | ST ANTHONY MEDICAL CLINIC will be closed permanently. The last day of operation is Mon 25 June 2018. We apologise for any inconvenience caused. | The collection of food items and the packing of hampers has been completed. Hampers will be distributed on Sun 24 June 2018 @ 1.30pm in St Basil Room. Everyone is invited to come and help! |

PIETA is a support group for bereaved parents who seek God's comfort, wisdom and hope through prayer and reflection on the WORD OF GOD. Our monthly session, held every 4th Tuesday of the month, is next on Tue 26 June 2018, 7:30pm at Agape Village, Toa Payoh Lorong 8. Contact us via email@example.com or https://facebook.com/PietaSingapore

EMMANUEL CPG: Wed 27 Jun 2018 at 8pm, St Lucy room. Celebrant: Rev Fr Andrew Wong. Join us at our Charismatic session every Wednesday at 8pm to praise God and seek the Holy Spirit's guidance. Enjoy some refreshments and fellowship after each session too. All are welcome!

MARRIAGE ENCOUNTER facilitates life-changing weekends for married couples. By transforming the way husband and wife communicate, it forges greater intimacy and a closer relationship. Slots are still open for the next ME Weekend from Fri 6 July to Sun 8 July. E-mail firstname.lastname@example.org or visit: wwmesg.org

HEART OF WORSHIP - MUSICAL EVE: Jesus Youth gladly announces 'Heart of Worship' - a unique way of experiencing God through music that brings love and joy to your heart. The musical event starts at 6.30pm and extends until 9pm on Sat 7 July at Agape Village, Toa Payoh. Register for free @ www.singapore.jesusyouth.org by Sun 1 July or call Nobin Jose 90922091

LEARNING BREAKTHRU WORKSHOP 2018: Calling parents of children with learning disorders & learning difficulties. Contact persons: Dominic 97894182, Raphael 97881879. Support, Educate, Connect, Empower. Together, We Make A Difference. 2 days: Sat 14 July (1230-1800 hr) & Sun 15 July (1130-1800 hr). $10 per pax / $15 per couple. Venue: Church Auditorium. The Family Life Team has come together to organize this training workshop for all parents with children who have learning challenges & learning disorders. The outline of the workshop is as listed: 1) Functionality of the Brain & Body Structure.
2) Neurological disorders and how they affect the child's thoughts, feelings, emotions & behaviours. 3) A practicum of simple therapies that equips parents with skills to support the treatment process and improve the condition. 4) Video training to show how simple treatments can be incorporated into our daily lifestyle. Parents who are keen to understand the underlying issues of their child's learning struggles & behaviour issues are strongly encouraged to attend.

CHRISTIAN LIFE PROGRAM (CLP) by Couples for Christ (CFC) & Singles for Christ (SFC). ALL ARE WELCOME! (Male & Female) 21 years & above; married, single, divorced & separated single parents. Every Saturday night starting from Sat 14 July, 7.00pm to 9.30pm @ Church of Saint Anthony (light dinner will be served). Contact persons: Allan 90671979 / Chiqui 91890196 / Tyrone 9757 3377 / Girlie 9758 4857. Registration: http://tinyurl.com/CLPP2018CSADP. So because you are LUKEWARM and neither hot nor cold, I will spit you out of My mouth. (Revelation 3:16)

DIVINE MERCY ~ A DAY OF PRAYER & RECOLLECTION: Sat 21 July 2018 from 9am to 5pm (registration 8:30am). Theme: "Call to be Apostles of Mercy" @ The Good Shepherd Place, 9 Lor 8 Toa Payoh, S(319253). Facilitator: Sr. Elizabeth Lim, RGS. Celebrant for Mass: Fr Cyril John Lee (SD for CSA DM). Contribution: S$10. Contact person: Joan Lee 96752276. (Seats are limited!)

SOCIAL MISSION CONFERENCE 2018: 21 July 2018 (Sat), 9:00am – 6:00pm, Catholic Junior College, 129 Whitley Road S297822. Register today at www.caritas-singapore.org/smc2018

DIVINE MERCY PRAYER GROUP: Annual trip to Kuala Lumpur and Penang (4 days 3 nights), from Fri 14 Sept to Mon 17 Sept. Cost: S$320 per pax (twin sharing). For more information contact: Alice Nonis 86919770 or Joan Lee 96752276. Please hurry! Seats are limited!

ADORACION DE JESUS MINISTRY: We are looking for musicians (keyboard, flute, violin, oboe or cello), singers and new members to join the ministry for Children's Adoration and Parish Holy Hour. For more information, please contact Amelia 93686168 or email email@example.com

AV MINISTRY: We are looking for parishioners 18 and above with a calling to serve the Audio Visual Ministry. Training will be provided. Call Joseph Lee 91873320 or Steven Paul 84794853

MARRIAGE ENCOUNTER (ME) WEEKEND: It has been said that the greatest gift you can give your children is a good marriage. Polish up this gift and put a bow on it at the Worldwide Marriage Encounter (ME) Weekend. You deserve the best and your children need to see you reaching for it. Coming weekends: 3-5 Aug, 7-9 Sep (full), 2-3 Nov and 7-9 Dec. Please approach the ME couples who will be here next weekend, before/after mass, to enrol you and to sell ME gift vouchers. You may sign up at http://wwwmesg.org

MASS & HEALING SERVICE: "My grace is sufficient for you…" (2 Cor 12:9). Saturday, 7 July 2018 at 7.00pm @ Church of St Anthony Auditorium. Led by Rev. Fr. A. Benjamin. There will be Rosary, Praise & Worship, Mass and Adoration. All are welcome! There will be fellowship and potluck after the event. Come and receive His love and mercy.

FRESHMEN ORIENTATION CAMPS: Entering university this year? It's the season for the Freshmen Orientation Camps, and the Office for Young People (OYP) invites you to join the university Catholic FOCs or to get in touch with the university communities, to kickstart your university life with a solid foundation, Christ our Lord! We pray that you may be rooted in Him and built up on Him, held firm in faith, and overflowing with thanksgiving (cf. Col 2:7).
We pray for a full and purposeful university life. Camp dates: NTU 9-12 July ~ SIM 3-5 Aug ~ NUS / Yale-NUS / SIT 17-19 Aug ~ SMU 24-26 Aug. Calling also JCU and SUTD students! To register or find out more, go to www.oyp.org.sg/UniFOC
Cortisol but not testosterone is repeatable and varies with reproductive effort in wild red deer stags

Alyson T. Pavitt\textsuperscript{a,*}, Craig A. Walling\textsuperscript{a}, Erich Möstl\textsuperscript{b}, Josephine M. Pemberton\textsuperscript{a}, Loeske E.B. Kruuk\textsuperscript{a,c}

\textsuperscript{a} Institute of Evolutionary Biology, School of Biological Sciences, University of Edinburgh, Edinburgh EH9 3FL, UK
\textsuperscript{b} Department of Biomedical Sciences, University of Veterinary Medicine, Veterinärplatz 1, A-1210 Vienna, Austria
\textsuperscript{c} Division of Evolution, Ecology & Genetics, Research School of Biology, The Australian National University, ACT 2601, Australia

Published in: General and Comparative Endocrinology, 2015. DOI: 10.1016/j.ygcen.2015.07.009

\textbf{Article history:} Received 16 November 2014; Revised 9 July 2015; Accepted 21 July 2015; Available online xxxx

\textbf{Keywords:} Androgens; Biological assay validation; Dominance; Faecal hormone metabolites; Glucocorticoids; Repeatability; Rut; Seasonal cycles

\textbf{Abstract} Although it is known that hormone concentrations vary considerably between individuals within a population, how they change across time and how they relate to an individual's reproductive effort remain poorly quantified in wild animals. Using faecal samples collected from wild red deer stags, we examined sources of variation in faecal cortisol and androgen metabolites, and the potential relationship that these might have with an index of reproductive effort. We also biologically validated an assay for measuring androgen metabolites in red deer faeces. We show that variation in hormone concentrations between samples can be accounted for by the age of the individual and the season when the sample was collected. Faecal cortisol (but not androgen) metabolites also showed significant among-individual variation across the 10-year sampling period, which accounted for 20% of the trait's phenotypic variance after correcting for the age and season effects. Finally, we show that an index of male reproductive effort (cumulative harem size) during the mating season (rut) was positively correlated with male cortisol concentrations, both among and within individuals.
We suggest that the highest-ranking males have the largest cumulative harem sizes (i.e. invest the greatest reproductive effort), and that maintaining this social dominance involves behaviours, such as an increased frequency of agonistic interactions, that are associated with correspondingly high levels of faecal cortisol metabolites (FCM).

© 2015 Published by Elsevier Inc.

\section*{1. Introduction}

Although hormone concentrations vary between individuals within a population (Williams, 2008), how this relates to individual-level variation in fitness-related behaviour remains poorly quantified in the wild. To date, work in this area has been dominated by laboratory and captive populations (e.g. Bartos et al., 2010; Ketterson and Nolan, 1992), where variation in hormone levels and/or behaviours may not be representative of that seen in wild systems (Bartos et al., 2010). In this study we focused on variation in male behaviour during the mating season (rut) in a wild population of red deer (\textit{Cervus elaphus}), and tested for associations with faecal concentrations of androgen and glucocorticoid metabolites. Red deer stags exhibit dominance hierarchies throughout the year (Bartos et al., 2010; Lincoln et al., 1972), culminating in peak male–male agonism during the rut (Lincoln et al., 1972), when dominance status determines access to harems of females (Clutton-Brock et al., 1982). In this paper, we use data from a long-term study of a wild red deer population to test for associations between cumulative harem size (an index of reproductive effort which indicates access to females during the rut) and androgen and glucocorticoid levels respectively.

Androgen concentrations do not remain consistent across individual males' lifetimes, but vary within and between years in association with behavioural changes (Book et al., 2001 and references therein; Lynch et al., 2002; Wingfield et al., 1990). Within a year, seasonal variation in testosterone concentrations often correlates with reproductive cycles and associated changes in male–male conflict (Lynch et al., 2002; Wingfield et al., 1990), peaking during the breeding season (September–November in our study population) when male aggression is at its height (e.g. Lincoln et al., 1972; Pereira et al., 2005). Where there is substantial age-related variation in reproductive effort, androgen concentrations might also be expected to vary with age (Book et al., 2001 and references therein). Red deer stags show considerable variation in reproductive effort and output across their lifetime (Nussey et al., 2009). Stags in their reproductive prime tend to engage in more aggressive encounters than younger and older individuals (Clutton-Brock et al., 1979), and therefore might also be expected to exhibit higher testosterone concentrations overall (as has been shown in other deer species: Bubenik and Schams, 1986).

Links between testosterone concentrations and male fitness-related traits are well established in several taxa (see reviews by Hau, 2007 and Wingfield et al., 2001), including red deer (Lincoln et al., 1972; Malo et al., 2009). Less is known, however, about the potential relationship between testosterone and behavioural investment in reproduction. Red deer stags exhibit dominance hierarchies throughout the year (Bartos et al., 2010; Lincoln et al., 1972), which determine their access to females during the rut (Clutton-Brock et al., 1982) and thus their chances of siring offspring conceived in that year.
Given the positive relationship between testosterone and social rank in this species (Bartos et al., 2010), testosterone levels might be expected to show a positive relationship with the size of harems or the length of time for which they are held (i.e. measures of reproductive effort), and through that, with a stag's annual reproductive success (Appleby, 1982; Gibson and Guinness, 1980).

Expectations for cortisol are somewhat more complex. Cortisol is the dominant circulating glucocorticoid in red deer (Ingram et al., 1999), and is generally (across taxa) highest when animals are exposed to unpredictable or uncontrollable stressors (Greenberg et al., 2002), although there is considerable individual variation in baseline levels. Where observed, circannual cycles in cortisol concentration are likely to reflect seasonal variation in stressors, such as challenging climatic conditions (e.g. low temperature: Huber et al., 2003a) or social instability (e.g. male conflict during the breeding season: Strier et al., 1999). Males investing greater effort in reproduction might also have higher levels of cortisol if that effort is associated with energetic or physiological costs (e.g. the Cort-Adaptation Hypothesis: Bonier et al., 2009), or if this is an adaptive response which enables them to maximise their fitness in unpredictable environments (Boonstra, 2013). Given that energetic investment in reproduction peaks during middle age in red deer stags (Nussey et al., 2009), individuals might also be expected to have higher levels of cortisol during their reproductive prime. Evidence, however, suggests that this might be confounded by the physiological effects of ageing, which can see circulating cortisol levels increase with age due to desensitisation of the cortisol feedback loop (Sapolsky, 1991; van Cauter et al., 1996).

Similarly, it is also difficult to predict the association between cortisol levels and behavioural investment in reproduction. If the maintenance of social dominance (a trait closely associated with reproductive opportunity: Clutton-Brock et al., 1982) involves greater aggression and energetic investment (Clutton-Brock et al., 1979; Lincoln et al., 1972), then males investing the most in reproduction might be expected to have higher cortisol levels as a result (e.g. Muller and Wrangham, 2004). This scenario would also predict positive associations between androgens and cortisol. The alternative hypothesis is that baseline glucocorticoid levels would be highest in animals with lower relative fitness (e.g. the Cort-Fitness Hypothesis: Bonier et al., 2009), due, for example, to poorer quality or suppressed reproductive systems (Liptrot, 1993). If high cortisol concentrations were linked to reduced quality and fitness, then individuals with high levels might also be expected to die at a younger age, leading to a population-level decline in cortisol concentrations amongst older age classes (van de Pol and Verhulst, 2006).

In this study, we quantify (a) the effects of season and age on variation in faecal androgen and cortisol metabolite concentrations; and (b) the relationships between concentrations of these hormones and cumulative harem size (an index of male reproductive effort) during the breeding season.
For this, a large dataset at the individual level is required, making the wild red deer on the Isle of Rum National Nature Reserve (NNR) in Scotland an ideal study population, as life-history, behaviour and reproduction data have been collected from individually identifiable deer since 1972 (Clutton-Brock et al., 1982).

2. Methods

2.1. Faecal sample collection

Faecal samples were collected from individually identifiable wild red deer stags in the North Block study area of the Isle of Rum NNR, Scotland (see Clutton-Brock et al., 1982 for a full description of the study population and site) between 2004 and 2013 (see Fig. S1 for the distribution of repeat sampling between individuals). Fresh faecal samples were collected both opportunistically and during targeted collection sessions, within 5 min of witnessing defecation, and only from positively identified individuals. They were stored at −20 °C in a field freezer (mean time from collection to freezing: 101 min ± 10 SE), before being packed in ice and returned to laboratory freezers, where they were kept at −20 °C until extraction. In total, faecal samples (n = 194) were collected from 73 individuals, who were either born in the study area (n = 53 males) or were visiting males born in other parts of the island (n = 20 males).

2.2. Faecal steroid extraction

Individual faecal samples were fully defrosted and homogenised to evenly distribute hormones throughout the faeces. Once homogenised, 0.5 g of wet sample was extracted with 5 ml of methanol (90%), gently shaken (overnight at 20 °C) and centrifuged (20 min at 652 g), after which 1 ml of the resulting supernatant was transferred to a clean tube and stored at −20 °C until assay.

2.3. Faecal hormone immunoassays

Concentrations of faecal androgen and cortisol metabolites (FAM and FCM respectively) were measured using group-specific enzyme immunoassays (EIAs). Both assays were carried out following the same established methods (Huber et al., 2003b; Palme and Mostl, 1994) with group-specific antibodies. Androgens are extensively metabolised before excretion, mainly in the liver. As the main testosterone metabolites are unknown in red deer, no immunoassay had previously been validated for FAM in this species. We therefore first biologically validated a suitable assay by testing the ability of three androgen assays (which measured androgen metabolites with a 17β-hydroxy group, a 17α-hydroxy group, or a 17-oxo group) to detect biologically meaningful differences (see Supplementary Information 2). A biological validation was used because, this being a wild population, invasive procedures (e.g. chemical manipulations) were not possible (an approach outlined in Palme, 2005). Of the assays tested, the 17-oxo-androgen EIA both had the greatest reactivity, showing that most of the immunoreactive FAMs were excreted in this form, and best discriminated between sexes and male reproductive status in our study population (see Supplementary Information 2 for a comparison of the assays tested). This assay has previously been used successfully to measure FAM concentrations in other mammal species, including ungulates (Ganswindt et al., 2002; Hoby et al., 2006). Faecal cortisol metabolite (FCM) levels were measured using a group-specific 11-oxoetiocholanolone EIA which has previously been validated in red deer using both ACTH (adrenocorticotropic hormone) challenge and natural disturbance tests (Huber et al., 2003b).
These immunoassays followed previously published methodology (described in Huber et al., 2003b; Palme and Mostl, 1994), but with Protein A used for the first coating of the microtiter plates instead of affinity-purified anti-rabbit IgG. Serial dilutions of 24 pooled samples showed high parallelism with the standard curve for both hormone groups \((p < 0.001)\); the assays had limits of detection (LOD) of 0.89 ng/g faeces for the FAM assay and 3.51 ng/g faeces for the FCM assay. The intra- and inter-assay coefficients of variation (CV) were 4.85% and 20.64% for the FAM assay, and 4.01% and 22.65% for the FCM assay. Several assay plates were run per day and, given that previous studies have found assay date to account for significant variation between samples (Pavitt et al., 2014), the mean within-day inter-assay CV was also calculated. This gave a mean within-day inter-assay CV of 12.46% (±1.80 SE) for FAM and 15.02% (±3.72 SE) for FCM. From the original 194 samples, 19 FAM and 16 FCM measures were removed due to low repeatability of concentrations measured between duplicates (CV > 10%). A further 34 FAM measures were removed because they fell below the LOD (removal of these 34 low FAM measures did not affect the results of the model; see Supplementary Information 3 for details).

2.4. Index of reproductive effort: cumulative harem size

Red deer are a polygynous species in which males compete for harems of females during the breeding season (Clutton-Brock et al., 1982). In this study, cumulative harem size was used as a proxy for annual male reproductive effort, measured as a stag's total number of "hind-days held". Cumulative harem size was thus defined as the sum of a male's daily harem sizes across the rutting period (15th September–15th November) of a given year, based on daily censuses taken during this period. Censuses recorded male–female associations, and used proximity and behaviour to assign females to a stag's harem. Stags holding harems outside of the North Block study area of the Isle of Rum NNR were not recorded. Because it combines a male's harem size on a particular day with the number of days on which he held a harem, cumulative harem size is a good measure of total investment in reproduction in a given year. Previous analyses have shown this measure to be closely linked to both social rank and reproductive success in males (Appleby, 1982; Pemberton et al., 1992). These analyses used records of harem holding collected between 1971 and 2013, comprising 2833 measures of cumulative harem size from 815 males (mean: 3.48 measures/stag ± 0.01 SE).

3. Statistical analysis

A multivariate ("multi-response") mixed model was fitted to the data in ASReml-R ver. 3.0.3 (package: asreml; Butler, 2009) to explore potential causes of variation in, and covariance between, FAM, FCM and cumulative harem size (CHS). All three measures were log-transformed to normalise residuals. This multivariate model therefore had three response variables, and the structure:

\[ \text{FAM}, \text{FCM}, \text{CHS} \sim \text{trait-specific fixed effects} + (\text{individual ID}) + (\text{year}) + (\text{residual}). \]

The trait-specific fixed effects are discussed in Section 3.1 below, and the random effects (in parentheses) in Section 3.2.
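To make the cumulative harem size response entering this model concrete, the sketch below aggregates daily rut census records into "hind-days held" as defined in Section 2.4. This is only a hedged reconstruction in R (the analysis language used here): the `census` data frame and its columns (`stag_id`, `year`, `date`, `harem_size`) are hypothetical names, not the study database's actual structure.

```r
# Sketch: cumulative harem size (CHS) as the sum of a stag's daily harem
# sizes across the rut (15 September-15 November). 'census' is assumed to
# hold one row per stag per census day; all column names are hypothetical.
library(dplyr)

in_rut <- function(date) {
  # TRUE for census dates falling inside the rutting window of their year
  m <- as.integer(format(date, "%m"))
  d <- as.integer(format(date, "%d"))
  (m == 9 & d >= 15) | m == 10 | (m == 11 & d <= 15)
}

chs_by_year <- census %>%
  filter(in_rut(date), harem_size > 0) %>%
  group_by(stag_id, year) %>%
  summarise(hind_days = sum(harem_size), .groups = "drop") %>%  # CHS
  mutate(log_chs = log(hind_days))  # log-transformed, as in the model
```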
Although FAM and FCM concentrations were only available for a subset (2004–2013) of the individuals for whom we had measures of cumulative harem size, all individuals with observations of cumulative harem size (1971–2013) were included in the multivariate models, with missing values for FAM and FCM where necessary. Inclusion of these individuals improves the accuracy of estimation of the variance components associated with cumulative harem size. Further, improved information on the distribution of cumulative harem size will both improve the accuracy and reduce the uncertainty (SE) of the estimates of any covariance between cumulative harem size and FAM or FCM. As outlined below, we estimated these covariances at both the among-individual and within-individual (i.e. residual) levels.

Our analyses therefore used a total of 141 FAM measures (from 66 stags), 178 FCM measures (from 67 stags) and 2833 measures of cumulative harem size (from 815 stags). Of these, 105 measures of cumulative harem size had corresponding FAM concentrations for a given stag in a given year, and 138 had corresponding FCM concentrations. A further 33 FAM and 43 FCM concentrations were also included for males which were either below rutting age (<4 years old; FAM: \(n = 23\) from 13 deer; FCM: \(n = 32\) from 12 deer) or did not hold a harem within the study area in the year of sampling (FAM: \(n = 10\) from 5 deer; FCM: \(n = 11\) from 5 deer). Where males had repeat measures of hormone concentration in a given year, the sample collected closest to the start of their harem-holding period was associated with their cumulative harem size for that year. This allowed estimation of the residual covariance between cumulative harem size and hormone concentration. These models therefore included 71 FAM concentrations and 82 FCM concentrations with a corresponding cumulative harem size, although all measures of hormone concentration were included in the analyses (FAM: \(n = 141\), FCM: \(n = 178\)).

3.1. Fixed effects

Age at the time of sampling (in years) was fitted as a fixed effect for all three response variables. A quadratic term for age was also tested, because a number of male reproductive traits are known to have a quadratic relationship with age in this population (Nussey et al., 2009). This term was retained in the model for FAM and cumulative harem size, but not for FCM, for which it was not significant \((p = 0.541)\). Sample month (11-level factor for January–November) and age at final sampling were also included for both hormone concentrations. Age at final sampling was fitted to test for the 'selective disappearance' of particular hormone phenotypes with age, allowing us to distinguish between within-individual and population-level changes (van de Pol and Verhulst, 2006). The date of assay (7-level factor) was also included for both hormone concentrations, as previous studies have found assay date to account for significant variation amongst samples, possibly due to fluctuations in laboratory temperature (Pavitt et al., 2014). Time of sample collection (all samples were collected between 09:15 and 21:10) and time (in minutes) from sample collection to freezing (mean time: 96 min ± 9 SE, range: 2–391 min) were also tested for effects on FAM and FCM concentrations, as both have been shown to affect hormone concentrations (Ingram et al., 1999; Suttle et al., 1992).
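For readers without access to ASReml, the fixed-effect structure just described can be approximated one trait at a time with open-source tools. The sketch below fits a univariate analogue of the FCM sub-model (the quadratic age term is omitted, as it was dropped for FCM) and extracts the quantities discussed in Section 3.2: the among-individual variance, its repeatability, and a likelihood ratio test. The `deer` data frame and its column names are assumptions for illustration, and lmerTest's F tests merely stand in for ASReml's incremental Wald tests.

```r
# Univariate lme4 sketch of the FCM sub-model; 'deer' and its columns
# (log_fcm, age, month, age_final, assay_date, id) are hypothetical.
# 'month' and 'assay_date' are assumed to be factors.
library(lme4)
library(lmerTest)  # Satterthwaite F tests for the fixed effects

m <- lmer(log_fcm ~ age + month + age_final + assay_date + (1 | id),
          data = deer, REML = TRUE)
anova(m)  # per-term F tests (a stand-in for incremental Wald tests)

# Repeatability: the share of phenotypic variance (after fixed effects)
# attributable to among-individual differences, V_id / (V_id + V_residual).
vc <- as.data.frame(VarCorr(m))
repeatability <- vc$vcov[vc$grp == "id"] / sum(vc$vcov)

# Likelihood ratio test of the individual-identity variance: refit with
# maximum likelihood, drop the random effect, and compare the fits
# (chi-squared with 1 df, mirroring the LRTs used in the paper).
m0  <- lm(log_fcm ~ age + month + age_final + assay_date, data = deer)
lrt <- 2 * (as.numeric(logLik(refitML(m))) - as.numeric(logLik(m0)))
p_value <- pchisq(lrt, df = 1, lower.tail = FALSE)
```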
There was not, however, a significant effect of either collection time (FAM: Est. \(= 0.002 ± 0.022\) SE, \(p = 0.836\); FCM: Est. \(= 0.008 ± 0.010\) SE, \(p = 0.801\)) or time to freezing (FAM: Est. \(< 0.001 ± 0.002\) SE, \(p = 0.780\); FCM: Est. \(< 0.001 ± 0.001\) SE, \(p = 0.891\)), and so both were excluded from the final model. Fixed effects were tested for significance using incremental Wald tests, and the optimal model was accepted when all remaining fixed effects were significant at \(p < 0.05\).

3.2. Components of variance and covariance between hormone production and cumulative harem size

Individual identity \((n = 815)\), year of sampling \((n = 42)\) and unexplained residual effects were fitted as random effects for all three traits in the model. After comparing nested models fitted with and without year of sampling, this random effect was excluded from the final model because it did not significantly affect any of the three traits (FAM: \(p = 0.945\), FCM: \(p = 0.720\), cumulative harem size: \(p = 0.492\); Table S5(b)). The repeatability of each trait was estimated as the proportion of that trait's overall phenotypic variance that was accounted for by individual identity (i.e. among-individual differences). After testing the variances associated with individual identity and residual effects, covariances between the respective random effects were also fitted to explore relationships between the three traits at both the individual and residual levels. To test the significance of the covariances, we used likelihood ratio tests (LRTs) comparing the full model with models in which each particular covariance was constrained to 0 in turn. The LRT statistic, twice the difference in log-likelihood between the two models, was assumed to follow a chi-squared distribution with 1 degree of freedom.

Because no individual-level variation was found in FAM concentrations when fitting a multivariate model with just variance components \((p = 0.664\), Table S5(a)), we did not attempt to estimate individual-level covariances between FAM and either FCM or cumulative harem size; these parameters were therefore fixed at 0 in the final model. This model was not a significantly worse fit to the data than a model in which these covariances were estimated (LRT: \(X^2_{(2)} = 0.451; p = 0.637\)), but it is more statistically justified than estimating a covariance involving a trait for which there is no robust statistical evidence of any significant variance. In the final model, therefore, the only testable (i.e. non-zero) among-individual covariance was that between FCM and cumulative harem size.

4. Results

Both faecal androgen and cortisol metabolite concentrations varied substantially between samples. Concentrations of FAM ranged from 2.7 to 17,216.3 ng/g faeces (mean: 447.0 ng/g faeces ± 154.9 SE), and FCM from 5.3 to 680.9 ng/g faeces (mean: 61.5 ng/g faeces ± 6.3 SE). Measures of cumulative harem size also varied considerably, ranging from 1 to 646 hind-days held (mean: 56.2 hind-days held ± 1.5 SE).

4.1. Seasonal and age effects

Concentrations of both hormones showed significant variation with month \((p < 0.001\); Table 1; Fig. 1). FAM levels peaked in September, decreased through October and remained low for the rest of the year (Fig. 1). FCM concentrations also increased during the autumn period (with peak concentrations in September–October), but showed an additional peak in February–March (Fig. 1). FAM, FCM and cumulative harem size also varied significantly with a stag's age (FAM: \(p < 0.001\); FCM: \(p = 0.012\); cumulative harem size: \(p < 0.001\); Table 1; Fig. 2 and Fig. S5).
However, age at final sampling did not significantly improve the model when considered for either hormone (FAM: \(p = 0.777\); FCM: \(p = 0.864\); Table 1). FAM concentrations increased with age until around 8–9 years old, after which they began to decline (Fig. 2(a)). In accordance with previous studies of this population (Nussey et al., 2009), cumulative harem size also peaked at around 8–11 years old (Fig. S5). By contrast, the relationship between FCM and age was linear, with older individuals having higher concentrations (Fig. 2(b)). In agreement with previous studies (Pavitt et al., 2014), both FAM and FCM varied with assay date.

4.2. Variance components

FAM levels were not repeatable among individuals \((p = 0.719\); Table 2(a)), with differences between individuals accounting for only around 3% (0.03 ± 0.08 SE) of the variance observed in this trait. In contrast, both FCM and cumulative harem size varied significantly at both the among- and within-individual levels \((p < 0.005\), Table 2). FCM had a repeatability of 0.20 ± 0.06 SE (i.e. individual identity accounted for 20% of the variance seen in this trait after correcting for the fixed effects), and cumulative harem size had a repeatability estimate of 0.26 ± 0.03 SE.

4.3. Covariance between hormone levels and cumulative harem size

Stags with greater cumulative harem sizes were also likely to have higher FCM concentrations (see Fig. 3 for the overall phenotypic relationship between these two variables). This positive covariance between cumulative harem size and FCM was found both at the among-individual level (LRT: \(X^2_{(1)} = 3.067, p = 0.013\); Table 2(a)) and at the within-individual, or residual, level (LRT: \(X^2_{(1)} = 1.876, p = 0.049\); Table 2(b)). Given that FAM concentrations were not repeatable amongst individuals (Table 2(a); Table S5(a)), we did not attempt to estimate any among-individual covariance between FAM and either FCM or cumulative harem size (Table 2). There were non-significant negative covariances within individuals (i.e. residual covariances) between FAM and both FCM (LRT: \(X^2_{(1)} = 0.006, p = 0.910\); Table 2(b)) and cumulative harem size (LRT: \(X^2_{(1)} = 1.139, p = 0.131\); Table 2(b)).

5. Discussion

This study utilised non-invasive sampling techniques to explore the factors associated with among- and within-individual variation in faecal concentrations of both androgen and cortisol metabolites in a wild population of male red deer. We found clear seasonal and age-related variation in faecal concentrations of both hormones, as well as a significant positive relationship between a stag's FCM levels and his cumulative harem size at both the among- and within-individual levels. The analysis is amongst the first to test assumptions about the relationships between FAM and FCM concentrations and an index of reproductive effort in the wild.

In accordance with expectations, FAM levels were highest in the build-up to and during the reproductive season (August–October), and in prime-aged stags (8–9 years old). Testosterone is known to regulate the expression of both reproductive and aggressive behaviours in red deer (Fletcher, 1978; Lincoln et al., 1972). Rutting behaviour, for example, can be eliminated by castrating a red deer stag, and restored through testosterone implants (Lincoln et al., 1972).
It was therefore not surprising to observe maximum FAM levels during the rut, when inter-male aggression is greatest, and at the age when male annual reproductive performance, and thus presumably the frequency of agonistic interactions between competing males, peaks (Nussey et al., 2009).

We found no evidence of among-individual variance in FAM concentrations in this study. This contrasts with the limited results published for other taxonomic groups, which show significant repeatability of both plasma testosterone (lizards: While et al., 2010) and faecal androgen metabolites (Kralj-Fisher et al., 2007; Pelletier et al., 2003) in wild systems. It is worth noting, however, that these studies either considered repeatability within the shorter time periods of days (Pelletier et al., 2003) or months (Kralj-Fisher et al., 2007; While et al., 2010), or were based on much smaller sample sizes (Kralj-Fisher et al., 2007; While et al., 2010), than our study, which collected samples over several years. The lack of any among-individual variance in FAM concentrations meant we did not examine covariances with cumulative harem size at the level of the individual. Given that sample year also explained no variance (see Methods), this lack of among-individual FAM variance could not be attributed to annual...

Table 1. Correlates of FAM, FCM and cumulative harem size. Multivariate mixed effects model estimating the main effects of extrinsic factors on individual-level variation in (a) faecal androgen metabolite (FAM) concentrations, (b) faecal cortisol metabolite (FCM) concentrations, and (c) cumulative harem size. See Table S6 for the breakdown of assay date estimates.

| Fixed effects | FAM (n = 141) Est. (SE) | p | FCM (n = 178) Est. (SE) | p |
|---------------|------------------------|---|------------------------|---|
| (Intercept) | 4.888 (0.923) | <0.001*** | 3.623 (0.522) | <0.001*** |
| Age | 0.117 (0.061) | 0.032* | 0.058 (0.031) | 0.012* |
| Age\textsuperscript{2} | −0.047 (0.010) | <0.001*** | – | – |
| February\textsuperscript{a} | −0.251 (0.656) | – | 0.371 (0.311) | – |
| March | −0.925 (0.715) | – | 0.393 (0.320) | – |
| April | −0.386 (0.653) | – | −0.051 (0.310) | – |
| May | −0.191 (1.045) | – | 0.224 (0.410) | – |
| June | −1.121 (1.366) | – | −0.179 (0.664) | – |
| July | −0.342 (0.735) | – | −0.088 (0.339) | – |
| August | 0.312 (0.532) | – | 0.787 (0.250) | – |
| September | 1.745 (0.504) | – | 0.803 (0.245) | – |
| October | 1.054 (0.338) | – | 0.801 (0.252) | – |
| November | −0.157 (0.648) | – | 0.458 (0.308) | – |
| Month (overall effect) | – | <0.001*** | – | <0.001*** |
| Age at final sample | −0.019 (0.060) | 0.777 | −0.005 (0.032) | 0.864 |
| Assay date\textsuperscript{b} | 7 estimates | <0.001*** | 7 estimates | <0.001*** |

*p < 0.05; **p < 0.01; ***p < 0.001. \textsuperscript{a} Estimates for month are relative to January. \textsuperscript{b} See Table S6 for the complete breakdown of assay date estimates.

Fig. 1. Seasonal cycles in FAM & FCM. Variation in log-transformed faecal androgen metabolite (FAM; black) and faecal cortisol metabolite (FCM; grey) concentrations with month. Points represent monthly means ± standard errors (see Fig. S6 for seasonal variation in the fitted values after correcting for age, assay date and individual identity in univariate hormone models). Numbers represent monthly sample sizes for FAM and FCM respectively. Only one sample was collected in June, so no estimate of error is possible.

Fig. 2. Age-related variation in FAM & FCM.
Variation in log-transformed (a) faecal androgen metabolite (FAM) and (b) faecal cortisol metabolite (FCM) concentrations with age. The figures show the raw data; the smooth lines were fitted from regressions of log-transformed hormone concentrations against age.

This study identified two peaks in FCM concentration across the year, coinciding with periods of high environmental or physiological stress: one during the late winter (peaking in March), and a second during the early autumn. The first peak is similar to previous findings in captive red deer (Huber et al., 2003a): winter is known to be energetically challenging for the deer on Rum, with limited food availability and high mortality rates (Clutton-Brock et al., 1982). The second peak in FCM coincides with the rutting season, and could be the result of increased agonistic interactions between stags competing for females (see Romero and Butler, 2007 for discussion of the stress response). Elevated cortisol levels during the reproductive season have been reported in males of other polygynous species (Lynch et al., 2002; Strier et al., 1999), although this has not previously been found when analysing seasonal variation in red deer (Huber et al., 2003a; Ingram et al., 1999). We have no explanation for this lack of consensus with other deer studies, except possibly that the previous work focussed on captive deer, which may not have been exposed to the same conditions, behaviours or social interactions as those in the wild.

Table 2. Relationships between FAM, FCM and cumulative harem size. Multivariate mixed effects model estimating variances (diagonal), correlations (above diagonal) and covariances (below diagonal) for faecal androgen metabolites (FAM), faecal cortisol metabolites (FCM) and cumulative harem size (CHS) at (a) among-individual and (b) residual within-individual levels (SE in brackets; likelihood ratio test statistics and p-values are given beneath the estimates they test). Cells fixed at 0 were not allowed to vary. Statistically significant variances and covariances are in bold.

(a) Among-individual

| | FAM | FCM | CHS |
|-----|-----|-----|-----|
| FAM | 0.049 (0.154); \(X^2 = 0.065\), \(p = 0.719\) | 0 | 0 |
| FCM | 0 | **0.184 (0.068)**; \(X^2 = 4.245\), \(p = 0.004\) | 0.577 (0.182) |
| CHS | 0 | **0.208 (0.080)**; \(X^2 = 3.067\), \(p = 0.013\) | **0.711 (0.062)** |

(b) Within-individual

| | FAM | FCM | CHS |
|-----|-----|-----|-----|
| FAM | **1.481 (0.230)**; \(X^2 = 256.044\), \(p < 0.001\) | −0.011 (0.108) | −0.215 (0.135) |
| FCM | −0.008 (0.078); \(X^2 = 0.006\), \(p = 0.910\) | **0.347 (0.048)** | 0.268 (0.125) |
| CHS | −0.300 (0.191); \(X^2 = 1.139\), \(p = 0.131\) | **0.192 (0.091)** | **1.317 (0.041)** |

Fig. 3. Relationship between FCM and cumulative harem size. The relationship between log-transformed faecal cortisol metabolite (FCM) concentrations and log-transformed cumulative harem size (\(n = 135\) observations of 50 stags). The figure shows the raw data, with a fitted line from the regression of log-transformed FCM against log-transformed cumulative harem size.

In concurrence with previous rat (see Sapolsky, 1991 for a review) and human (van Cauter et al., 1996) studies, cortisol concentrations increased linearly with age in this population.
Laboratory experiments in rats have shown that older individuals take longer to return to baseline levels after a stressor, leading to prolonged periods of cortisol hyper-secretion (Sapolsky et al., 1984, 1986). The age-related increase in FCM levels observed in our study appears to be a consequence of within-individual change (rather than change at the population level), as age at final sampling had no effect on FCM levels. This suggests that the observed age-related variation does not reflect the selective disappearance of particular hormone phenotypes with age (van de Pol and Verhulst, 2006). After accounting for age and sample month, FCM levels were also found to be repeatable, with among-individual differences accounting for around 20% of the total phenotypic variance of this trait after correcting for the fixed effects. Thus those with relatively high FCM concentrations at one sampling point also had relatively high FCM concentrations at other sampling points (and vice versa for low FCM males). This concurs with previous findings of among-individual variance in glucocorticoid metabolite concentrations across a period of several months in wild greylag geese (*Anser anser*) (Kralj-Fisher et al., 2007), although we show this variance to remain across several years. In this study, males investing the greatest effort in reproduction (in terms of greatest cumulative harem size) were also likely to be those with the highest baseline FCM concentrations at both the among- and within-individual levels. Stags with relatively high average FCM, therefore, also had relatively larger cumulative harem sizes, and, within individuals’ lifetimes, years with relatively high FCM were associated with relatively high cumulative harem size. In red deer, resources such as reproduction (Clutton-Brock et al., 1982) and high-quality food (Appleby, 1980; Lincoln et al., 1972) are monopolised by socially dominant stags. Stags compete throughout the year for access to these resources, with high-ranking individuals involved in more agonistic and aggressive interactions as a result (Clutton-Brock et al., 1982; Lincoln et al., 1972). Indeed, experimental studies show that reducing aggression through castration causes males to drop in social rank (Lincoln et al., 1972). Research also suggests that high dominance is conserved across the year, with males who dominate in bachelor herds (i.e. male groups outside of the rut) maintaining their high rank in the subsequent rutting season (Clutton-Brock et al., 1982). Given the positive relationship between aggression and glucocorticoid levels observed in other systems (e.g. Muller and Wrangham, 2004), our results support the hypothesis that whilst social dominance enables a high investment in reproduction, it also has associated behaviours (such as agonistic interactions) which lead to correspondingly high levels of FCM. This relationship can also be seen within individuals (Table 2(b)): stags had higher FCM levels in years when they invested more reproductive effort (i.e. had larger cumulative harem sizes) than in years when they invested less. Whilst we are unable to comment on the longer-term associations between cortisol and fitness beyond that of a single year, these results do not support the hypothesis that cortisol will negatively influence a stag’s reproductive effort within the year of sampling.
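For context, the repeatability quoted above is the intraclass correlation computed from the mixed-model variance components. As a sketch in standard notation (our labelling, not the authors'), writing $\sigma^2_{\mathrm{ID}}$ for the among-individual variance and $\sigma^2_{\mathrm{res}}$ for the residual within-individual variance reported in Table 2,

$$R = \frac{\sigma^2_{\mathrm{ID}}}{\sigma^2_{\mathrm{ID}} + \sigma^2_{\mathrm{res}}}.$$

The exact denominator used (for example, whether variance explained by the fixed effects is also included) is specified in the Methods.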
6. Conclusion

In summary, both faecal androgen and cortisol metabolite (FAM and FCM) concentrations varied with age, and showed pronounced seasonal cycles, with both hormones peaking during the rutting season. Only FCM concentrations were repeatable among individuals; after correcting for age- and season-related variation, FAM concentrations showed no among-individual variance. Males investing more effort during the rut (i.e. greater cumulative harem size) had higher cortisol concentrations than those investing less effort. Given that stags with large cumulative harem sizes tend to be more dominant, this relationship with FCM may be the consequence of more aggressive encounters and effort invested in maintaining their dominance status. Importantly, these results also show that high baseline cortisol levels do not negatively affect a stag’s reproductive effort, and thus opportunity, within the year of sampling.

Acknowledgments

We thank Tim Clutton-Brock for his long-term contributions to the Rum red deer study and for useful comments and discussion. We also thank Scottish Natural Heritage (SNH) for permission to work on the Isle of Rum NNR, as well as the SNH staff on Rum and the Rum community for their support and assistance. We are indebted to the field assistants, volunteers and colleagues who have helped with data collection, particularly Kathi Foerster, Alison Morris, Sean Morris, Martyn Baker, Bruce Boatman and Fiona Guinness. We are also grateful to the Rum red deer group for useful discussion. This research was supported by a Natural Environmental Research Council (NERC) research grant to LK, JP and T. Clutton-Brock, a NERC PhD studentship to AP, a NERC post-doctoral research fellowship to CAW, and an Australian Research Council Future Fellowship to LK.

Appendix A. Supplementary data

Supplementary data associated with this article can be found, in the online version, at http://dx.doi.org/10.1016/j.ygcen.2015.07.009.

References

Appleby, M., 1980. Social dominance and food access in red deer stags. Behaviour 74, 284–309.
Appleby, M., 1982. The consequences and causes of high social rank in red deer stags. Behaviour 80, 259–273.
Bartos, L., Schams, D., Bubenik, G., Kotrba, R., Tomanek, M., 2010. Relationship between rank and plasma testosterone and cortisol in red deer males (Cervus elaphus). Physiol. Behav. 101, 628–634.
Bonier, F., Martin, P., Moore, I., Wingfield, J., 2009. Do baseline glucocorticoids predict fitness? Trends Ecol. Evol. 24, 634–642.
Book, A., Starzyk, K., Quinsey, V., 2001. The relationship between testosterone and aggression: a meta-analysis. Aggression Violent Behav. 6, 579–599.
Boonstra, R., 2013. Reality as the leading cause of stress: rethinking the impact of chronic stress in nature. Funct. Ecol. 27, 11–23.
Bubenik, G., Schams, D., 1986. Relationship of age to seasonal levels of LH, FSH, prolactin and testosterone in male, white-tailed deer. Comp. Biochem. Physiol. Part A: Physiol. 83, 179–183.
Butler, D., 2009. asreml: asreml() fits the linear mixed model. R package version 3.0.
Clutton-Brock, T., Albon, S., Gibson, R., 1979. The logical stag: adaptive aspects of fighting in red deer (Cervus elaphus L.). Anim. Behav. 27, 211–225.
Clutton-Brock, T., Guinness, F., Albon, S., 1982. Red Deer: Behavior and Ecology of Two Sexes. The University of Chicago Press, Chicago.
Fletcher, T., 1978. The induction of male sexual behavior in Red Deer (Cervus elaphus) by the administration of testosterone to hinds and estradiol-17β to stags. Horm. Behav. 11, 74–88.
Ganswindt, A., Heistermann, M., Borragan, S., Hodges, J., 2002. Assessment of testicular endocrine function in captive African elephants by measurement of urinary and fecal androgens. Zoo Biol. 21, 27–36.
Gibson, R., Guinness, F., 1980. Differential reproductive success in red deer stags. J. Anim. Ecol. 49, 199–208.
Greenberg, N., Carr, J., Summers, C., 2002. Causes and consequences of stress. Integr. Comp. Biol. 42, 508–516.
Hau, M., 2007. Evolution of male traits by testosterone: implications for the evolution of vertebrate life histories. BioEssays 29, 133–144.
Hoby, S., Schwarzenberger, F., Doherr, M., Robert, N., Walzer, C., 2006. Steroid hormone related male biased parasitism in chamois, Rupicapra rupicapra rupicapra. Vet. Parasitol. 138, 337–348.
Huber, S., Palme, R., Arnold, W., 2003a. Effects of season, sex, and sample collection on concentrations of fecal cortisol metabolites in red deer (Cervus elaphus). Gen. Comp. Endocrinol. 130, 48–54.
Huber, S., Palme, R., Zenker, W., Mostl, E., 2003b. Non-invasive monitoring of the adrenocortical response in red deer. J. Wildlife Manage. 67, 258–266.
Ingram, J., Crockford, J., Matthews, L., 1999. Ultradian, circadian and seasonal rhythms in cortisol secretion and adrenal responsiveness to ACTH and yarding in unrestrained red deer (Cervus elaphus) stags. J. Endocrinol. 162, 289–300.
Ketterson, E., Nolan Jr, V., 1992. Hormones and life histories: an integrative approach. Am. Nat. 140, s33–s62.
Kralj-Fisher, S., Scheiber, I., Blejec, A., Mostl, E., Kotrschal, K., 2007. Individualities in a flock of free roaming greylag geese: behavioral and physiological consistency over time and across situations. Horm. Behav. 51, 239–248.
Lincoln, G., Guinness, F., Short, R., 1972. Way in which testosterone controls social and sexual behavior of red deer stag (Cervus elaphus). Horm. Behav. 3, 375–396.
Liptrap, R., 1993. Stress and reproduction in domestic animals. Ann. N. Y. Acad. Sci. 697, 275–284.
Lynch, J., Ziegler, T., Strier, K., 2002. Individual and seasonal variation in fecal testosterone and cortisol levels of wild male tufted capuchin monkeys, Cebus apella nigritus. Horm. Behav. 41, 275–287.
Malo, A., Roldan, E., Garde, J., Soler, A., Vicente, J., Gortazar, C., et al., 2009. What does testosterone do for red deer males? Proc. Roy. Soc. B: Biol. Sci. 276, 971–980.
Muller, M., Wrangham, R., 2004. Dominance, cortisol and stress in wild chimpanzees (Pan troglodytes schweinfurthii). Behav. Ecol. Sociobiol. 55, 332–340.
Nussey, D., Kruuk, L., Morris, A., Clements, M., Pemberton, J., Clutton-Brock, T., 2009. Inter- and intrasexual variation in aging patterns across reproductive traits in a wild red deer population. Am. Nat. 174, 342–357.
Palme, R., 2005. Measuring fecal steroids: guidelines for practical application. Ann. N. Y. Acad. Sci. 1046, 75–80.
Palme, R., Mostl, E., 1994. Biotin-streptavidin enzyme immunoassay for the determination of oestrogens and androgens in boar faeces. In: Gorog, S. (Ed.), Advances in Steroid Analysis ’93. Akademiai Kiado, Budapest, pp. 111–117.
Pavitt, A., Walling, C., Pemberton, J., Kruuk, L., 2014. Causes and consequences of variation in early life testosterone in a wild population of red deer. Funct. Ecol. 28, 1224–1234.
Pelletier, F., Bauman, J., Festa-Bianchet, M., 2003. Fecal testosterone in bighorn sheep (Ovis canadensis): behavioural and endocrine correlates. Can. J. Zool. 81, 1678–1684.
Pemberton, J., Albon, S., Guinness, F., Clutton-Brock, T., Dover, G., 1992. Behavioral estimates of male mating success tested by DNA fingerprinting in a polygynous mammal. Behav. Ecol. 3, 69–75.
Pereira, R., Duarte, J., Negro, J., 2005. Seasonal changes in fecal testosterone concentrations and their relationship to the reproductive behavior, antler cycle and grouping patterns in free-ranging male Pampas deer (Ozotoceros bezoarticus bezoarticus). Theriogenology 63, 2113–2125.
Romero, L., Butler, L., 2007. Endocrinology of stress. Int. J. Comp. Psychol. 20, 89–95.
Sapolsky, R., 1991. Do glucocorticoid concentrations rise with age in the rat? Neurobiol. Aging 13, 171–174.
Sapolsky, R., Krey, L., McEwen, B., 1984. Glucocorticoid-sensitive hippocampal neurons are involved in terminating the adrenocortical stress response. Proc. Nat. Acad. Sci. USA 81, 6174–6177.
Sapolsky, R., Krey, L., McEwen, B., 1986. The neuroendocrinology of stress and aging: the corticosteroid cascade hypothesis. Endocr. Rev. 7, 284–301.
Strier, K., Ziegler, T., Wittwer, D., 1999. Seasonal and social correlates of fecal testosterone and cortisol levels in wild male muriquis (Brachyteles arachnoides). Horm. Behav. 35, 125–134.
Suttie, J., Fennessy, P., Corson, I., Veenvliet, B., Littlejohn, R., Lapwood, K., 1992. Seasonal pattern of luteinizing hormone and testosterone pulsatile secretion in young adult red deer stags (Cervus elaphus) and its association with the antler cycle. J. Reprod. Fertil. 95, 925–933.
van Cauter, E., Leproult, R., Kupfer, D., 1996. Effects of gender and age on the levels and circadian rhythmicity of plasma cortisol. J. Clin. Endocrinol. Metab. 81, 2436–2473.
van de Pol, M., Verhulst, S., 2006. Age-dependent traits: a new statistical model to separate within and between-individual effects. Am. Nat. 167, 766–773.
While, G., Isaksson, C., McEvoy, J., Sinn, D., Komdeur, J., Wapstra, E., et al., 2010. Repeatable intra-individual variation in plasma testosterone concentration and its sex-specific link to aggression in a social lizard. Horm. Behav. 58, 208–213.
Williams, T., 2008. Individual variation in endocrine systems: moving beyond the ‘tyranny of the golden mean’. Philos. Trans. Roy. Soc. B: Biol. Sci. 363, 1687–1698.
Wingfield, J., Hegner, R., Dufty, A., Ball, G., 1990. The “challenge hypothesis”: theoretical implications for patterns of testosterone secretion, mating systems and breeding strategies. Am. Nat. 136, 829–840.
Wingfield, J., Lynn, S., Soma, K., 2001. Avoiding the ‘costs’ of testosterone: ecological bases of hormone–behavior interactions. Brain Behav. Evol. 57, 239–251.
Chairperson Mitchell called the meeting to order at 7:00 P.M.

**Daniel O’Connor, 17 Courtenay Circle ~ Addition**

**Present:** Bruce Steele, Contractor

**SEQR:** Chairperson Mitchell stated that this is a Type II SEQR Action under SEQR § 617.5(c) #12 & 13. No further review required.

**The Secretary read the legal notice that was published in the July 4, 2013 edition of the Brighton Pittsford Post:** “Please take notice that a public hearing will be held before the Village of Pittsford Zoning Board of Appeals at the Village Hall, 21 North Main Street, Pittsford, New York, on Monday, July 15, 2013 at 6:00 pm, to consider an application made by Daniel O’Connor, owner of property located at 17 Courtenay Circle, for an area variance to expand a nonconforming structure on a nonconforming lot in the R-1 Zone; said structure having a side setback of 14 feet where a side setback of 15 feet is required, pursuant to Village Code § 210-9C.”

**Discussion:** The applicant is proposing construction of an attached two-car garage and conversion of the existing two-car garage into an entryway/hall and two rooms. The applicant stated that the two-car, side-loading garage will be converted to a two-car, front-loading garage. The proposal requires a variance from the Zoning Board for the southeast corner of the garage construction that will extend approximately one square foot beyond the 15-foot side setback requirement. Chairperson Mitchell stated that the proposal is for expansion of a nonconforming structure on a nonconforming lot. Board members noted that there are many houses in the surrounding area with smaller setbacks than the one being requested by the applicant.

**Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to open the public hearing. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

**Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to close the public hearing, as there was no one wishing to speak for or against this application. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

**Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to approve the application for an area variance for 17 Courtenay Circle, as submitted. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

The decision was filed in the Office of the Village Clerk on July 22, 2013.

**Findings of Fact:** 1. This is a pre-existing, non-conforming structure and lot. 2. The lot is wedge-shaped and the structure has a deep setback from the street due to the curve of the property at the road side. 3. The proposed 2-car garage is consistent with other residences in this neighborhood. 4. The requested setback variance is minimal. Some lots in this neighborhood are in the R-2 district which has a 10’ side setback. This lot is in the R-3 district which has a 15’ setback. 5. There are no undesirable changes that will be produced in the character of the neighborhood by approving this area variance. 6. The proposed structure will be compatible with other residences in the neighborhood. 7. The area variance will not have an adverse effect or impact on the physical or environmental conditions of the neighborhood or district. 8. The benefit sought cannot be achieved by some other feasible method.

******

**Kevin Morgan, 7 Austin Park ~ Addition**

**Present:** Kevin Morgan, Homeowner

**SEQR:** Chairperson Mitchell stated that this is a Type II SEQR Action under SEQR § 617.5(c) #12 & 13. No further review required.
**The Secretary read the legal notice that was published in the July 11, 2013 edition of the Brighton Pittsford Post:** “Please take notice that a public hearing will be held before the Village of Pittsford Zoning Board of Appeals at the Village Hall, 21 North Main Street, Pittsford, New York, on Monday, July 22, 2013 at 7:00 pm, to consider an application made by Kevin Morgan, owner of property located at 7 Austin Park, for an area variance to expand a nonconforming structure on a nonconforming lot.”

**Discussion:** The applicant is proposing construction of a 6’ x 13’ two-story addition to the east side of the rear of the house located at 7 Austin Park. Chairperson Mitchell stated that the proposal is for expansion of a nonconforming structure on a nonconforming lot. Board members noted that the proposed addition does not adversely impact the lot coverage.

**Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to open the public hearing. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

Molly Fien, 9 Austin Park, stated her concern with the height of the proposed addition. Mr. Turner explained that the Planning Board will not consider the issue of height unless the proposed addition is over 400 square feet, which this is not.

**Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to close the public hearing, as there was no one else wishing to speak for or against this application. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

**Motion:** Chairperson Mitchell made a motion, seconded by Member Maxey, to approve the application for an area variance for 7 Austin Park, as submitted. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

The decision was filed in the Office of the Village Clerk on July 22, 2013.

**Findings of Fact:** 1. This is a pre-existing, non-conforming structure and lot. 2. The proposed addition does not encroach any further into the side setback than the existing structure. 3. There are no undesirable changes that will be produced in the character of the neighborhood by approving this area variance. 4. The proposed structure will be compatible with other residences in the neighborhood. 5. The area variance will not have an adverse effect or impact on the physical or environmental conditions of the neighborhood or district. 6. The benefit sought cannot be achieved by another feasible method.

******

**Buffalo Bills, Inc., Sutherland High School, Temporary Zoning Permit**

**The Secretary read the legal notice that was published in the July 16, 2013 edition of the Brighton Pittsford Post:** “Please take notice that a Public Hearing will be held before the Village of Pittsford Zoning Board of Appeals, on Monday July 22, 2013 at 7:00 pm at the Village Hall, 21 North Main Street, Pittsford, NY, to consider an application made by Buffalo Bills, Inc., for a temporary zoning permit to use the Sutherland High School parking lot for vehicle parking for attendees of the Buffalo Bills training camp during the 2013 season, which will be July 28 through August 21, 2013.”

**SEQR:** Chairperson Mitchell stated that this is a Type II SEQR Action under SEQR § 617.5(c).

**Discussion:** The documentation submitted by the applicant indicates that the Buffalo Bills are proposing to utilize the Sutherland High School parking lot in the same manner that was approved by the Zoning Board in 2011.
The Buffalo Bills will provide shuttle buses to transport patrons to and from the satellite parking lots and training camp. They anticipate that the Sutherland High School parking lot will be serviced by eight to twelve shuttle buses, with each bus running at staggered times, about fifteen minutes apart. There will be at least one parking attendant at the Sutherland High School parking lot during each day of the lot’s use. In addition, during each night practice, there will be an additional parking attendant working to help with the expected increase in attendance. The parking attendants will help direct traffic and ensure that the parking process goes smoothly. The Bills also provide public toilets at the Sutherland High School parking lot, which are emptied and cleaned on a daily basis by a company hired by the Bills. The Building Inspector indicated that there were no problems or issues with this proposal in 2011. **Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to open the public hearing. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. *Motion carried.* **Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to close the public hearing, as there was no one wishing to speak for or against this application. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. *Motion carried.* **Motion:** Member Rubiano made a motion, seconded by Chairperson Mitchell, to approve the application for a temporary permit, with the following conditions: 1. The return route of the buses will follow Main Street to Jefferson Road to Sutherland Street. 2. The public toilets will be located on the westernmost portion of the parking lot, farthest from the street. 3. The applicant will make an effort to modify the website and printed materials to direct traffic away from the residential area of Sutherland Street. 4. The applicant will provide signage to direct traffic to exit onto Jefferson Road. 5. The applicant will instruct the parking attendants to direct patrons to exit on Jefferson Road. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. *Motion carried.* The decision was filed in the Office of the Village Clerk on July 22, 2013. ****** **Jack Sigrist, 87 South Main Street ~ Addition** **Present:** Jack Sigrist, Architect **SEQR:** Chairperson Mitchell stated that this is a Type II SEQR Action under SEQR § 617.5(c) #12 & 13. No further review required. 
*The Secretary read the legal notices that were published in the July 16, 2013 edition of the Brighton Pittsford Post:*

“Please take notice that a public hearing will be held before the Village of Pittsford Zoning Board of Appeals at the Village Hall, 21 North Main Street, Pittsford, New York, on Monday, July 22, 2013 at 7:00 pm, to consider an application made by Jack Sigrist for property located at 87 South Main Street, for an area variance to expand a nonconforming structure on a nonconforming lot pursuant to Village Code § 210-15C.”

“Please take notice that a public hearing will be held before the Village of Pittsford Planning Board at the Village Hall, 21 North Main Street, Pittsford, New York, on Monday, July 22, 2013 at 7:00 pm, to consider an application made by Jack Sigrist for property located at 87 South Main Street, for approval for the construction of an addition where the total floor area exceeds 400 square feet, pursuant to Village Code § 210-83B(16).”

**Discussion:** The applicant stated that the proposal is for construction of a first-floor mudroom addition, an expansion of a master bath over the new mudroom, relocation of a covered porch, expansion of the existing garage, and an expansion of the family room. The proposed addition will be two stories high and cover the area of the existing deck. The application is requesting: (1) a variance for a pre-existing, non-conforming, front lot line; (2) a variance for a pre-existing, non-conforming, front yard setback; (3) a variance to further extend pre-existing, non-conforming, side yard setbacks on both sides of the residence; and (4) Planning Board approval for the construction of an addition to a residential unit where the total floor area exceeds 400 square feet. The applicant presented several examples of other houses in the surrounding neighborhood with similar setbacks. The Building Inspector determined that the existing driveway exceeds the Village Code maximum of 12% impervious coverage. Board members expressed concern with the proposal to increase the amount of impervious coverage. This portion of the application will remain open.

**Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to open the public hearing. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

Joanne Minor, South Main Street, requested to view the proposed plan.

**Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to close the public hearing, as there was no one else wishing to speak for or against this application. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

**Motion:** Chairperson Mitchell made a motion, seconded by Member Rubiano, to approve the application for area variances for construction of an addition, omitting approval of the portion of the application for blacktop driveway coverage. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

The decision was filed in the Office of the Village Clerk on July 22, 2013.

**Motion:** Chairperson Mitchell made a motion, seconded by Member Maxey, to grant Planning Board approval for construction of an addition where the total floor area exceeds 400 square feet, as submitted. **Vote:** Rubiano – yes; Mitchell – yes; Maxey – yes. **Motion carried.**

The decision was filed in the Office of the Village Clerk on July 22, 2013.

**Findings of Fact:** 1. This is a pre-existing non-conforming structure and lot. 2. The proposed additions do not further encroach upon the existing side setbacks.
3. There are no undesirable changes that will be produced in the character of the neighborhood by approving this area variance. 4. The proposed structure will be compatible with other residences in the neighborhood. 5. The area variance will not have an adverse effect or impact on the physical or environmental conditions of the neighborhood or district. 6. The benefit sought cannot be achieved by some other feasible method.

**Member Items:**

**Trustee Report:**
- Mr. Galli reported that Starbucks and The Village Bakery will be working on a plan to improve the parking situation in that area.
- There are ongoing conversations regarding the landscaping at Chase Bank.
- Tess and Carlos will not be renewing the lease at that location.

**Adjournment:** There being no further business, Chairperson Mitchell adjourned the meeting at 8:45 pm.

________________________________________
Linda Habeeb, Recording Secretary
Strip seeding brings multiple benefits to busy family farm

Changing from traditional crop establishment methods to strip seeding has brought major benefits for Hebbelthwaite Farms in Leicestershire.

James Hebbelthwaite is a third-generation farmer in Leicestershire. His grandparents took on the tenancy at Bridge Farm, Elmesthorpe in 1945 and James’ parents purchased the farm in 1979. In 2002 James started a bed & breakfast pig rearing operation, rearing 5,000 pigs at a time for Cranswick. After seven years working for a contractor, James returned to the farm full-time in 2007.

Currently the Hebbelthwaites grow 100 acres of KWS Barrel winter wheat, 80 acres of Belepi, a winter wheat which can be winter- or spring-sown, together with 65 acres of spring oats for human consumption – the combination of heavy soil and lots of manure helping to produce high specific weight grains. Pig numbers are being reduced to 3,000 because of time pressures and new rules which limit the amount of manure which can be spread on the land.

Working with his father David and one full-time employee, James ditched the plough and power harrow five years ago in favour of Claydon’s Opti-Till System after visiting the arable farm of its inventor, Jeff Claydon, in West Suffolk. This holistic approach to establishing any type of seed that can be air-sown delivers high-yielding crops at low cost for maximum profitability. Instead of the drawn-out process of establishing crops conventionally, James and David Hebbelthwaite now do so much more quickly and efficiently using three pieces of Claydon equipment: a 7.5m Straw Harrow, a 3m Hybrid mounted drill and a 3m TerraBlade inter-row hoe. All of them, James emphasises, “do exactly what they say on the tin”.

“The advantages of strip seeding become more evident every season,” James enthuses. “We have seen significant reductions in time, cost and labour, continuous improvements in soil structure, a substantial reduction in weeds, and higher, more reliable yields.”

Efficient method of establishment

“Our heavy land is very difficult to manage and with only 8in of topsoil before we hit clay, ploughing could throw up some very ‘livery’ soil which would then be very difficult to break down into a seedbed. The last time we used a plough was in 2015, when the land was still compacted after the very wet 2012 and 2013 seasons.

“With just my father and me doing all the work, turning the soil over, then power-harrowing and drilling was very time-consuming and costly. We were also rearing 5,000 pigs from 7–35kg on a bed & breakfast basis, so there were lots of demands on our time and we weren’t exactly looking for things to do. It was apparent that we needed a more efficient way of establishing crops, but clearly a zero-till approach would have been a non-starter on this heavy land.

“At one time, farmers were able to spray against almost any weed or pest issue but increasing legislation and a reducing pool of ag-chem products has made the job more difficult, so we also felt that there had to be a better, more sustainable alternative.

“After seeing the Opti-Till System advertised, we looked around the Claydon farm and saw the benefits on some of the heaviest, most difficult to manage land in the country. Our independent agronomist was not too keen initially but having seen the advantages and excellent results has changed his mind, so much so that he now has several other clients who use this approach.
“We ordered a new 3m Claydon Hybrid mounted drill and began seeing the benefits from using it in the first season, particularly on oilseed rape where yields were much higher than with our previous system. Like so many other farms in the UK we were subsequently affected by cabbage stem flea beetle and the last crop of oilseed rape we harvested was in 2018. We haven’t grown it since, but after a long break we plan to give OSR another go in 2022.”

Versatile drill

The Claydon Hybrid will sow direct into stubbles or cultivated soils, both min-tilled and ploughed, with or without fertiliser placement between or in the seeded rows. With a few simple, quick modifications, it can also be used for conventional sowing, low-disturbance establishment, and zero-till seeding. This makes it a much more versatile, cost-effective solution than purchasing both a strip till drill and a specialist low-disturbance model. In standard form, the drill’s unique leading tine busts out compaction, aerates the soil and creates drainage/tilth in the seeding and rooting zones. The seeding tine which follows then creates more tilth and places the seed under it, at the chosen depth, above the drainage channel.

Adding to the benefits

“A year after buying the Claydon Hybrid drill we added a 7.5m Claydon Straw Harrow, which has been amazing. Until you experience the benefits it would be very easy to dismiss it as not doing very much, but nothing could be further from the truth. It is a very effective and efficient piece of equipment.”

A key part of establishing crops successfully is to achieve an optimum tilth as soon as possible after harvest so volunteers and weed seeds can germinate. Even if no green shoots are visible on the surface, weeds and volunteers will be growing under the straw. The conventional min-till approach can be problematic because moving 100–125mm of soil will significantly slow germination or bury weed/volunteer seeds so deep that they do not germinate until after the crop emerges, creating major cost and control issues. Deeper cultivations also present a significant weather risk, as heavy rain will reduce the soil to a sticky mess with no structure or ability to support following machinery. The surface can also seal over and become anaerobic, creating issues with water ‘ponding’ or run-off. In extreme cases, full cultivations may initially be necessary to put right the impact of min-till, in which case weed/volunteer seeds will be buried even more deeply, making control impossible and providing ideal conditions for slugs.

The Claydon Straw Harrow distributes chopped straw evenly and creates a fine, level 2–3cm tilth, providing the high-humidity conditions necessary for weeds and volunteers to germinate rapidly. Straw harrowing also halts the soil’s natural capillary action, preventing water from being drawn up to the surface and the surface from drying out with the action of wind and sun to form a hard, impermeable layer. Using it when weeds and volunteers are less than 2cm tall will kill 70% of them, so repeating this several times will dramatically reduce their numbers and slug populations, often to the point where fewer chemicals are needed and there is no need to apply slug pellets. This fast, low-cost operation is highly effective, James confirms.

After crops have been harvested by the farm’s John Deere W540 combine and the contractor’s MF 2170 has packaged all the straw into 500kg bales, every acre is covered with the Straw Harrow, often multiple times to maximise the benefits.
It flies over the fields, distributing any loose straw that the baler might have missed, killing slugs and their eggs, at the same time creating a fine 1–2cm tilth which encourages weed seeds and volunteers to germinate rapidly. “For spring crops, we just use the Straw Harrow in the autumn, spray off any residues and either do another pass with the Straw Harrow or shallow disc, no more than 2in deep, before drilling at the end of March or early April. The manure from the pig unit does wonders for the condition of the soils and crops but spreading it does cause compaction, so in those areas we tend to subsoil to 9in. “We run the 3m Claydon Hybrid behind a 235hp John Deere 6195, which provides plenty of power even on our heavy land. The front tines run 15cm deep for establishing OSR and the seed is drilled at 1cm using the standard Claydon A-Share. For cereals the leading tine runs at 10cm and seed goes in at 2–2.5cm. That works well and instead of leaving tramlines we use the John Deere’s GPS guidance when applying pre-emergence products at 24m, so all other operations follow the same wheelings. “After six years of using Claydon Opti-Till the soil structure has improved so much that it is unrecognisable compared with how it was before. Even where we have just used the Straw Harrow after harvest the soil is alive with worms and their casts are all over the surface, so there must be an unbelievable number in the soil profile. “The other major benefit is that it encourages plants to develop unbelievable rooting structures which we simply never saw with a conventional establishment system, so even very dry conditions are not a problem as roots penetrate deep into the soil. In 2020, we had no rain from April until June, yet the crops looked exceptional throughout. “The improved structure and weather resilience of the soil has made following operations such as spraying and fertiliser application much easier and more predictable, even after heavy rain. During the very wet 2019/2020 season that was a huge positive, as muck was spread early, autumn sown crops were drilled before the really heavy rain fell and subsequently developed their full potential.” A novel approach For the last few years, the Hebbelthwaites have drilled cereals at 250kg/ha – double the seed rate used just a few years ago, which has proven very effective in crowding out weeds, both those growing in and between the rows. This has been enhanced by the purchase of a 3m Claydon TerraBlade inter-row hoe; this low-cost, reliable piece of equipment being acquired in 2020 from dealer Sharmans’ Melton Mowbray branch. The TerraBlade is an extremely effective, low-cost, mechanical method of controlling weeds growing between the rows in band-sown crops. By keeping the unseeded rows clear of weeds during the early stages of crop growth, competition for nutrients, light, air, and water is reduced, enabling the young plants to grow strong and healthy, and helping to maximise crop yields. It eliminates weeds reliably, safely without using chemicals and clears up any that were missed by ag-chems, or where such products cannot be used, as in organic systems. This drastically lowers the potential for carry-over of weed seeds and the risk of more resistant types developing. Claydon’s TerraBlade range now includes five models from 3–8m wide, designed for use on a Cat II front linkage – allowing for effective manual steering of the hoe blades between seeded rows. 
“Although herbicides remain an essential part of modern agriculture, their cost is continually increasing, and they seem to be becoming less effective,” James states. “Our experience is that if we apply a herbicide in the spring it will take out 50–60% of target weeds, but not touch the rest, so that is why the TerraBlade is such an effective tool to have on the farm. “The only addition that we have made to the standard specification of our Claydon Hybrid has been to add two additional depth wheels, which have made the drill even more stable and further improved consistent seed placement. Our latest purchase is the Claydon Twin-Tine kit, which we will use for drilling spring crops. Overall, we have been delighted with the Claydon Opti-Till System.”
BELIEVE

“Do you believe in the Son of Man?” JOHN 9:35B

Mass Schedule
SATURDAY VIGIL: 4:00 PM
SUNDAY: 8:00, 9:30, 11:30 AM
WEEKDAYS (Chapel): Monday - Friday, 7:30 AM
HOLY DAYS: Vigil - 7:00 PM; 9:00 AM and 7:00 PM
HOLIDAYS: 9:00 AM
CONFESSION: Sat. 3:00 - 3:45 PM or by appointment

Business Office Hours
Monday - Thursday: 8:30 AM to 4:30 PM
Friday: 8:30 AM to 2:30 PM
Saturday: 10:00 AM to 1:00 PM
Sunday: 9:00 AM to 1:00 PM
(Office closed weekdays from 12:00-1:00pm for lunch)

Parish Staff
Pastor: Rev. Robert B. McDermott (ext. 3)
Permanent Deacon: Rev. Mr. Daniel Bingnear
Weekend Assistant: Rev. Francis X. Devlin, O.S.A.
Business Manager: Mr. Daniel Kinnik (ext. 4)
Director of Religious Education & Music Ministry: Mrs. Kathleen Aaronson (ext. 5)
PREP Secretary: Mrs. Patricia Donnelly (ext. 8)
Parish Admin. Assistant: Mrs. Regina Robinson (ext. 6)
Communications Secretary: Mrs. Renee Devine (ext. 2)
Maintenance: Bill Casey (ext. 7)
Pastor Emeritus: Rev. John Sibel

4225 CHICHESTER AVENUE • BOOTHWYN, PA 19061
Website: stjohnfisherchurch.com
Facebook: Saint John Fisher
Phone: (610) 485-0441
E-mail: firstname.lastname@example.org

TODAY IS SUNDAY, MARCH 22, THE FOURTH SUNDAY OF LENT. The Gospel is John 9:1-41. “Do you believe in the Son of Man?” Jesus wants to motivate each one of us to see the truth. After developing a relationship with Jesus, the blind man “sees” someone very special. The Pharisees, due to the blindness caused by their ignorance, and need for self-preservation, remain blind. Presuppositions, prejudices, assumptions, and our needs can easily blind us to truth. We see what we want or need to see and not what is really there. Our stubbornness continues to convince us that we are right and that our vision is perfect. Only God can complete the picture. Look around at our world. So much of what is happening today is due to the reluctance of folks to allow themselves to see what is really there. Many react to what life presents to them more with the lenses of ignorance than lenses of clarity. The Gospel carries great transformative value. With it, God corrects our vision and replaces our limited sight with the fullness of his sight. God opens our eyes so that we can see that it is not about preserving what we have created but of living in the immensity and wonder of God’s kingdom. Through a simple, loving relationship with God, we can break through the tethers of prejudice, eradicate fear, dispel the darkness of hatred and sin, discover freedom, live in peace, work for justice, be effective stewards of creation, assist the migrant and the immigrant, and safeguard our economic systems and policies so that they truly serve all of God’s children. The truth is much bigger than what our limited sight believes it to be. Do not be afraid. Be open and be humble enough to know that you need help. Allow God to work in and through you.

ONLINE GIVING WITH WESHARE

Now would be a great time for you to consider making your weekly offering through the online giving platform WESHARE. It helps you in four ways. 1) You don’t have to remember to bring an envelope to church, mail it in, or drop it by the office. 2) You can manage your budget and control when your donation is made. 3) You can access a record of your contributions at any time. 4) You can be confident in the security of your financial information. It benefits St. John Fisher Parish, too. Your support is consistent throughout the year, allowing the parish to budget with confidence.
Online transactions mean less time spent opening, counting, and depositing funds, which also lessens the chance of errors. Signing up is simple.
- Go to www.stjohnfisherchurch.com.
- Scroll to the very bottom of the home page and click the button for “Give Online with WESHARE.”
- A new, secure page will open. Scroll down to “Sunday Collection.”
- Click the “Make a Donation” button.
- You have a choice between “Recurring” and “One Time” donations…click the button you want.
- A new page will open where you can fill in the amount of your offering and choose the way you wish to pay.

As always, we are thankful for your continued support of St. John Fisher Parish.

ST. JOHN FISHER PARISH FUNCTIONS DURING THE COVID-19 OUTBREAK

This week, Archbishop Nelson J. Pérez suspended all public Masses in the Archdiocese of Philadelphia effective March 18th, until further notice. “As the Archbishop of Philadelphia, my first priority is to ensure the health and welfare of those entrusted to the pastoral and temporal care of our Church,” he said. “I want to be very clear that the Catholic Church in Philadelphia is not closing down. It is not disappearing and it will not abandon you. Time and again as our history has proven, the Church has risen to meet great challenges and provide a beacon of hope and light.”

Going forward, here is the plan for the activities of the parish:
♦ There will be no public Sunday or daily Masses at St. John Fisher Parish until the Archdiocese determines the need for the suspension has passed. Father McDermott will celebrate Mass privately to honor the requested Mass intentions of the day and for the spiritual good of the parish.
♦ The Church will be open for prayer on Sundays from 8am until noon. Please stay at home if you are ill. Please spread out while you are in Church.
♦ The Chapel will be open for prayer between 8:30am-noon and 1-4:30pm on weekdays (until 1:00pm on Friday). Please enter through the Parish Center.
♦ Funeral Masses will be held as needed, but there will be no viewings held in the Church and no eulogies at Church, and we will follow the CDC guidelines limiting attendance to no more than 50 people at a public event. Music will be provided.
♦ Baptisms can be performed if scheduled in advance, with only one family per ceremony. We will follow the CDC guidelines limiting attendance to no more than 50 people at a public event.
♦ The Parish Penance Service on March 23 has been suspended; confessions will be heard by appointment.
♦ Stations of the Cross services have been suspended for the rest of Lent.
♦ Holy Family Regional School and the PREP program are suspended until March 23; an extension may be necessary, so please check the website.
♦ Holy Communion for the homebound and Eucharistic visits to the hospital are suspended.
♦ A Priest will always be available for a visit to administer the Last Rites and Viaticum to those in danger of death.
♦ Parish facilities cannot be used for social meetings or planning committees until further notice.
♦ The Parish Business Office will be open during the week from 8:30am until 4:30pm (1:00pm on Fridays); closed from noon-1:00 for lunch. The Office will be closed on weekends.

The parish website, stjohnfisherchurch.com, is the primary means of communication with you, and will be updated when necessary. The weekly bulletin can be found at the bottom of the home page. Thank you for your understanding!

SAVE THE DATE! SJF’S ANNUAL FLEA MARKET – JUNE 13, 2020

Spring is finally here!
As you do your spring cleaning, please set aside your treasures to donate to the Annual Flea Market. The gift basket raffle has been very popular and a tremendous success! This year, we are asking for donations of new items, gift cards, small cash donations, or a completed theme basket to contribute to the raffle. We have a talented young lady that will turn the donated items into a beautiful prize, and she is eager to begin this project! Please contact Juanita at 610-485-9344 to make arrangements to drop off donations for the baskets. The drop-off dates for used items to sell at the flea market will be June 11 and 12. More details will be coming as we get closer to the date. Thank you in advance for your support. If you have any questions, please call Juanita or Susan (610-350-7196).

ANNUNCIATION OF THE LORD: MARCH 25

This week, we celebrate the announcement by the Archangel Gabriel to the Blessed Virgin Mary that she would conceive and become the mother of Jesus, the Jewish messiah and Son of God. Gabriel told Mary to name her son Jesus, meaning “the Lord is salvation”. The feast of the Annunciation, now recognized as a solemnity, was first celebrated in the fourth or fifth century. Its central focus is the Incarnation: God has become one of us. Mary has an important role to play in God’s plan. From all eternity, God destined her to be the mother of Jesus, thus closely related to him in the redemption of the world. Although confused and afraid, Mary trusted God and replied to the Angel Gabriel, “Behold, I am the handmaid of the Lord. May it be done to me according to your word.”

CHANCE OF A LIFETIME (COAL) RAFFLE IS BACK!

The Knights of Columbus Council 13710 is selling Chance of a Lifetime raffle tickets this year, beginning March 14. There are over 16 prizes, worth over $65,000, including a new car, Honda Scooters, vacations, and tech gear. Winners can choose the cash equivalency for any prize. This is a state-wide raffle, sold only in Pennsylvania, with the proceeds used for scholarships and grants to fund the Council’s charitable work. Chances are $1 each or a book of 8 for $5. Winners will be drawn on April 30. See any member of the Knights of Columbus to purchase a ticket!

FINANCIAL OFFERING: STEWARDSHIP OF TREASURE

MARCH 15

| Total | $ 6,176 |
|-------|---------|
| Number | 266 |

Thank you for your faithful support!

CONGRATULATIONS TO OUR CONFIRMANDI!

On Thursday evening, March 12, the parish welcomed Most Reverend John McIntyre, who administered the Sacrament of Confirmation for our young parishioners. Congratulations to the following candidates:

Sarah Rose Begley, Jaiden Anthony Bieller, Ashley Rose Celestino, Justin Boris Comers, Ryan Michael Comers, Timothy Sebastian Comers, Audrey Collette Dever, Anthony Andrew Giribaldi, Brian Charles Lewis, Talisa Theresa Lorenzo, Natalie Elizabeth Mignogna, Brooke Maria Montgomery, Domenic Joseph Nardini, Ashlynn Elizabeth Roy, Jamie Cecelia Rebarchak, Aubrey Francis Schneider, Maura Mary Smargiassi, Nicholas Andrew Smyl, Giovonna Elizabeth Warholic, Rachel Elizabeth Weins, Jacob James Wilkins, Anthony Joseph Zagame

The parish and the Confirmation class wish to extend their thanks to all who have helped prepare the candidates for this Sacrament, especially the teachers of our Catholic School and our catechists and aides of the PREP program.

IMPORTANT NEWS ABOUT BASEBALL MANIA

Major League Baseball took the unprecedented step to postpone the start of the regular season by at least two weeks because of the risk of spreading the Coronavirus.
There may be a delayed start to the season, but Baseball Mania is guaranteeing a full 10 weeks of winners for tickets sold this spring. If the season starts in Week 3, your ticket starts with Week 3. When Week 10 is finished, winners will be calculated for the next two weeks using the blocks for Weeks 1 and 2. St. John Fisher Parish will continue selling tickets through at least June 7. Thank you for supporting this parish fundraiser!

SEAFOOD TREATS DURING LENT

Council 13710 of the Knights of Columbus will continue to take orders for Capt’n Chucky’s award-winning seafood treats. The last week to order is March 30. Please contact Jack Fitzgerald directly at 267-254-2114 or 610-497-2370 to place an order, or drop it by the Parish Center Office before 10 a.m. on Mondays.
⇒ Jumbo Lump Crab Cakes, made fresh, 4/package for $23, or 8/package for $42. Can be frozen to use later.
⇒ Crab Fritters appetizers, 50 pieces/bag for $18
⇒ Maryland Crab Soup, quart/$16. Mildly spicy, packed with crab meat, sold frozen.
Orders for the week are placed at 10:45am on Monday, and must be picked up in the Parish Center Hall on Friday afternoon between 3:00-6:00pm.

WEEKLY ACTIVITIES

ADULT CHOIR PRACTICE: Wednesdays at 7:15pm in Church
YOUTH CHOIR PRACTICE: Sundays at 10:35 – 11:10am in Church
CONTEMPORARY CHOIR: Every other Tuesday evening at 7:00pm
RESURRECTION CHOIR: Sings for funeral Masses. No rehearsals.
MEN’S GATHERING: Mondays at 7:00pm; meetings are held in the Meeting Room in the Church.
ADORATION OF THE BLESSED SACRAMENT: Mondays after the 7:30am Mass until noon in the Chapel. Benediction follows at noon.
THE ROSARY AND DIVINE MERCY: Daily after the 7:30am Mass
RITE OF CHRISTIAN INITIATION FOR ADULTS (R.C.I.A.): Wednesday at 7:00pm in the Parish Center.
BULLETIN DEADLINE: Items due by noon on Tuesday. Email them to email@example.com

MONTHLY ACTIVITIES

PARISH COUNCIL: Meets on the first Thursday of the month
FINANCE COUNCIL: Meets on the second Tuesday of the month
ADULT BIBLE STUDY: Meets every other Monday at 7:00pm in the Parish Center; Spring study begins Jan. 20.
ST. JOHN’S ON THE HILL SENIORS: Second and fourth Thursdays at 12:30pm in the Church Hall
KNIGHTS OF COLUMBUS HOLY SAVIOUR COUNCIL 13710: Officers’ Meeting, 1st Monday of the month at 7:00pm; General membership meeting, 3rd Thursday of the month at 7:00pm. Meetings held at Immaculate Conception in the Hall. Contact Joe DiMarco, Grand Knight, at 302-218-5261 for information.

The Sanctuary Candle burns this week in memory of Catherine Blee through the generosity of Regina Tuzio.

SACRAMENT OF BAPTISM: Celebrated on the First, Second and Fourth Sundays of each month at 12:45pm. Pre-Jordan baptismal instruction is held on the Third Sunday at 12:45pm. Please call or visit the Parish Office to arrange baptisms.
SACRAMENT OF RECONCILIATION: Saturdays, 3:00 - 3:45pm; other times by appointment.
SACRAMENT OF MATRIMONY: One party must be a registered member of Saint John Fisher Parish. Couples must make arrangements with the Pastor at least six months prior to the intended date. Weddings can be held at St. John Fisher or at Immaculate Conception Church. Couples are required to attend the Archdiocesan Sacramental Preparation Program for Marriage (four sessions) or a Pre-Cana program.
PARISH REGISTRATION: We welcome new members to Saint John Fisher! If you are new to our parish, please call or visit the Parish Office to register.
LETTERS OF ELIGIBILITY: If you need a Letter of Eligibility from Saint John Fisher Parish to be a sponsor for Baptism or Confirmation, you must be - a registered member of the parish, - at least 16 years of age, - a practicing Catholic (have received the Sacraments and attend Mass regularly), - If married, you must be in a ‘valid’ marriage. The pastor is unable to sign a letter unless you meet these requirements. To request a letter, please speak with Father McDermott after any Mass or call the Parish Office at 610-485-0441, ext. 2. PARISH ON-LINE GIVING: Visit the parish website, www.stjohnfisherchurch.com and click on the We Share icon to sign up for the electronic giving program. PARISH RELIGIOUS EDUCATION PROGRAM (PREP): The PREP program provides religious education for public school children from September through May. Please call the PREP office at 610-485-0441, ext. 5, for more information and to register. HOLY FAMILY REGIONAL CATHOLIC SCHOOL: Grades Pre-K through Eight. St. John Fisher parishioners support Holy Family Regional Catholic School. Children from the parish who enroll there receive the supporting parish tuition rate. Charles Hughes, Principal Kristy Cobb, Business Manager 610-494-0147 3265 Concord Road, Aston, PA www.holyfamilyaston.org **May They Rest in Peace** **ROCCO LUBERTI** “Come me to me, all of you who are weary and carry heavy burdens, and I will give you rest.” **Please Pray for Our Sick** Frank Taylor, Fred Neeson, Julie Cavachio, Frank English, Mary Lou Barile, Michael Goodnight, Catherine Helm, Hugh Casey, Cora Sitaras, Julie Migliari, Caroline Henry, Gail McCafferty, JoAnn Bezold, Rev. Greg Hickey, Christine Smith, Roseanne Cash, Anthony Valerio, Linda Johnston, Hazel Neese, Ralph Natale, Norman Falkowski, Jr., Deborah Knight, Diane Kelly, Baby Robert Sitaras, Ruth Jobe, Ruth Ann Moffett, Chip Natrin, Connie Carney, Billie Hobdell, Gloria Patterson, John Grady, Bryan Snyder, Rose Liott, Betty Ann Deshullo, Ann Walley, Gladys Kinsler, Frank Breitmayer, Mary Giambri, Velia Brill, Peg Camero, Brian Allison, Pasquale Mignogna, Rob Busch, Judy & Jerry Frezetti, Paula Davis, Pat Oakes, Hank Granville, Colleen Hughes, John Rossi, May Graves, Amy Calcagno, Susan Curl, Marie Barker, Veronica & Joe Egan, Helen Brogley, Judy Giovan, Kevin Neary, Tom O’Leary, Nancy Finn, Barbara Rogers **PRAYERS FOR THE INFIRM:** To add a name to the prayer list, please contact Renee Devine at 610-485-0441, ext. 2. People remain on our prayer list for 30 days, except for those with long-term medical issues. If you wish to add a person again, after 30 days, please contact the office. --- **MASS INTENTIONS** | Day | Date | Time | Intentions | |-----------|----------|--------|-------------------------------------------------| | Saturday | Mar. 21 | 4:00 PM| Rose Tuzio | | Sunday | Mar. 22 | 8:00 AM| People of the Parish & Visitors | | | | 9:30 AM| Jerry Lester | | | | 11:30 AM| Patricia Oakes 5th Anniv. | | Monday | Mar. 23 | 7:30 AM| George J. Ershaw | | Tuesday | Mar. 24 | 7:30 AM| Chris Vandenberg | | Wednesday | Mar. 25 | 7:30 AM| Eileen Myrtetus | | Thursday | Mar. 26 | 7:30 AM| Phillip Cerami | | Friday | Mar. 27 | 7:30 AM| Edna Mielcarek | | Saturday | Mar. 28 | 4:00 PM| Charles Brogley, Sr. | | Sunday | Mar. 29 | 8:00 AM| Margaret & Robert Curl | | | | 9:30 AM| People of the Parish & Visitors | | | | 11:30 AM| Janet Kadyszewski | Sometimes you’re sick, out of town, or go to Sunday Mass with a family member who lives in another parish. Please remember your weekly contribution. 
The parish counts on it!
Categories of Containers

Michael Abbott\textsuperscript{1}, Thorsten Altenkirch\textsuperscript{2}, and Neil Ghani\textsuperscript{1}

\textsuperscript{1} Department of Mathematics and Computer Science, University of Leicester
\textsuperscript{2} School of Computer Science and Information Technology, Nottingham University

Abstract. We introduce the notion of containers as a mathematical formalisation of the idea that many important datatypes consist of templates where data is stored. We show that containers have good closure properties under a variety of constructions including the formation of initial algebras and final coalgebras. We also show that containers include strictly positive types and shapely types but that there are containers which do not correspond to either of these. Further, we derive a representation result classifying the nature of polymorphic functions between containers. We finish this paper with an application to the theory of shapely types and refer to a forthcoming paper which applies this theory to differentiable types.

## 1 Introduction

Any element of the type $\text{List}(X)$ of lists over $X$ can be uniquely written as a natural number $n$ given by the length of the list, together with a function $\{1,\ldots,n\} \to X$ which labels each position within the list with an element from $X$:
$$n : \mathbb{N}, \quad \sigma : \{1..n\} \to X.$$
Similarly, any binary tree can be described by its underlying shape, which is obtained by deleting the data stored at the leaves, together with a function mapping the positions in this shape to the data. More generally, we are led to consider datatypes which are given by a set of shapes $S$ and, for each $s \in S$, a family of positions $P(s)$. This presentation of the datatype defines an endofunctor $X \mapsto \coprod_{s \in S} X^{P(s)}$ on $\textbf{Set}$. In this paper we formalise these intuitions by considering families of objects in a locally cartesian closed category $\mathbb{C}$, where the family $s : S \vdash P(s)$ is represented by an object $P \in \mathbb{C}/S$, and the associated functor $T_{S \triangleright P} : \mathbb{C} \to \mathbb{C}$ is defined by $T_{S \triangleright P} X \equiv \Sigma s : S. (P(s) \Rightarrow X)$.

We begin by constructing a category $\mathcal{G}$ of "container generators", i.e. syntactic presentations of shapes and positions, and define a full and faithful functor $T$ to the category of endofunctors of $\mathbb{C}$. Given that polymorphic functions are natural transformations, fullness and faithfulness allow us to classify polymorphic functions between container functors in terms of their action on the shapes and positions of the underlying container generators. We show that $\mathcal{G}$ is complete and cocomplete and that limits and coproducts are preserved by $T$. This immediately shows that i) container types are closed under products, coproducts and subset types; and ii) this semantics is compositional in that the semantics of a datatype is constructed canonically from the semantics of its parts. The construction of initial algebras and final coalgebras of containers requires, firstly, the definition of containers with multiple parameters and, secondly, a detailed analysis of when $T$ preserves limits and colimits of certain filtered diagrams. We conclude the paper by relating containers to the shapely types of Jay and Cockett (1994) and Jay (1995).
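Before formalising this, the shapes-and-positions reading of lists can be made concrete in code. The following Haskell fragment is a minimal sketch of our own (the names `ListC`, `mapC`, `toList` and `fromList` are illustrative, and the dependency of positions on the shape is kept as a programmer-side invariant rather than in the types):

```haskell
-- The container extension T X = Sigma s : S. (P s => X), specialised
-- to lists: the shape is a length, the positions are the indices below
-- it.  Invariant: 'at' is only ever applied to 0 <= i < len.
data ListC x = ListC { len :: Int, at :: Int -> x }

-- The functorial action is post-composition on the position function.
mapC :: (x -> y) -> ListC x -> ListC y
mapC g (ListC n f) = ListC n (g . f)

toList :: ListC x -> [x]
toList (ListC n f) = map f [0 .. n - 1]

fromList :: [x] -> ListC x
fromList xs = ListC (length xs) (xs !!)
```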
The definition of shapely types does not require the hypothesis of local cartesian closure which we assume, but when $\mathbb{C}$ is locally cartesian closed then it turns out that the shapely types are precisely the functors generated by the "discretely finite" containers. A container is discretely finite precisely when each of its objects of positions is locally isomorphic to a finite cardinal. Further, we gain much by the introduction of extra categorical structure, e.g. the ability to form initial algebras and final coalgebras of containers and the representation result concerning natural transformations between containers. Note also that, unlike containers, shapely types are not closed under the construction of coinductive types (since the position object of an infinite list cannot be discretely finite).

In this paper we assume that $\mathbb{C}$ is locally finitely presentable (lfp), hence complete and cocomplete, which excludes several interesting examples including Scott domains and realisability models. Here we use the lfp structure for the construction of initial algebras and final coalgebras. In future work we expect to replace this assumption with a more delicate treatment of induction using internal structure.

Another application of containers is as a foundation for generic programming within a dependently typed programming framework (Altenkirch and McBride, 2002). An instance of this theme, the derivatives of functors as suggested in McBride (2001), is developed in Abbott et al. (2003) using the material presented here.

The use of the word container to refer to a class of datatypes can be found in Hoogendijk and de Moor (2000), who investigated them in a relational setting: their containers are actually closed under quotienting. Containers as introduced here are closely related to analytic functors, which were introduced by Joyal; see Hasegawa (2002). Here we consider them in a more general setting by looking at locally cartesian closed categories with some additional properties. In the case of $\textbf{Set}$, containers are a generalisation of normal functors; closing them under quotients would lead to a generalisation of analytic functors.

In summary, this paper makes the following contributions:
- We develop a new and generic concept of what a container is which is applicable to a wide range of semantic domains.
- We give a representation theorem (Theorem 3.4) which provides a simple analysis of polymorphic functions between datatypes.
- We show a number of closure properties of the category of containers which allow us to interpret all strictly positive types by containers.
- We lay the foundation for a theory of generic programming; a first application is the theory of differentiable datatypes as presented in Abbott et al. (2003).
- We show that Jay and Cockett's shapely types are all containers.

## 2 Definitions and Notation

This paper implicitly uses the machinery of fibrations (Jacobs 1999; Borceux 1994, chapter 8; etc.) to develop the key properties of container categories, and in particular the fullness of the functor $T$ relies on the use of fibred natural transformations. This section collects together the key definitions and results required in this paper.

Given a category with finite limits $\mathbb{C}$, we refer to the slice category $\mathbb{C}/A$ over $A \in \mathbb{C}$ as the fibre of $\mathbb{C}$ over $A$. Pullbacks in $\mathbb{C}$ allow us to lift each $f : A \to B$ in $\mathbb{C}$ to a pullback or reindexing functor $f^* : \mathbb{C}/B \to \mathbb{C}/A$.
Assigning a fibre category to each object of $\mathbb{C}$ and a reindexing functor to each morphism of $\mathbb{C}$ is (subject to certain coherence equations) a presentation of a fibration over $\mathbb{C}$. Composition with $f$ yields a functor $\Sigma_f : \mathbb{C}/A \to \mathbb{C}/B$ left adjoint to $f^*$. $\mathbb{C}$ is locally cartesian closed iff each fibre of $\mathbb{C}$ is cartesian closed, or equivalently, if each pullback functor $f^*$ has a right adjoint $f^* \dashv \Pi_f$.

Each exponential category $\mathbb{C}^I$ can in turn be regarded as fibred over $\mathbb{C}$ by taking the fibre of $\mathbb{C}^I$ over $A \in \mathbb{C}$ equal to $(\mathbb{C}/A)^I$. Now define $[\mathbb{C}^I, \mathbb{C}^J]$ to be the category of fibred functors $F : \mathbb{C}^I \to \mathbb{C}^J$ and fibred natural transformations, where $F$ consists of functors $F_A : (\mathbb{C}/A)^I \to (\mathbb{C}/A)^J$ such that $(f^*)^I F_B \cong F_A(f^*)^I$ for each $f : A \to B$, and similarly for natural transformations.

Write $a : A \vdash B(a)$ or even just $A \vdash B$ for $B \in \mathbb{C}/A$. We'll write $a : A, b : B(a) \vdash C(a, b)$ as a shorthand for $(a, b) : \Sigma_A B \vdash C(a, b)$. When dealing with a collection $A_i$ for $i \in I$, we'll write this as $(A_i)_{i \in I}$ or $\vec{A}$ or even just $A$. Write $\Sigma a : A$ and $\Pi a : A$ for the $\Sigma$ and $\Pi$ types corresponding to the adjoints to reindexing. Substitution in variables will be used interchangeably with substitution by pullback, so $A \vdash f^* B$ may also be written as $a : A \vdash B(f(a))$ or $a : A \vdash B(fa)$. The signs $\coprod$ and $\prod$ will be used for coproducts and products respectively over external sets, while $\Sigma$ and $\Pi$ refer to the corresponding internal constructions in $\mathbb{C}$. See Hofmann (1997) for a more detailed explanation of the interaction between type theory and semantics assumed in this paper.

Limits and colimits are fibred iff they exist in each fibre and are preserved by reindexing functors. Limits and colimits in a locally cartesian closed category $\mathbb{C}$ are automatically fibred. This useful result allows us to omit the qualification that limits and colimits be "fibred" throughout this paper.

When $\mathbb{C}$ is locally cartesian closed say that coproducts are disjoint (or equivalently that $\mathbb{C}$ is extensive)\footnote{For general $\mathbb{C}$, coproducts are disjoint iff coprojections are also mono, and $\mathbb{C}$ is extensive iff coproducts are disjoint and are preserved by pullbacks.} iff the pullback of distinct coprojections $\kappa_i : A_i \to \coprod_{i \in I} A_i$ into a coproduct is always the initial object 0. Henceforth, we'll assume that $\mathbb{C}$ has finite limits, is locally cartesian closed and has disjoint coproducts. The following notion of "disjoint fibres" follows from disjoint coproducts.

**Proposition 2.1.** If $\mathbb{C}$ has disjoint coproducts then the functor $\tilde{\kappa}^* : \mathbb{C}/\coprod_{i \in I} A_i \to \prod_{i \in I} (\mathbb{C}/A_i)$, taking $\coprod_{i \in I} A_i \vdash B$ to $(A_i \vdash \kappa_i^* B)_{i \in I}$, is an equivalence. Say that $\mathbb{C}$ has disjoint fibres when this holds. □

Write $\coprod : \prod_{i \in I} (\mathbb{C}/A_i) \to \mathbb{C}/\coprod_{i \in I} A_i$ for the adjoint to $\tilde{\kappa}^*$ and $-\dagger-$ for the binary case. Note that $\coprod_{i \in I} B_i \cong \coprod_{i \in I} \Sigma_{\kappa_i} B_i$ for $(A_i \vdash B_i)_{i \in I} \in \prod_{i \in I} (\mathbb{C}/A_i)$.
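Since the adjoints $\Sigma$ and $\Pi$ to reindexing drive everything that follows, it may help to see them in $\textbf{Set}$. The following Haskell fragment is a small sketch of our own, restricted to finite families (the names `Family`, `sigma` and `piSections` are invented for illustration): a family over $A$ lists the fibre over each point, $\Sigma$ is the tagged union of the fibres, and $\Pi$ is the set of choice functions.

```haskell
-- A family over a assigns to each base point its (finite) fibre.
type Family a b = a -> [b]

-- Sigma: the disjoint union of the fibres, tagged by the base point.
sigma :: [a] -> Family a b -> [(a, b)]
sigma as fam = [ (x, y) | x <- as, y <- fam x ]

-- Pi: the choice functions, represented here by their graphs.
piSections :: [a] -> Family a b -> [[(a, b)]]
piSections []       _   = [[]]
piSections (x : xs) fam =
  [ (x, y) : rest | y <- fam x, rest <- piSections xs fam ]
```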
The following lemma collects together some useful identities which hold in any category considered in this paper.

**Lemma 2.2.** For extensive locally cartesian closed $\mathbb{C}$ the following isomorphisms hold (IC stands for intensional choice, Cu for Curry and DF for disjoint fibres):
\[ \Pi a : A. \Sigma b : B(a). C(a, b) \cong \Sigma f : (\Pi a : A. B(a)). \Pi a : A. C(a, fa) \tag{IC1} \]
\[ \prod_{i \in I} \Sigma b : B_i. C_i(b) \cong \Sigma a : \prod_{i \in I} B_i. \prod_{i \in I} C_i(\pi_i a) \tag{IC2} \]
\[ \Pi a : A. (B(a) \Rightarrow C) \cong (\Sigma a : A. B(a)) \Rightarrow C \tag{Cu1} \]
\[ \prod_{i \in I} (B_i \Rightarrow C) \cong \left(\coprod_{i \in I} B_i\right) \Rightarrow C \tag{Cu2} \]
\[ \left(\coprod_{i \in I} B_i\right)(\kappa_i a) \cong B_i(a) \tag{DF1} \]
\[ \coprod_{i \in I} \Sigma a : A_i. C(\kappa_i a) \cong \Sigma a : \coprod_{i \in I} A_i. C(a) \tag{DF2} \]

For technical convenience, a choice of pullbacks is assumed in $\mathbb{C}$ (this ensures that our fibrations are cloven). Finally, note that we make essential use of classical set theory with choice in the meta-theory in theorem 5.6 and proposition 6.6. It should be possible to avoid this dependency by developing more of the theory internally to $\mathbb{C}$.

## 3 Basic Properties of Containers

The basic notion of a container generator is a dependent pair of types $A \vdash B$ creating a functor $T_{A \triangleright B} X \equiv \Sigma a : A. (B(a) \Rightarrow X)$. In order to understand a morphism of containers, consider the map $\text{tail} : \text{List} X \to 1 + \text{List} X$ taking the empty list to 1 and otherwise yielding the tail of the given list (figure: $\text{tail}$ sends the list $x_1, x_2, x_3$ to the list $x_2, x_3$, with each position of the result pointing back to a position of the source).

This map is defined by i) a choice of shape in $1 + \text{List} X$ for each shape in $\text{List} X$; and ii) for each position in the chosen shape a position in the original shape. Thus a morphism of containers $(A \vdash B) \to (C \vdash D)$ is a pair of morphisms $(u : A \to C, f : u^* D \to B)$.

With this definition of a category $\mathcal{G}$ of container generators we can construct a full and faithful functor $T : \mathcal{G} \to [\mathbb{C}, \mathbb{C}]$ and show the completeness properties discussed in the introduction. However, when constructing fixed points it is also necessary to take account of containers with parameters, so we define $T : \mathcal{G}_I \to [\mathbb{C}^I, \mathbb{C}]$ for each parameter index set $I$. For the purposes of this paper the index set $n$ or $I$ will generally be a finite set, but this makes little difference. Indeed, it is straightforward to generalise the development in this paper to the case where containers are parameterised by internal index objects $I \in \mathbb{C}$: when $\mathbb{C}$ has enough coproducts nothing is lost by doing this, since $\mathbb{C}^I \simeq \mathbb{C}/\coprod_{i \in I} 1$. This generalisation will be important for future developments of this theory, but is not required in this paper.

**Definition 3.1.** Given an index set $I$ define the category of container generators $\mathcal{G}_I$ as follows:
- Objects are pairs $(A \in \mathbb{C}, B \in (\mathbb{C}/A)^I)$; write this as $(A \triangleright B) \in \mathcal{G}_I$.
- A morphism $(A \triangleright B) \to (C \triangleright D)$ is a pair $(u, f)$ for $u: A \to C$ in $\mathbb{C}$ and $f: (u^*)^I D \to B$ in $(\mathbb{C}/A)^I$.
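A hedged Haskell rendering of this definition, monomorphised to the tail example above (our own encoding; `uTail` and `fTail` are illustrative names, and the dependent types are approximated by `Int` invariants):

```haskell
-- Repeating the list container from the earlier sketch.
data ListC x = ListC Int (Int -> x)

-- Shape map u: the length of the tail, or Nothing for the empty list.
uTail :: Int -> Maybe Int
uTail 0 = Nothing
uTail n = Just (n - 1)

-- Position map f: position i of the tail is fetched from position
-- i + 1 of the source; it runs backwards, as in Definition 3.1.
fTail :: Int -> Int -> Int
fTail _ i = i + 1

-- The induced map on extensions: precompose the payload function.
tailC :: ListC x -> Maybe (ListC x)
tailC (ListC n g) = fmap (\m -> ListC m (g . fTail n)) (uTail n)
```

Note how the position map runs backwards: every element of the output is fetched from the input, which is why such morphisms interpret to natural transformations.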
A container $(A \triangleright B) \in \mathcal{G}_I$ can be written using type theoretic notation as $$\vdash A \quad i:I, a:A \vdash B_i(a).$$ A morphism $(u, f): (A \triangleright B) \to (C \triangleright D)$ can be written in type theoretic notation as $$u:A \to C \quad i:I, a:A \vdash f_i(a):D_i(ua) \to B_i(a).$$ Finally, each $(A \triangleright B) \in \mathcal{G}_I$, thought of as a syntactic presentation of a datatype, generates a fibred functor $T_{A \triangleright B}: \mathbb{C}^I \to \mathbb{C}$ which is its semantics. **Definition 3.2.** Define the container construction functor $T: \mathcal{G}_I \to [\mathbb{C}^I, \mathbb{C}]$ as follows. Given $(A \triangleright B) \in \mathcal{G}_I$ and $X \in \mathbb{C}^I$ define $$T_{A \triangleright B} X \equiv \Sigma a:A. \prod_{i \in I} (B_i(a) \Rightarrow X_i),$$ and for $(u, f): (A \triangleright B) \to (C \triangleright D)$ define $T_{u,f}: T_{A \triangleright B} \to T_{C \triangleright D}$ to be the natural transformation $T_{u,f} X: T_{A \triangleright B} X \to T_{C \triangleright D} X$ thus: $$(a, g): T_{A \triangleright B} X \vdash T_{u,f} X(a, g) \equiv (u(a), (g_i \cdot f_i)_{i \in I}).$$ The following proposition follows more or less immediately by the construction of $T$. **Proposition 3.3.** For each container $F \in \mathcal{G}_I$ and each container morphism $\alpha: F \to G$ the functor $T_F$ and natural transformation $T_\alpha$ are fibred over $\mathbb{C}$. □ By making essential use of the fact that the natural transformations in $[\mathbb{C}^I, \mathbb{C}]$ are fibred (c.f. section 2) we can show that $T$ is full and faithful. **Theorem 3.4.** The functor $T: \mathcal{G}_I \to [\mathbb{C}^I, \mathbb{C}]$ is full and faithful. **Proof.** To show that $T$ is full and faithful it is sufficient to lift each natural transformation $\alpha: T_{A \triangleright B} \to T_{C \triangleright D}$ in $[\mathbb{C}^I, \mathbb{C}]$ to a map $(u_\alpha, f_\alpha): (A \triangleright B) \to (C \triangleright D)$ in $\mathcal{G}_I$ and show this construction is inverse to $T$. Given $\alpha: T_{A \triangleright B} \to T_{C \triangleright D}$ construct $\ell \equiv (a', \text{id}_{B_i(a')}) \in T_{A \triangleright B} B$ in the fibre $\mathbb{C}/A$ (or in terms of type theory, add $a': A$ to the context). We can now construct $\alpha B \cdot \ell \in T_{C \triangleright D} B = \Sigma c: C. \prod_{i \in I} (D_i(c) \Rightarrow B_i(a'))$ in the same context, and write $\alpha B \cdot \ell = (u_\alpha, f_\alpha)$ where $u_\alpha(a'): C$ and $f_\alpha(a'): \prod_{i \in I} (D_i(u_\alpha a') \Rightarrow B_i(a'))$ for $a': A$. Thus $(u_\alpha, f_\alpha)$ can be understood as a morphism $(A \triangleright B) \to (C \triangleright D)$ in $\mathcal{G}_I$. It remains to show that this construction is inverse to $T$. When $\alpha = T_{u,f}$, just evaluate $\alpha B \cdot \ell = (u a', \text{id} \cdot f)$, which corresponds to the original map $(u, f)$. 
To show in general that \( \alpha = T_{u_\alpha, f_\alpha} \), let \( X \in \mathbb{C}^I \), \( a : A \) and \( g : \prod_{i \in I} (B_i(a) \Rightarrow X_i) \) be given, consider the diagram
\[
\begin{array}{ccc}
1 \xrightarrow{\ell} T_{A \triangleright B} B & \xrightarrow{T_{A \triangleright B} g} & T_{A \triangleright B} X \\
\downarrow \alpha B & & \downarrow \alpha X \\
T_{C \triangleright D} B & \xrightarrow{T_{C \triangleright D} g} & T_{C \triangleright D} X
\end{array}
\]
and evaluate
\[
\alpha X \cdot (a, g) = \alpha X \cdot T_{A \triangleright B} g \cdot \ell = T_{C \triangleright D} g \cdot \alpha B \cdot \ell = T_{C \triangleright D} g \cdot (u_\alpha a, f_\alpha(a)) = (u_\alpha a, g \cdot f_\alpha(a)) = T_{u_\alpha, f_\alpha} X \cdot (a, g) .
\]
This shows that \( \alpha = T_{u_\alpha, f_\alpha} \) as required. □

This theorem gives a particularly simple analysis of polymorphic functions between container functors. For example, it is easy to observe that there are precisely \( n^m \) polymorphic functions \( X^n \to X^m \): the data type \( X^n \) is the container \( (1 \triangleright n) \) and hence there is a bijection between polymorphic functions \( X^n \to X^m \) and functions \( m \to n \). Similarly, any polymorphic function \( \text{List}X \to \text{List}X \) can be uniquely written as a function \( u : \mathbb{N} \to \mathbb{N} \) together with, for each natural number \( n : \mathbb{N} \), a function \( f_n : un \to n \).

## 4 Limits and Colimits of Containers

It turns out that each \( \mathcal{G}_I \) inherits completeness and cocompleteness from \( \mathbb{C} \), and that \( T \) preserves limits. Preservation of cocompleteness is more complex, and only a limited class of colimits is preserved by \( T \).

**Proposition 4.1.** If \( \mathbb{C} \) has limits and colimits of shape \( \mathbb{J} \) then \( \mathcal{G}_I \) has limits of shape \( \mathbb{J} \) and \( T \) preserves these limits.

**Proof.** We'll proceed by appealing to the fact that \( T \) reflects limits (since it is full and faithful), and the proof will proceed separately for products and equalisers.

*Products.* Let \( (A_k \triangleright B_k)_{k \in K} \) be a family of objects in \( \mathcal{G}_I \) and compute (the labels refer to lemma 2.2)
\[
\prod_{k \in K} T_{A_k \triangleright B_k} X = \prod_{k \in K} \Sigma a : A_k. \prod_{i \in I} (B_{k,i}(a) \Rightarrow X_i) \\
\simeq \Sigma a : \prod_{k \in K} A_k. \prod_{k \in K} \prod_{i \in I} (B_{k,i}(\pi_k a) \Rightarrow X_i) \tag{IC2} \\
\simeq \Sigma a : \prod_{k \in K} A_k. \prod_{i \in I} \left( \left( \coprod_{k \in K} B_{k,i}(\pi_k a) \right) \Rightarrow X_i \right) \tag{Cu2} \\
= T_{\prod_{k \in K} A_k \triangleright \coprod_{k \in K} (\pi_k^*)^I B_k} X
\]
showing by reflection along \( T \) that
\[
\prod_{k \in K} (A_k \triangleright B_k) \cong \left( \prod_{k \in K} A_k \triangleright \coprod_{k \in K} (\pi_k^*)^I B_k \right).
\]

*Equalisers.* Given parallel maps \((u, f), (v, g) : (A \triangleright B) \rightrightarrows (C \triangleright D)\) construct
\[
(E \triangleright Q) \xrightarrow{(e, q)} (A \triangleright B) \overset{(u, f)}{\underset{(v, g)}{\rightrightarrows}} (C \triangleright D)
\]
where \(e\) is the equaliser in \(\mathbb{C}\) of \(u, v\) and \(q\) is the coequaliser in \((\mathbb{C}/E)^I\) of \((e^*)^I f, (e^*)^I g\). To show that \(T_{e, q}\) is the equaliser of \(T_{u, f}, T_{v, g}\) fix \(X \in \mathbb{C}^I\), \(U \in \mathbb{C}\) and let \(\alpha : U \to T_{A \triangleright B} X\) be given equalising this parallel pair at \(X\).
For \(x : U\) write \(\alpha(x) = (a, h)\) where \(a : A, h : \prod_{i \in I} (B_i(a) \Rightarrow X_i)\). The condition on \(\alpha\) tells us that \(u(a) = v(a)\) and so there is a unique \(y : E\) with \(a = e(y)\). Similarly we know that \(h \cdot f(e y) = h \cdot g(e y)\) and in particular there is a unique \(k : Q(y) \to X\) with \(h = k \cdot q\). The assignment \(x \mapsto (y, k)\) defines a map \(\beta : U \to T_{E \triangleright Q} X\) giving a unique factorisation of \(\alpha\), showing that \(T_{e, q} X\) is an equaliser and hence so is \((e, q)\). \(\square\)

In particular, this result tells us that the limit in \([\mathbb{C}^I, \mathbb{C}]\) of a diagram of container functors is itself a container functor. It's nice to see that coproducts of containers are also well behaved.

**Proposition 4.2.** If \(\mathbb{C}\) has products and coproducts of size \(K\) then \(\mathcal{G}_I\) has coproducts of size \(K\) preserved by \(T\).

**Proof.** Given a family \((A_k \triangleright B_k)_{k \in K}\) of objects in \(\mathcal{G}_I\) calculate (making essential use of disjointness of fibres):
\[
\coprod_{k \in K} T_{A_k \triangleright B_k} X = \coprod_{k \in K} \Sigma a : A_k. \prod_{i \in I} (B_{k,i}(a) \Rightarrow X_i)
\]
\[
\simeq \coprod_{k \in K} \Sigma a : A_k. \prod_{i \in I} \left( \left( \coprod_{k' \in K} B_{k',i} \right)(\kappa_k a) \Rightarrow X_i \right) \tag{DF1}
\]
\[
\simeq \Sigma a : \coprod_{k \in K} A_k. \prod_{i \in I} \left( \left( \coprod_{k \in K} B_{k,i} \right)(a) \Rightarrow X_i \right) \tag{DF2}
\]
\[
= T_{\coprod_{k \in K} A_k \triangleright \left( \coprod_{k \in K} B_{k,i} \right)_{i \in I}} X
\]
showing by reflection along \(T\) that
\[
\coprod_{k \in K} (A_k \triangleright B_k) \cong \left( \coprod_{k \in K} A_k \triangleright \coprod_{k \in K} B_k \right).
\]
\(\square\)

The fate of coequalisers is more complicated. It turns out that \(\mathcal{G}_I\) has coequalisers when \(\mathbb{C}\) has both equalisers and coequalisers, but they are not preserved by \(T\). The following proposition is presented without proof (the construction of coequalisers in \(\mathcal{G}\) is fairly complex and is not required in this paper).

**Proposition 4.3.** If \(\mathbb{C}\) has equalisers and coequalisers then \(\mathcal{G}_I\) has coequalisers. \(\square\)

The following example shows that coequalisers are not preserved by $T$.

**Example 4.4.** Consider the following coequaliser diagram in $[\mathbb{C}, \mathbb{C}]$
\[
X \times X \;\underset{\text{twist}}{\overset{\text{id}_{X \times X}}{\rightrightarrows}}\; X \times X \;\longrightarrow\; (X \times X)/\!\sim
\]
where the second map is the twist $(x, y) \mapsto (y, x)$, so that $(x, y) \sim (y, x)$ in the quotient. The functor $X \mapsto X \times X$ is a container functor generated by $(1 \triangleright 2)$, and the coequaliser of the corresponding parallel pair in $\mathcal{G}_1$ is the container $(1 \triangleright 0)$. Note however that $T_{1 \triangleright 0} X \cong 1 \not\cong (X \times X)/\!\sim$.

Unfortunately, filtered colimits aren't preserved by $T$ either.

**Example 4.5.** Consider the $\omega$-chain in $\mathcal{G}_1$ given by $n \mapsto (1 \triangleright A^n)$ (for fixed $A$) on objects and $(n \to n + m) \mapsto \pi_{n,m}: A^{n+m} \cong A^n \times A^m \to A^n$ on maps. The filtered colimit of this diagram can be computed in $\mathcal{G}_1$ to be $(1 \triangleright A^\mathbb{N})$.
However, applying $T$ to this diagram produces the $\omega$-chain
\[
X \xrightarrow{X^{\pi_{0,1}}} X^A \xrightarrow{X^{\pi_{1,1}}} X^{A^2} \xrightarrow{X^{\pi_{2,1}}} \cdots
\]
and the colimit of this chain in $\textbf{Set}$ is strictly smaller than $X^{A^\mathbb{N}}$.

## 5 Filtered Colimits of Cartesian Diagrams

Although $\mathcal{G}_I$ has colimits they are not preserved by $T$, and this also applies to filtered colimits. As we will want to use filtered colimits for the construction of initial algebras, this is a potential problem. Fortunately, there exists a class of filtered colimit diagrams which are both sufficient for the construction of initial algebras and preserved by $T$. Throughout this section take $\mathbb{C}$ to be finitely accessible ($\mathbb{C}$ has filtered colimits and a generating set of finitely presentable objects, Adámek and Rosický 1994) as well as being locally cartesian closed.

**Definition 5.1.** A morphism $(u, f)$ in $\mathcal{G}_I$ is cartesian iff $f$ is an isomorphism\footnote{$(u, f)$ is cartesian with respect to this definition precisely when it is cartesian (in the sense of fibrations) with respect to the projection functor $\pi: \mathcal{G}_I \to \mathbb{C}$ taking $(A \triangleright B)$ to $A$.}.

For each $u$ there is a bijection between cartesian morphisms $(u, f): (A \triangleright B) \to (C \triangleright D)$ in $\mathcal{G}_I$ and morphisms $\tilde{f}$ in $\mathbb{C}^I$ making each square below a pullback:
\[
\begin{array}{ccc}
B_i & \xrightarrow{\tilde{f}_i} & D_i \\
\downarrow & & \downarrow \\
A & \xrightarrow{u} & C
\end{array}.
\]
We can also translate the notion of cartesian morphism into natural transformations between container functors: a natural transformation $\alpha : T_{A \triangleright B} \to T_{C \triangleright D}$ derives from a cartesian map iff the naturality squares of $\alpha$ are all pullbacks (such natural transformations are often also called *cartesian*, in this case with respect to the "evaluation at 1" functor).

Define $\hat{\mathcal{G}}_I$ to have the same objects as $\mathcal{G}_I$ but only cartesian arrows as morphisms. We will show that $\hat{\mathcal{G}}_I$ has filtered colimits which are preserved by $T$ (when restricted to $\hat{\mathcal{G}}_I$), and hence also by the inclusion $\hat{\mathcal{G}}_I \hookrightarrow \mathcal{G}_I$.

The lemma below follows directly from the corresponding result in $\textbf{Set}$ and helps us work with maps from finitely presentable objects to filtered colimits (write $\bigvee D$ for the colimit of a filtered diagram $D$).

**Lemma 5.2.** Let $D : \mathbb{J} \to \mathbb{C}$ be a filtered diagram with colimiting cone $d : D \to \bigvee D$ and let $U$ be finitely presentable.
1. For each $\alpha : U \to \bigvee D$ there exists $J \in \mathbb{J}$ and $\alpha_J : U \to DJ$ such that $\alpha = d_J \cdot \alpha_J$.
2. Given $\alpha : U \to DI$, $\beta : U \to DJ$ such that $d_I \cdot \alpha = d_J \cdot \beta$ there exist $K \in \mathbb{J}$ and maps $f : I \to K$, $g : J \to K$ such that $Df \cdot \alpha = Dg \cdot \beta$. □

Before the main result we need a technical lemma about filtered colimits in finitely accessible categories.

**Lemma 5.3.** Given a filtered diagram in $\mathbb{C}^\to$ in which every edge is a pullback, the arrows of the colimiting cone are also pullbacks.
**Proof.** We need to show, for each $I \in \mathbb{J}$, that the square
\[
\begin{array}{ccc}
EI & \xrightarrow{e_I} & \bigvee E \\
\alpha_I \downarrow & & \downarrow \bar{\alpha} \\
DI & \xrightarrow{d_I} & \bigvee D
\end{array}
\]
is a pullback, where $E \xrightarrow{\alpha} D$ is the diagram, $(d,e)$ are the components of its colimiting cone and $\bar{\alpha}$ is the factorisation of $d \cdot \alpha$ through $e$.

So let a cone $DI \xleftarrow{a} U \xrightarrow{b} \bigvee E$ satisfying $d_I \cdot a = \bar{\alpha} \cdot b$ be given. Without loss of generality we can assume that $U$ is finitely presentable and we can now appeal to lemma 5.2 above. Construct first $b_J : U \to EJ$ such that $b = e_J \cdot b_J$; then as $d_I \cdot a = \bar{\alpha} \cdot e_J \cdot b_J = d_J \cdot (\alpha_J \cdot b_J)$ there exist $f : I \to K$, $g : J \to K$ with $Df \cdot a = Dg \cdot \alpha_J \cdot b_J = \alpha_K \cdot Eg \cdot b_J$ and so we can construct a factorisation $b_I : U \to EI$ through the pullback over $f$ satisfying $\alpha_I \cdot b_I = a$ and $Ef \cdot b_I = Eg \cdot b_J$. This is a factorisation of $(a,b)$ since $e_I \cdot b_I = e_K \cdot Ef \cdot b_I = e_K \cdot Eg \cdot b_J = e_J \cdot b_J = b$.

This factorisation is unique. Let $b, b' : U \rightrightarrows EI$ be given such that $e_I \cdot b = e_I \cdot b'$. Then there exist $f, f' : I \rightrightarrows J$ with $Ef \cdot b = Ef' \cdot b'$; but indeed there exists $g : J \to K$ with $h \equiv g \cdot f = g \cdot f'$ and so $Eh \cdot b = Eh \cdot b'$. As the square over $h$ is a pullback we can conclude $b = b'$. □

Now we are in a position to state the main result, that the filtered colimit of a cartesian diagram of container functors is itself a container functor.

**Proposition 5.4.** For each set $I$ the category $\hat{\mathcal{G}}_I$ has filtered colimits which are preserved by $T$.

**Proof.** Let a diagram $(D \triangleright E) : \mathbb{J} \to \hat{\mathcal{G}}_I$ be given, i.e. for each $K \in \mathbb{J}$ there is a container $(DK \triangleright EK)$ and for each $f : K \to L$ a cartesian container morphism $(Df, Ef)$. For each $f : K \to L$ in $\mathbb{J}$, write $\bar{E}f$ for the map $EK \to EL$ derived from cartesian $Ef$, so that we get the left hand pullback square below:
\[
\begin{array}{ccc}
EK & \xrightarrow{\bar{E}f} & EL \\
\downarrow & & \downarrow \\
DK & \xrightarrow{Df} & DL
\end{array}
\qquad
\begin{array}{ccc}
EL & \xrightarrow{\bar{e}_L} & \bigvee \bar{E} \\
\downarrow & & \downarrow \\
DL & \xrightarrow{d_L} & \bigvee D
\end{array}.
\]
After taking the colimits shown (with colimiting cones $d$ and $\bar{e}$), we know from lemma 5.3 that the right hand square is also a pullback and we can interpret the right hand side as a container together with a cartesian cone $(d, e) : (D \triangleright E) \to (\bigvee D \triangleright \bigvee \bar{E})$.

It remains to show that $T_{\bigvee D \triangleright \bigvee \bar{E}} \cong \bigvee T_{D \triangleright E}$, so let a cone $f : T_{D \triangleright E}X \to U$ be given as shown below, where the map $k_K$ takes $(a, g)$ to $(d_K(a), g)$, using the isomorphism $(\bigvee \bar{E})_i(d_K(a)) \cong EK_i(a)$ (for $K \in \mathbb{J}$, $i : I$, $a : DK$) derived from $(d, e)$ cartesian.
\[
\Sigma a : DK. \prod_{i \in I}(EK_i(a) \Rightarrow X_i) \xrightarrow{k_K} \Sigma a : \bigvee D. \prod_{i \in I}\left((\bigvee \bar{E})_i(a) \Rightarrow X_i\right)
\]
To construct $h$ let $a : \bigvee D$ and $g : \prod_{i \in I}((\bigvee \bar{E})_i(a) \Rightarrow X_i)$ be given and choose $K \in \mathbb{J}$, $a_K \in DK$ such that $a = d_K(a_K)$, and so we have $(a_K, g) : T_{DK \triangleright EK}X$ and can compute $h(a, g) \equiv f_K(a_K, g)$; this construction of $h(a, g)$ is unique and independent of the choice of $K$ and $a_K$. □

Finally the above proposition can be applied to the construction of fixed points on $\mathcal{G}_I$.

**Definition 5.5.** Say that an endofunctor $F$ on a category with filtered colimits has rank iff there exists a cardinal $\aleph$, the rank of $F$, such that $F$ preserves $\aleph$-filtered colimits.

The following theorem is a variant of Adámek and Koubek (1979).

**Theorem 5.6 (Adámek).** If a category $\mathbb{C}$ has an initial object and colimits of all filtered diagrams then every endofunctor on $\mathbb{C}$ with rank has an initial algebra. If $G : \mathbb{C} \to \mathbb{D}$ preserves the initial object and all filtered colimits then any endofunctor $F' : \mathbb{D} \to \mathbb{D}$ satisfying $F'G \cong GF$ for some endofunctor $F$ on $\mathbb{C}$ with rank has an initial algebra given by the image under $G$ of the initial algebra of $F$. □

The construction of initial algebras in $\mathcal{G}$ now follows as a corollary of the above.

**Theorem 5.7.** Let $F$ be an endofunctor on $\mathcal{G}_I$ such that $F$ restricts to an endofunctor $\hat{F}$ on $\hat{\mathcal{G}}_I$ (i.e., $F$ preserves cartesian morphisms) and such that $\hat{F}$ has rank; then $F$ has an initial algebra $\mu F \in \mathcal{G}_I$ which is preserved by $T$.

**Proof.** We've established that $\hat{\mathcal{G}}_I$ has filtered colimits which are preserved by $\hat{\mathcal{G}}_I \hookrightarrow \mathcal{G}_I$ and by $T$, and it's clear that the initial object of $\mathcal{G}_I$ is initial in $\hat{\mathcal{G}}_I$ and is also preserved by $T$, and so we can apply theorem 5.6. □

As noted in section 2 it would be desirable to have a constructive version of this theorem, probably along the lines suggested by Taylor (1999, Section 6.7).

## 6 Fixed Points of Containers

Categories of containers are, under suitable assumptions, closed under the operations of taking least and greatest fixed points, or in other words given a container functor $F(\vec{X}, Y)$ in $n + 1$ parameters the types $\mu Y.F(\vec{X}, Y)$ and $\nu Y.F(\vec{X}, Y)$ are containers (in $n$ parameters). The least and greatest fixed points of a type are defined by repeated substitution; for example the type $\nu Y.F(\vec{X}, Y)$ can be constructed as the limit of the $\omega$-chain
$$1 \leftarrow F(\vec{X}, 1) \leftarrow F(\vec{X}, F(\vec{X}, 1)) \leftarrow \cdots$$
whose limit we write $\lim_{n < \omega} F^n[1]$, where $F[Y] \equiv F(\vec{X}, Y)$ (note that the $\nu$ type only needs $\omega$-limits for its construction, but as discussed below, $\mu$ types can require colimits of transfinite chains\footnote{For example, the type of $\omega$-branching trees, $\mu Y.X + (\mathbb{N} \Rightarrow Y)$, cannot be constructed using only $\omega$-colimits.}). Therefore the first thing we need to do is to define the composition of two containers.
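The $\omega$-chain picture has a familiar functional-programming echo. A small Haskell sketch of our own (`Nu`, `CoList` and `unfoldCoList` are invented names; in Haskell, laziness makes least and greatest fixed points coincide, so this only illustrates the $\nu$ side):

```haskell
-- nu Y. 1 + X * Y: possibly infinite lists, read as the limit of
--   1 <- 1 + X * 1 <- 1 + X * (1 + X * 1) <- ...
-- Each observation of the structure peels one layer of the chain.
newtype Nu f = Nu { out :: f (Nu f) }

data ListF x y = NilF | ConsF x y

type CoList x = Nu (ListF x)

-- Corecursion: build the greatest fixed point by unfolding a seed.
unfoldCoList :: (s -> Maybe (x, s)) -> s -> CoList x
unfoldCoList step = go
  where
    go s = Nu (case step s of
                 Nothing      -> NilF
                 Just (x, s') -> ConsF x (go s'))

-- An infinite list: unproblematic as a nu type, impossible as a mu
-- type in a total setting (its position object is all of the naturals).
nats :: CoList Integer
nats = unfoldCoList (\n -> Just (n, n + 1)) 0
```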
Given containers $F \in \mathcal{G}_{I+1}$ and $G \in \mathcal{G}_I$ we can compose their images under $T$ to construct the functor
$$T_F[T_G] \equiv (\mathbb{C}^I \xrightarrow{(\text{id}_{\mathbb{C}^I}, T_G)} \mathbb{C}^I \times \mathbb{C} \cong \mathbb{C}^{I+1} \xrightarrow{T_F} \mathbb{C}) .$$
This composition can be lifted to a functor $-[-]: \mathcal{G}_{I+1} \times \mathcal{G}_I \to \mathcal{G}_I$ as follows. For a container in $\mathcal{G}_{I+1}$ write $(A \triangleright B, E) \in \mathcal{G}_{I+1}$, where $B \in (\mathbb{C}/A)^I$ and $E \in \mathbb{C}/A$, and define:
$$(A \triangleright B, E)[(C \triangleright D)] \equiv (a:A, f:E(a) \Rightarrow C \triangleright (B_i(a) + \Sigma e:E(a).D_i(fe))_{i \in I}) .$$
In other words, given type constructors $F(\vec{X}, Y)$ and $G(\vec{X})$ this construction defines the composite type constructor $F[G](\vec{X}) \equiv F(\vec{X}, G(\vec{X}))$.

**Proposition 6.1.** Composition of containers commutes with composition of functors thus: \( T_F[T_G] \cong T_{F[G]} \).

**Proof.** Calculate (for conciseness we write exponentials using superscripts where convenient and write \( \Sigma_A \) for \( \Sigma a : A \) throughout, eliding the parameter \( a \)):
\[
T_{A \triangleright B, E}[T_{C \triangleright D}]X = \Sigma_A \left( \left( \prod_{i \in I} X_i^{B_i} \right) \times (E \Rightarrow \Sigma c : C. \prod_{i \in I} X_i^{D_i(c)}) \right)
\]
\[
\simeq \Sigma_A \left( \left( \prod_{i \in I} X_i^{B_i} \right) \times (\Sigma f : C^E. \Pi e : E. \prod_{i \in I} X_i^{D_i(fe)}) \right) \tag{IC1}
\]
\[
\simeq \Sigma_A \Sigma f : C^E. \prod_{i \in I} \left( X_i^{B_i} \times (\Pi e : E. X_i^{D_i(fe)}) \right)
\]
\[
\simeq \Sigma_A \Sigma f : C^E. \prod_{i \in I} \left( (B_i + \Sigma e : E. D_i(fe)) \Rightarrow X_i \right) \tag{Cu1, Cu2}
\]
\[
\simeq T_{(A \triangleright B, E)[C \triangleright D]}X .
\]
As all the above isomorphisms are natural in \( X \) we get the desired isomorphism of functors. \( \square \)

The next lemma is useful for the construction of both least and greatest fixed points and has other applications. In particular, \( T_F \) preserves both pullbacks and cofiltered limits.

**Lemma 6.2.** For \( (A \triangleright B) \in \mathcal{G}_I \) the functor \( T_{A \triangleright B} \) preserves limits of connected non-empty diagrams (connected limits).

**Proof.** Since \( \prod \) and \( \Rightarrow \) preserve limits, it is sufficient to observe that \( \Sigma_A \) preserves connected limits, which is noted, for example, in Carboni and Johnstone (1995). \( \square \)

**Corollary 6.3.** For each \( F \in \mathcal{G}_{I+1} \) the functor \( F[-] : \mathcal{G}_I \to \mathcal{G}_I \) preserves connected limits.

**Proof.** Let \( D \) be a non-empty connected diagram in \( \mathcal{G}_I \). Since \( T \) preserves limits and \( T_F \) preserves connected limits we can calculate
\[ T_{F[\lim D]} \cong T_F[T_{\lim D}] \cong T_F[\lim T_D] \cong \lim(T_F[T_D]) \cong \lim T_{F[D]} \cong T_{\lim(F[D])} \]
and so by reflection along \( T \) conclude that \( F[\lim D] \cong \lim(F[D]) \). \( \square \)

We can immediately conclude that if \( \mathbb{C} \) is complete and cocomplete (in fact, \( \omega \)-limits and colimits are sufficient) then containers have final coalgebras.

**Theorem 6.4.** Each \( F \in \mathcal{G}_{I+1} \) has a final coalgebra \( \nu F \in \mathcal{G}_I \) which is preserved by \( T \) (and so satisfies \( T_{\nu F} \cong \nu T_F \)).
**Proof.** Since \( F[-] \) preserves limits of \( \omega \)-chains the final coalgebra of \( F \) can be constructed as the limit \( \lim_{\leftarrow n < \omega} F^n[1] \), and since \( T \) preserves this limit the fixed point is also preserved by \( T \), by the dual of theorem 5.6. \( \square \)

For the construction of least fixed points (or initial algebras) two more preliminary results are needed. First we need to show that the construction of the fixed point can be restricted to $\hat{\mathcal{G}}$, so that we know that it will be preserved by $T$.

**Proposition 6.5.** The functor $-[-]:\mathcal{G}_{I+1} \times \mathcal{G}_I \to \mathcal{G}_I$ restricts to a functor on the category of cartesian container morphisms, $-[-]:\hat{\mathcal{G}}_{I+1} \times \hat{\mathcal{G}}_I \to \hat{\mathcal{G}}_I$.

**Proof.** It is sufficient to show that when $\alpha:F \to F'$ and $\beta:G \to G'$ are both cartesian then so is $\alpha[\beta]$, and indeed it is sufficient to show that $T_\alpha[T_\beta]$ is a cartesian natural transformation. This follows immediately from the fact that $T_F$ preserves pullbacks and that $T_\alpha$ and $T_\beta$ are cartesian natural transformations. □

Secondly we need to show that $F[-]$ has rank. Assume from now to the end of this section that $\mathbb{C}$ is a finitely\footnote{The qualification finitely is not strictly necessary here.} accessible category.

**Proposition 6.6.** When $\mathbb{C}$ is finitely accessible, every container functor has rank.

**Proof.** Let $(A \triangleright B) \in \mathcal{G}_I$ be a container. We first need to establish the result
$$\prod_{i \in I} \left( B_i \Rightarrow \bigvee_{j \in J} X_{j,i} \right) \cong \bigvee_{j \in J} \prod_{i \in I} (B_i \Rightarrow X_{j,i})$$
for sufficiently large $K$ and $K$-filtered $J$, which we do by appealing to two results of Adámek and Rosický (1994). First, we know (from their theorem 2.39) that each functor category $\mathbb{C}^I$ is accessible, and secondly we know from their proposition 2.23 that each functor with an adjoint between accessible categories has rank. Now since $\Sigma_A$ preserves colimits we can conclude that $T_{A \triangleright B}$ has rank. □

**Corollary 6.7.** For each $F \in \mathcal{G}_{I+1}$ the endofunctor $F[-]$ on $\mathcal{G}_I$ restricts to an endofunctor on $\hat{\mathcal{G}}_I$ with rank.

**Proof.** Let $K$ be the rank of $T_F$ and let $D$ be a $K$-filtered diagram in $\hat{\mathcal{G}}$. We know that $T_F[-]$ will preserve $\bigvee D$ so we can now repeat the calculation of corollary 6.3 to conclude that $F[-]$ also has rank $K$. □

That containers have least fixed points now follows from corollary 6.7 and theorem 5.7.

**Theorem 6.8.** Each $F \in \mathcal{G}_{I+1}$ has a least fixed point $\mu F \in \mathcal{G}_I$ satisfying $T_{\mu F} \cong \mu T_F$. □

## 7 Strictly Positive Types

We now return to the point that all strictly positive types can be described as containers.
**Definition 7.1.** A strictly positive type in $n$ variables (Abel and Altenkirch, 2000) is a type expression (with type variables $X_1, \ldots, X_n$) built up according to the following rules:
- if $K$ is a constant type (with no type variables) then $K$ is a strictly positive type;
- each type variable $X_i$ is a strictly positive type;
- if $U, V$ are strictly positive types then so are $U + V$ and $U \times V$;
- if $K$ is a constant type and $U$ a strictly positive type then $K \Rightarrow U$ is a strictly positive type;
- if $U$ is a strictly positive type in $n+1$ variables then $\mu X.U$ and $\nu X.U$ are strictly positive types in $n$ variables (for $X$ any type variable).

Note that the type expression for a strictly positive type $U$ can be interpreted as a functor $U : \mathbb{C}^n \to \mathbb{C}$, and indeed we can see that each strictly positive type corresponds to a container in $\mathcal{G}_n$. Let strictly positive types $U, V$ be represented by containers $(A \triangleright B)$ and $(C \triangleright D)$ respectively, then the table below shows the correspondence between strictly positive types and containers\footnote{We write $\delta_{i,j} \equiv 1$ iff $i = j$ and $\delta_{i,j} \equiv 0$ otherwise.}.
\[
\begin{align*}
K &\mapsto (K \triangleright 0) \\
X_j &\mapsto (1 \triangleright (\delta_{i,j})_i) \\
U + V &\mapsto (A + C \triangleright B \dagger D) \\
U \times V &\mapsto (a:A, c:C \triangleright B(a) \times D(c)) \\
K \Rightarrow U &\mapsto (f:K \Rightarrow A \triangleright \Sigma k:K.B(fk))
\end{align*}
\]
The construction of fixed points is a bit more difficult to describe in type-theoretic terms. Let $W$ be represented by $(A \triangleright B, E) \in \mathcal{G}_{I+1}$ (see section 6), then for any fixed point $C$ of $T_{A \triangleright E}$ with $\Phi : T_{A \triangleright E} C \cong C$ we can define $c : C \vdash D_C(c)$ as the initial solution of
\[ D_C(\Phi(a,f)) \cong B(a) + \Sigma e:E.D_C(fe) \quad ; \tag{*} \]
we can now define
\[
\begin{align*}
\mu X.W &\mapsto (\mu X.T_{A \triangleright E} X \triangleright D_{\mu X.T_{A \triangleright E} X}) \\
\nu X.W &\mapsto (\nu X.T_{A \triangleright E} X \triangleright D_{\nu X.T_{A \triangleright E} X}) \quad .
\end{align*}
\]
All the initial and terminal (co)algebras used above can be constructed explicitly using the results of section 6.

It is interesting to note that $\mu$ and $\nu$ only differ in the type of shapes but that the type of positions can be defined uniformly. Indeed, consider $F(X, Y) = 1 + X \times Y$; then $\mu Y.F(X, Y)$ is the type of lists and as we have already observed the type of shapes is isomorphic to $\mathbb{N} \cong \mu X.1 + X$ and the family of positions over $n$ can be conveniently described by $P(n) = \{ i \mid i < n \}$. Dually, $\nu Y.F(X, Y)$ is the type of lazy (i.e. potentially infinite) lists. The type of shapes is given by $\mathbb{N}^{\text{co}} = \nu X.1 + X$, the conatural numbers, which contain a fixed point of the successor $\omega = s(\omega) : \mathbb{N}^{\text{co}}$. Hence $P(\omega) \cong \mathbb{N}$ and this represents the infinite lists whose elements can be indexed by the natural numbers. Had we used the terminal solution of (*) to construct the type of positions, then the representation of infinite lists would incorrectly have an additional infinite position.

In the reverse direction it seems that there are containers which do not correspond to strictly positive types. A probable counterexample is the type of nests, defined as the least solution to the equation
\[ N(Y) \cong 1 + Y \times N(Y \times Y) \quad .
\]
The datatype $N$ is a container since it can be written as $N(X) \cong \Sigma n : \mathbb{N}.X^{2^n - 1}$, but it should be possible to show that it is not strictly positive following the argument used in Moggi et al. (1999) to show that the type of square matrices is not regular.

## 8 Relationships with Shapely Types

In Jay and Cockett (1994) and Jay (1995) "shapely types" (in one parameter) in a category $\mathbb{C}$ are defined to be strong pullback preserving functors $\mathbb{C} \to \mathbb{C}$ equipped with a strong cartesian natural transformation to the list type, where the list type is the initial algebra $\mu Y.1 + X \times Y$. To see the relationship with containers, note that proposition 2.6.11 of Jacobs (1999) tells us that strong pullback preserving functors are in bijection with fibred pullback preserving functors, and similarly strong natural transformations between such functors correspond to fibred natural transformations. The next proposition will allow us to immediately observe that shapely types are containers.

**Proposition 8.1.** Any functor $G \in [\mathbb{C}^I, \mathbb{C}]$ equipped with a cartesian natural transformation $\alpha : G \to T_F$ to a container functor is itself isomorphic to a container functor.

**Proof.** Let $F \equiv (A \triangleright B)$ then $(\alpha_1, \text{id}_{\alpha_1^*B}) : (G1 \triangleright \alpha_1^*B) \to (A \triangleright B)$ is a cartesian map in $\mathcal{G}_I$; this yields a cartesian natural transformation $T_{G1 \triangleright \alpha_1^*B} \to T_{A \triangleright B}$. It now follows from the observation that each $\alpha_X$ makes $GX$ the pullback along $\alpha_1$ of the map $T_{A \triangleright B}X \to A$ that $G \cong T_{G1 \triangleright \alpha_1^*B}$ as required. □

Since the "list type" is given by the container $(n : \mathbb{N} \triangleright [n])$, it immediately follows (when $\mathbb{C}$ is locally cartesian closed) that every shapely type is a container functor. In the opposite direction, containers which are locally isomorphic to finite cardinals give rise to shapely types. To see this, we follow Johnstone (1977) and refer to the object $[-] \in \mathbb{C}/\mathbb{N}$, which can be constructed as the morphism $\mathbb{N} \times \mathbb{N} \to \mathbb{N}$ mapping $(n,m) \mapsto n + m + 1$, as the object of *finite cardinals* in $\mathbb{C}$.

**Definition 8.2.** An object $A \vdash B$ is discretely finite iff there exists a morphism $u : A \to \mathbb{N}$ such that $B \cong u^*[-]$, i.e. each fibre $a : A \vdash B(a)$ is isomorphic to a finite cardinal. Say that a container $(A \triangleright B) \in \mathcal{G}_I$ is discretely finite iff each component $B_i$ for $i \in I$ is discretely finite.

Note that "discretely finite" is strictly stronger than finitely presentable and other possible notions of finiteness. An immediate consequence of this definition is that the object of finite cardinals is a generic object for the category of discretely finite containers, and the following theorem relating shapely types and containers now follows as a corollary.

**Theorem 8.3.** In a locally cartesian closed category with a natural number object the category of shapely functors and strong natural transformations is equivalent to the category of discretely finite containers. □

However, this paper tells us more about shapely types. In particular, containers show how to extend shapely types to cover coinductive types.
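Returning to the nest counterexample above, its non-regular definition and the element count behind the container form $\Sigma n : \mathbb{N}. X^{2^n - 1}$ can be checked directly in Haskell (a sketch; `sizeNest` is our own helper and relies on polymorphic recursion, hence the mandatory type signature):

```haskell
-- The non-regular (nested) datatype N(Y) ~ 1 + Y * N(Y * Y).
data Nest y = NilN | ConsN y (Nest (y, y))

-- A nest with n ConsN layers stores 1 + 2 + ... + 2^(n-1) = 2^n - 1
-- values of type y: each element of the tail is a pair, so it counts
-- double.  This matches the shape/position decomposition above.
sizeNest :: Nest y -> Int
sizeNest NilN        = 0
sizeNest (ConsN _ t) = 1 + 2 * sizeNest t
```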
Finally, the representation result for containers clearly translates into a representation result classifying the polymorphic functions between shapely types. It is interesting to note that the "traversals" of Moggi et al. (1999) do not carry over to containers in general; for example the type $\mathbb{N} \Rightarrow X$ does not effectively traverse over the lifting monad $X \mapsto X + 1$.

**References**

M. Abbott, T. Altenkirch, N. Ghani, and C. McBride. Derivatives of containers. URL http://www.cs.nott.ac.uk/~txa/. Submitted for publication, 2003.

A. Abel and T. Altenkirch. A predicative strong normalisation proof for a $\lambda$-calculus with interleaving inductive types. In *Types for Proof and Programs, TYPES '99*, volume 1956 of *Lecture Notes in Computer Science*, 2000.

J. Adámek and V. Koubek. Least fixed point of a functor. *Journal of Computer and System Sciences*, 19:163–178, 1979.

J. Adámek and J. Rosický. *Locally Presentable and Accessible Categories*. Number 189 in London Mathematical Society Lecture Note Series. Cambridge University Press, 1994.

T. Altenkirch and C. McBride. Generic programming within dependently typed programming. In *IFIP Working Conference on Generic Programming*, 2002.

F. Borceux. *Handbook of Categorical Algebra 2*. Cambridge University Press, 1994.

A. Carboni and P. Johnstone. Connected limits, familial representability and Artin glueing. *Math. Struct. in Comp. Science*, 5:441–459, 1995.

R. Hasegawa. Two applications of analytic functors. *Theoretical Computer Science*, 272(1-2):112–175, 2002.

M. Hofmann. Syntax and semantics of dependent types. In A. M. Pitts and P. Dybjer, editors, *Semantics and Logics of Computation*, volume 14, pages 79–130. Cambridge University Press, Cambridge, 1997.

P. Hoogendijk and O. de Moor. Container types categorically. *Journal of Functional Programming*, 10(2):191–225, 2000.

B. Jacobs. *Categorical Logic and Type Theory*. Number 141 in Studies in Logic and the Foundations of Mathematics. Elsevier, 1999.

C. B. Jay. A semantics for shape. *Science of Computer Programming*, 25:251–283, 1995.

C. B. Jay and J. R. B. Cockett. Shapely types and shape polymorphism. In *ESOP '94: 5th European Symposium on Programming*, Lecture Notes in Computer Science, pages 302–316. Springer-Verlag, 1994.

P. T. Johnstone. *Topos Theory*. Academic Press, 1977.

C. McBride. The derivative of a regular type is its type of one-hole contexts. URL http://www.dur.ac.uk/c.t.mcbride/, 2001.

E. Moggi, G. Bellè, and C. B. Jay. Monads, shapely functors and traversals. *Electronic Notes in Theoretical Computer Science*, 29, 1999.

P. Taylor. *Practical Foundations of Mathematics*. Cambridge University Press, 1999.
SPECIAL RESOLUTION

THAT the articles of association of the Company, as adopted on 29 March 2010, be affirmed, and, if and to the extent necessary, adopted as the Company's articles, and that they be amended by the addition of the following new article numbered 17:

"17 In addition to all powers conferred upon them and without detracting from the generality of their powers the Directors shall have the power to mortgage or charge the Company's undertaking, property and uncalled capital and to issue debentures, debenture stock and other securities as security for any debt, liability or obligation of the Company or of any third party."

Agreement to written resolution

Please read the notes at the end of this document before indicating your agreement to the resolution. We, the undersigned, being persons entitled to vote on the above resolution on the Circulation Date, irrevocably agree to such resolution.

| Name of corporate member | Sinclair Subsidiary No 1 Limited |
|--------------------------|----------------------------------|
| Name and position of signatory | P. Williams, Director |
| Signed by authorised person on behalf of corporate member | |
| Date | 17 March 2014 |

| Name of corporate member | William Sinclair Holdings plc |
|--------------------------|-------------------------------|
| Name and position of signatory | P. Williams, Director |
| Signed by authorised person on behalf of corporate member | |
| Date | 17 March 2014 |

NOTES

1 If you wish to agree to the resolution, please complete the Agreement section above and return the completed document to the Company:
1.1 at its registered office by hand or by post, marked "For the attention of the Directors/Company Secretary", or
1.2 by hand to the Chair of the Directors of the Company at the Registered Office.
2 Once you have signified your agreement to the resolution, you cannot revoke it. If you do not wish to agree to the resolutions, you do not have to do anything. Failure to respond will not be treated as agreement to the resolution.
3 If the Company has not received the necessary level of members' agreement to pass the resolution by the date falling 28 days from the Circulation Date, the resolution will lapse. The agreement of a member to a resolution is ineffective if signified after the expiry of that period.

THE COMPANIES ACT 2006
PRIVATE COMPANY LIMITED BY SHARES
ARTICLES OF ASSOCIATION OF WILLIAM SINCLAIR HORTICULTURE LIMITED
(Adopted by special resolution passed on 29 March 2010)

INTRODUCTION

1. INTERPRETATION

1.1 In these Articles, unless the context otherwise requires:
Act: means the Companies Act 2006;
appointor: has the meaning given in article 11(1);
Articles:
means the company's articles of association for the time being in force;
business day: means any day (other than a Saturday, Sunday or public holiday in the United Kingdom) on which clearing banks in the City of London are generally open for business;
Conflict: has the meaning given in article 7.1;
eligible director: means a director who would be entitled to vote on the matter at a meeting of directors (but excluding any director whose vote is not to be counted in respect of the particular matter); and
Model Articles: means the model articles for private companies limited by shares contained in Schedule 1 of the Companies (Model Articles) Regulations 2008 (SI 2008/3229) as amended prior to the date of adoption of these Articles.

1.2 Save as otherwise specifically provided in these Articles, words and expressions which have particular meanings in the Model Articles shall have the same meanings in these Articles, subject to which and unless the context otherwise requires, words and expressions which have particular meanings in the Act shall have the same meanings in these Articles.

Headings in these Articles are used for convenience only and shall not affect the construction or interpretation of these Articles. A reference in these Articles to an "article" is a reference to the relevant article of these Articles unless expressly provided otherwise. Unless expressly provided otherwise, a reference to a statute, statutory provision or subordinate legislation is a reference to it as it is in force from time to time, taking account of (a) any subordinate legislation from time to time made under it, and (b) any amendment or re-enactment, and includes any statute, statutory provision or subordinate legislation which it amends or re-enacts. Any phrase introduced by the terms "including", "include", "in particular" or any similar expression shall be construed as illustrative and shall not limit the sense of the words preceding those terms.

The Model Articles shall apply to the company, except in so far as they are modified or excluded by these Articles. Articles 8, 9(1) and (3), 11(2) and (3), 13, 14(1), (2), (3) and (4), 17(2), 44(2), 49, 52 and 53 of the Model Articles shall not apply to the company.

Article 7 of the Model Articles shall be amended by (a) the insertion of the words "for the time being" at the end of article 7(2)(a), and (b) the insertion in article 7(2) of the words "(for so long as he remains the sole director)" after the words "and the director may".

Article 20 of the Model Articles shall be amended by the insertion of the words "and the secretary" before the words "properly incur".

In article 25(2)(c) of the Model Articles, the words "evidence, indemnity and the payment of a reasonable fee" shall be deleted and replaced with the words "evidence and indemnity".

Article 27(3) of the Model Articles shall be amended by the insertion of the words ", subject to article 10," after the word "But".
Article 29 of the Model Articles shall be amended by the insertion of the words ", or the name of any person(s) named as the transferee(s) in an instrument of transfer executed under article 28(2)," after the words "the transmittee's name".

Articles 31(1)(a) to (d) (inclusive) of the Model Articles shall be amended by the deletion, in each case, of the words "either" and "or as the directors may otherwise decide".

DIRECTORS

2. UNANIMOUS DECISIONS

A decision of the directors is taken in accordance with this article when all eligible directors indicate to each other by any means that they share a common view on a matter. Such a decision may take the form of a resolution in writing, where each eligible director has signed one or more copies of it, or to which each eligible director has otherwise indicated agreement in writing. A decision may not be taken in accordance with this article if the eligible directors would not have formed a quorum at such a meeting.

3. CALLING A DIRECTORS' MEETING

Any director may call a directors' meeting by giving not less than business days' notice of the meeting (or such lesser notice as all the directors may agree) to the directors or by authorising the company secretary (if any) to give such notice. Notice of a directors' meeting shall be given to each director in writing.

4. QUORUM FOR DIRECTORS' MEETINGS

4.1 Subject to article 4.2, the quorum for the transaction of business at a meeting of directors is any two eligible directors.

4.2 For the purposes of any meeting (or part of a meeting) held pursuant to article 7 to authorise a director's conflict, if there is only one eligible director in office other than the conflicted director(s), the quorum for such meeting (or part of a meeting) shall be one eligible director.

4.3 If the total number of directors in office for the time being is less than the quorum required, the directors must not take any decision other than a decision (a) to appoint further directors, or (b) to call a general meeting so as to enable the shareholders to appoint further directors.

5. CASTING VOTE

5.1 If the numbers of votes for and against a proposal at a meeting of directors are equal, the chairman or other director chairing the meeting has a casting vote.

5.2 Article 5.1 shall not apply in respect of a particular meeting (or part of a meeting) if, in accordance with the Articles, the chairman or other director is not an eligible director for the purposes of that meeting (or part of a meeting).
6. TRANSACTIONS OR OTHER ARRANGEMENTS WITH THE COMPANY

Subject to sections 177(5) and 177(6) and sections 182(5) and 182(6) of the Act and provided he has declared the nature and extent of his interest in accordance with the requirements of the Companies Acts, a director who is in any way, whether directly or indirectly, interested in an existing or proposed transaction or arrangement with the company (a) may be a party to, or otherwise interested in, any transaction or arrangement with the company or in which the company is otherwise (directly or indirectly) interested, (b) shall be an eligible director for the purposes of any proposed decision of the directors (or committee of directors) in respect of such contract or proposed contract in which he is interested, (c) shall be entitled to vote at a meeting of directors (or of a committee of the directors) or participate in any unanimous decision, in respect of such contract or proposed contract in which he is interested, (d) may act by himself or his firm in a professional capacity for the company (otherwise than as auditor) and he or his firm shall be entitled to remuneration for professional services as if he were not a director, (e) may be a director or other officer of, or employed by, or a party to a transaction or arrangement with, or otherwise interested in, any body corporate in which the company is otherwise (directly or indirectly) interested, and (f) subject to the proviso to this clause 6, shall be accountable to the company for any benefit which he (or a person connected with him (as defined in section 252 of the Act)) derives from any such contract, transaction or arrangement or from any such office or employment or from any interest in any such body corporate. Provided that the directors shall not be accountable to the Company for any such benefit as specified in subparagraph (f) hereof, where the interests of the director in the contract, transaction, arrangement, office, employment or otherwise derive from the fact that he is a shareholder and/or a director and/or an employee of any company within the same group of companies as the Company (whether a holding company, a subsidiary company or otherwise).

7. DIRECTORS’ CONFLICTS OF INTEREST

7.1 The directors may, in accordance with the requirements set out in this article, authorise any matter or situation proposed to them by any director which would, if not authorised, involve a director (an Interested Director) breaching his duty under section 175 of the Act to avoid conflicts of interest (Conflict).

7.2 Any authorisation under this article 7 will be effective only if (a) to the extent permitted by the Act, the matter in question shall have been proposed by any director for consideration in the same way that any other matter may be proposed to the directors under the provisions of these Articles or in such other manner as the directors may determine, (b) any requirement as to the quorum for consideration of the relevant matter is met without counting the Interested Director, and (c) the matter was agreed to without the Interested Director voting or would have been agreed to if the Interested Director’s vote had not been counted.

7.3 Any authorisation of a Conflict under this article 7 may (whether at the time of giving the authorisation or subsequently) (a) extend to any actual or potential conflict of interest which may reasonably be expected to arise out of the matter or situation so authorised, (b) provide that the Interested Director be excluded from the receipt of documents
and information and the participation in discussions (whether at meetings of the directors or otherwise) related to the Conflict, (c) provide that the Interested Director shall or shall not be an eligible director in respect of any future decision of the directors in relation to any resolution related to the Conflict, (d) impose upon the Interested Director such other terms for the purposes of dealing with the Conflict as the directors think fit, (e) provide that, where the Interested Director obtains, or has obtained (through his involvement in the Conflict and otherwise than through his position as a director of the company), information that is confidential to a third party, he will not be obliged to disclose that information to the company, or to use it in relation to the company’s affairs where to do so would amount to a breach of that confidence, and (f) permit the Interested Director to absent himself from the discussion of matters relating to the Conflict at any meeting of the directors and be excused from reviewing papers prepared by, or for, the directors to the extent they relate to such matters.

7.4 Where the directors authorise a Conflict, the Interested Director will be obliged to conduct himself in accordance with any terms and conditions imposed by the directors in relation to the Conflict. The directors may revoke or vary such authorisation at any time, but this will not affect anything done by the Interested Director, prior to such revocation or variation, in accordance with the terms of such authorisation.

7.5 A director is not required, by reason of being a director (or because of the fiduciary relationship established by reason of being a director), to account to the company for any remuneration, profit or other benefit which he derives from or in connection with a relationship involving a Conflict which has been authorised by the directors or by the company in general meeting (subject in each case to any terms, limits or conditions attaching to that authorisation), and no contract shall be liable to be avoided on such grounds.

8. **RECORDS OF DECISIONS TO BE KEPT**

Where decisions of the directors are taken by electronic means, such decisions shall be recorded by the directors in permanent form, so that they may be read with the naked eye.

9. **NUMBER OF DIRECTORS**

Unless otherwise determined by ordinary resolution, the number of directors (other than alternate directors) shall not be subject to any maximum but shall not be less than two.

10. **APPOINTMENT OF DIRECTORS**

In any case where, as a result of death or bankruptcy, the company has no shareholders and no directors, the transmittee(s) of the last shareholder to have died or to have a bankruptcy order made against him (as the case may be) have the right, by notice in writing, to appoint a natural person (including a transmittee who is a natural person), who is willing to act and is permitted to do so, to be a director.

11. **SECRETARY**

The directors may appoint any person who is willing to act as the secretary for such term, at such remuneration and upon such conditions as they may think fit and from time to time remove such person and, if the directors so decide, appoint a replacement, in each case by a decision of the directors.

**DECISION MAKING BY SHAREHOLDERS**

12. **POLL VOTES**

12.1 A poll may be demanded at any general meeting by any qualifying person (as defined in section 318 of the Act) present and entitled to vote at the meeting.
12.2 Article 44(3) of the Model Articles shall be amended by the insertion of the words "A demand so withdrawn shall not invalidate the result of a show of hands declared before the demand was made" as a new paragraph at the end of that article.

13. **PROXIES**

13.1 Article 45(1)(d) of the Model Articles shall be deleted and replaced with the words "is delivered to the company in accordance with the Articles not less than 48 hours before the time appointed for holding the meeting or adjourned meeting at which the right to vote is to be exercised and in accordance with any instructions contained in the notice of the general meeting (or adjourned meeting) to which they relate".

13.2 Article 45(1) of the Model Articles shall be amended by the insertion of the words "and a proxy notice which is not delivered in such manner shall be invalid unless the directors, in their discretion, accept the notice at any time before the meeting" as a new paragraph at the end of that article.

**ADMINISTRATIVE ARRANGEMENTS**

14. **MEANS OF COMMUNICATION TO BE USED**

14.1 Any notice, document or other information shall be deemed served on or delivered to the intended recipient (a) if properly addressed and sent by prepaid United Kingdom first class post to an address in the United Kingdom, 48 hours after it was posted (or five business days after posting either to an address outside the United Kingdom or from outside the United Kingdom to an address within the United Kingdom, if (in each case) sent by reputable international overnight courier addressed to the intended recipient, provided that delivery in at least five business days was guaranteed at the time of sending and the sending party receives a confirmation of delivery from the courier service provider), (b) if properly addressed and delivered by hand, when it was given or left at the appropriate address, (c) if properly addressed and sent or supplied by electronic means, one hour after the document or information was sent or supplied, and (d) if sent or supplied by means of a website, when the material is first made available on the website or (if later) when the recipient receives (or is deemed to have received) notice of the fact that the material is available on the website. For the purposes of this article, no account shall be taken of any part of a day that is not a working day.

14.2 In proving that any notice, document or other information was properly addressed, it shall be sufficient to show that the notice, document or other information was delivered to an address permitted for the purpose by the Act.
15. **INDEMNITY**

15.1 Subject to article 15.2, but without prejudice to any indemnity to which a relevant officer is otherwise entitled (a) each relevant officer shall be indemnified out of the company's assets against all costs, charges, losses, expenses and liabilities incurred by him as a relevant officer (i) in the actual or purported execution and/or discharge of his duties, or in relation to them, and (ii) in relation to the company's (or any associated company's) activities as trustee of an occupational pension scheme (as defined in section 235(6) of the Act), including (in each case) any liability incurred by him in defending any civil or criminal proceedings, in which judgment is given in his favour or in which he is acquitted or the proceedings are otherwise disposed of without any finding or admission of any material breach of duty on his part, or in connection with any application in which the court grants him, in his capacity as a relevant officer, relief from liability for negligence, default, breach of duty or breach of trust in relation to the company's (or any associated company's) affairs, and (b) the company may provide any relevant officer with funds to meet expenditure incurred or to be incurred by him in connection with any proceedings or application referred to in article 15.1(a) and otherwise may take any action to enable any such relevant officer to avoid incurring such expenditure.

15.2 This article does not authorise any indemnity which would be prohibited or rendered void by any provision of the Companies Acts or by any other provision of law.

15.3 In this article (a) companies are associated if one is a subsidiary of the other or both are subsidiaries of the same body corporate, and (b) a "relevant officer" means any director or other officer of the company or an associated company (including any company which is a trustee of an occupational pension scheme (as defined by section 235(6) of the Act), but excluding in each case any person engaged by the company (or associated company) as auditor (whether or not he is also a director or other officer), to the extent he acts in his capacity as auditor).

16. **INSURANCE**

16.1 The directors may decide to purchase and maintain insurance, at the expense of the company, for the benefit of any relevant officer in respect of any relevant loss.

16.2 In this article (a) a "relevant officer" means any director or other officer of the company or an associated company (including any company which is a trustee of an occupational pension scheme (as defined by section 235(6) of the Act), but excluding in each case any person engaged by the company (or associated company) as auditor (whether or not he is also a director or other officer), to the extent he acts in his capacity as auditor), (b) a "relevant loss" means any loss or liability which has been or may be incurred by a relevant officer in connection with that relevant officer's duties or powers in relation to the company, any associated company or any pension fund or employees' share scheme of the company or associated company, and (c) companies are associated if one is a subsidiary of the other or both are subsidiaries of the same body corporate.

17. In addition to all powers conferred upon them and without detracting from the generality of their powers, the Directors shall have the power to mortgage or charge the Company’s undertaking, property and uncalled capital and to issue debentures, debenture stock and other securities as security for any debt, liability or obligation of the
Company or of any third party.
inappropriately high compared with plasma osmolality, but vasopressin levels are below the limits of RIA detection in a certain portion of the patients (as much as 10 to 20%). The existence of other antidiuretic substances in plasma has been postulated. Oxytocin may fit this profile nicely. A few studies have shown elevated levels of plasma oxytocin in patients with small-cell lung cancer; however, these increments were usually accompanied by concomitant increases in vasopressin. At present, no report has suggested that oxytocin alone produces SIADH in clinical cases, but this may be attributable to the lack of a reliable RIA for oxytocin in clinical settings. Related to this topic, the nephrogenic syndrome of inappropriate antidiuresis (NSIAD) is caused by a gain-of-function mutation of V2R. In this disease, endogenous vasopressin is completely suppressed while antidiuresis persists. The symptoms of the disease start in childhood, but similar mutations seem to explain some sporadic episodes of SIADH in adults. How should we differentiate oxytocin-induced SIADH from NSIAD? Oxytocin-induced SIADH will respond to V2R antagonists, as illustrated by Li et al., whereas patients with NSIAD are unable to respond to V2R antagonists. Clinicians would thus be well advised to note that oxytocin has antidiuretic activity and contributes to hyponatremia in certain clinical settings, and that V2R antagonists may be useful in the differential diagnosis and treatment of inappropriate antidiuresis.

DISCLOSURES

None.

REFERENCES

1. Sausville E, Carney D, Battey J: The human vasopressin gene is linked to the oxytocin gene and is selectively expressed in a cultured lung cancer cell line. *J Biol Chem* 260: 10236–10241, 1985
2. Chini B, Manning M: Agonist selectivity in the oxytocin/vasopressin receptor family: New insights and challenges. *Biochem Soc Trans* 35: 737–741, 2007
3. Birnbaumer M, Seibold A, Gilbert S, Ishido M, Barberis C, Antaramian A, Brabet P, Rosenthal W: Molecular cloning of the receptor for human antidiuretic hormone. *Nature* 357: 333–335, 1992
4. Fushimi K, Uchida S, Hara Y, Hirata Y, Marumo F, Sasaki S: Cloning and expression of apical membrane water channel of rat kidney collecting tubule. *Nature* 361: 549–552, 1993
5. Ishikawa SE, Schrier RW: Pathophysiological roles of arginine vasopressin and aquaporin-2 in impaired water excretion. *Clin Endocrinol* 58: 1–17, 2003
6. Sasaki S, Noda Y: Aquaporin-2 protein dynamics within the cell. *Curr Opin Nephrol Hypertens* 16: 348–352, 2007
7. Pittman JG: Water intoxication due to oxytocin. *N Engl J Med* 268: 481–482, 1963
8. Potter RR: Water retention due to oxytocin. *Obstet Gynecol* 23: 699–702, 1964
9. Chou CL, DiGiovanni SR, Mejia R, Nielsen S, Knepper MA: Oxytocin as an antidiuretic hormone. I. Concentration dependence of action. *Am J Physiol* 269: F70–F77, 1995
10. Li C, Wang W, Summer SN, Westfall TD, Brooks DP, Falk S, Schrier RW: Molecular mechanisms of antidiuretic effect of oxytocin. *J Am Soc Nephrol* 19: 225–232, 2008
11. Verbalis JG, Goldsmith SR, Greenberg A, Schrier RW, Sterns RH: Hyponatremia treatment guideline 2007: Expert panel recommendations. *Am J Med* 120: S1–S21, 2007
12. Robertson GL: Regulation of arginine vasopressin in the syndrome of inappropriate antidiuresis. *Am J Med* 119: S36–S42, 2006
13. North WG, Friedmann AS, Yu X: Tumor biogenesis of vasopressin and oxytocin. *Ann N Y Acad Sci* 689: 107–121, 1993
14. Feldman BJ, Rosenthal SM, Vargas GA, Fenwick RG, Huang EA, Matsuda-Abedini M, Lustig RH, Mathias RS, Portale AA, Miller WL, Gitelman SE: Nephrogenic syndrome of inappropriate antidiuresis. *N Engl J Med* 352: 1884–1890, 2005
15. Decaux G, Vandergheynst F, Bouko Y, Parma J, Vassart G, Vilain C: Nephrogenic syndrome of inappropriate antidiuresis in adults: High phenotypic variability in men and women from a large pedigree. *J Am Soc Nephrol* 18: 606–612, 2007

See related article, “Molecular Mechanisms of Antidiuretic Effect of Oxytocin,” on pages 225–232.

---

**Podocyte-Specific Gene Mutations Are Coming of Age**

Peter W. Mathieson

Academic Renal Unit, University of Bristol, Bristol, United Kingdom

*J Am Soc Nephrol* 19: 190–191, 2008. doi: 10.1681/ASN.2007121341

Major leaps have been made recently in the understanding of the cause of proteinuria and hence in the regulation of glomerular permeability in health. Progress has been fueled by the description of single-gene mutations, the majority of which affect genes expressed selectively in the podocyte, resulting in nephrotic syndrome in humans and mice. This has placed the podocyte center stage as a key regulator of normal selective permeability to albumin in the glomerular capillary wall, although we should not forget that single-gene mutations affecting components of the glomerular basement membrane can also result in heavy proteinuria, or that the third component of the glomerular capillary wall, the glomerular endothelial cell, can also play an important role in regulating glomerular permeability in health and disease. The first podocyte-specific gene identified by studying disease-associated mutations was *NPHS1*, encoding nephrin; this was swiftly followed by identification of *NPHS2*, encoding podocin, also a novel protein important in the structure and function of... Since the discovery of nephrin and podocin, several other disease-associated podocyte-specific gene defects have been reported, and, undoubtedly, there will be more to come. Mutations in different podocyte genes or different mutations in the same gene result in varying phenotypes regarding severity and age of onset of proteinuria, and it is clear that there are likely to be other disease-modifying genes or environmental influences. Moreover, congenital forms of nephrotic syndrome are rare, and a question that intrigues nephrologists and basic scientists alike is whether the more common forms of sporadic, often later-onset nephrotic syndrome could also be associated with mutations or polymorphisms in podocyte-specific genes, as predisposing factors or contributors to a complex etiology involving genetic–environmental interactions. If so, then study of these genes could be clinically useful in diagnosis and prognosis, especially concerning the likelihood of corticosteroid responsiveness and the issue of likely recurrence in renal transplants for patients who progress to end-stage renal failure. The article in this issue of *JASN* by Hinkes *et al.*,\(^7\) the product of an impressive multinational collaboration, sheds light on these issues. The study amassed 430 patients with steroid-resistant nephrotic syndrome, the vast majority of whom were the only affected family member, although the series did include 23 families with more than one affected member. The patients were screened for mutations in *NPHS2* by direct sequencing of all eight exons of the gene. Eighty-two patients (19% of the total) had mutations in *NPHS2*.
In the families with more than one affected member, the proportion with *NPHS2* mutations rose to 39%. In patients with two *NPHS2* mutations, the authors report that approximately 40% had one truncating (frameshift or nonsense) mutation and an additional 30% had homozygous *R138Q* mutations (the “founder” *NPHS2* mutation identified by Boute *et al.*\(^5\)). These two groups of individuals nearly all developed nephrotic syndrome at an early age (<6 yr, with a mean age of onset <2 yr). The remaining 30% of patients with other mutations or variants in *NPHS2* had later-onset disease without any further specific link between any given genotype and age of onset (although the numbers of patients with each genotype were small). Mutation type did not affect rate of deterioration, the time from onset to ESRD being the same in all groups. Although this represents real progress, even within the groups with early presentation there was still a wide range of age of onset. Also, >80% of the collection with steroid-resistant nephrotic syndrome did not have any abnormality of *NPHS2*, so their proteinuria remains unexplained; clearly there is more work to be done. The power of large multinational studies such as this one will be essential if analyses of genotype–phenotype relationships in nephrotic syndrome are to yield informative conclusions. Ideally, genetic analysis should be more widely available as a diagnostic and prognostic aid in patients presenting with nephrotic syndrome; however, at present, clinicians will need further guidance from geneticists about the interpretation of genotype–phenotype relationships. Hinkes *et al.* are to be congratulated for leading the way.

**DISCLOSURES**

None.

**REFERENCES**

1. Tryggvason K, Patrakka J, Wartiovaara J: Hereditary proteinuria syndromes and mechanisms of proteinuria. *N Engl J Med* 354: 1387–1401, 2006
2. Zenker M, Aigner T, Wendler O, Tralau T, Müntefering H, Fenski R, Pitz S, Schumacher V, Royer-Pokora B, Wühl E, Cochat P, Bouvier R, Kraus C, Mark K, Madlon H, Dötsch J, Rascher W, Maruniak-Chudek I, Lennert T, Neumann LM, Reis A: Human laminin beta2 deficiency causes congenital nephrosis with mesangial sclerosis and distinct eye abnormalities. *Hum Mol Genet* 13: 2625–2632, 2004
3. Ballermann BJ: Contribution of the endothelium to the glomerular permselectivity barrier in health and disease. *Nephron Physiol* 106: 19–25, 2007
4. Kestila M, Lenkkeri U, Mannikko M, Lamerdin J, McCready P, Putaala H, Ruotsalainen V, Morita T, Nissinen M, Herva R, Kashtan CE, Peltonen L, Holmberg C, Olsen A, Tryggvason K: Positionally cloned gene for a novel glomerular protein—nephrin—is mutated in congenital nephrotic syndrome. *Mol Cell* 1: 575–582, 1998
5. Boute N, Gribouval O, Roselli S, Benessy F, Lee H, Fuchshuber A, Dahan K, Gubler MC, Niaudet P, Antignac C: NPHS2, encoding the glomerular protein podocin, is mutated in autosomal recessive steroid-resistant nephrotic syndrome. *Nat Genet* 24: 349–354, 2000
6. Huber TB, Schermer B, Benzing T: Podocin organizes ion channel-lipid supercomplexes: Implications for mechanosensation at the slit diaphragm. *Nephron Exp Nephrol* 106: e27–e31, 2007
7. Hinkes B, Vlangos C, Heeringa S, Mucha B, Gbadegesin R, Liu J, Hasselbacher K, Ozaltin F, Hildebrandt F, members of the APN Study Group: Specific podocin mutations correlate with age of onset in steroid-resistant nephrotic syndrome.
*J Am Soc Nephrol* 19: 365–371, 2008

See related article, “Specific Podocin Mutations Correlate with Age of Onset in Steroid-Resistant Nephrotic Syndrome,” on pages 365–371.

---

**The Disadvantage of Being Fat**

Roberto S. Kalil and Lawrence G. Hunsicker

Department of Medicine, Roy J. and Lucille A. Carver College of Medicine, Iowa City, Iowa

*J Am Soc Nephrol* 19: 191–193, 2008. doi: 10.1681/ASN.2007121337

Given the epidemic of obesity in the United States, it is not surprising that an increasing fraction of patients who are considered for and receiving kidney transplants are also overweight. Friedman *et al.*\(^1\) found a 41.9% decrease in the fraction...
LQAS: User Beware

Dale A Rhoda,1,* Soledad A Fernandez,1 David J Fitch2 and Stanley Lemeshow1

Accepted 10 December 2008

**Background** Researchers around the world are using Lot Quality Assurance Sampling (LQAS) techniques to assess public health parameters and evaluate program outcomes. In this paper, we report that there are actually two methods being called LQAS in the world today, and that one of them is badly flawed.

**Methods** This paper reviews fundamental LQAS design principles, and compares and contrasts the two LQAS methods. We raise four concerns with the simply-written, freely-downloadable training materials associated with the second method.

**Results** The first method is founded on sound statistical principles and is carefully designed to protect the vulnerable populations that it studies. The language used in the training materials for the second method is simple, but not at all clear, so the second method sounds very much like the first. On close inspection, however, the second method is found to promote study designs that are biased in favor of finding programmatic or intervention success, and therefore biased against the interests of the population being studied.

**Conclusion** We outline several recommendations, and issue a call for a new high standard of clarity and face validity for those who design, conduct, and report LQAS studies.

**Keywords** Lot quality assurance sampling, quality assurance, healthcare, sampling studies, evaluation studies, intervention studies, prevalence, immunization

**Background**

In a recent review, Robertson and Valadez reported that Lot Quality Assurance Sampling (LQAS) techniques were used in more than 800 health-related surveys between 1984 and 2004, mostly in developing countries.1 LQAS is supposed to provide a rapid and inexpensive estimate of the prevalence of a specific condition such as a malady or a successful intervention. The topics investigated in the studies described in Robertson and Valadez1 were as diverse as immunization coverage, post-disaster public health, neonatal tetanus mortality and service delivery quality management. In this article, we report that there are actually two methods being called LQAS in the world today, and that one of them is badly flawed. The first method, which we review and endorse, is founded on sound statistical principles and is carefully designed to protect the vulnerable populations that it studies. It poses a null hypothesis that the malady is widespread or that the intervention has not been successful, and only rejects that null in the face of strong evidence.2,3 In recent years, the first method has been overshadowed by a second approach, which sounds very much like the first, but reverses the role of the null and alternative hypotheses. Rather than protect the population at risk, it poses a null hypothesis that the population is healthy or that an intervention has been successful, and then accepts the null unless there is overwhelming evidence to reject it. Accepting a null hypothesis is always a statistical error. Simply put, the second method is biased toward concluding that interventions have reached their goals before they actually do. According to Robertson and Valadez,1 there were fewer than 50 LQAS surveys being reported per year before 1999, but the number climbed to more than 200 surveys in 2004.
They suggest that one factor in the expanded use of LQAS is the ‘availability of practical manuals and guidelines’ and they cite ‘difficult-to-understand statistical explanations that were not helpful to public health professionals interested in field applications’ as one early ‘impediment in applying the method’. The second method is taught using some ‘practical manuals and guidelines’ that are freely available on the Internet. Besides reversing the traditional direction of LQAS hypothesis tests, the manuals may lead trainees to believe that small sample techniques are much more powerful than they actually are. They report very low error rates based on a complicated and unstated definition of ‘error’ rather than the simple definition that trainees are likely to infer from the over-simplified materials. While we applaud the work that has gone into developing practical manuals and the effort to make them available at low cost, we are alarmed that some important LQAS principles have been lost along the way. We fear that faulty LQAS conclusions may be used to deny interventions or preventative services to people who desperately need them. This article compares and contrasts the two LQAS methods and concludes with recommendations for those who design, carry out and report LQAS studies.

**An overview of LQAS**

Health ministries and international development organizations are often interested in estimating the prevalence of certain conditions or characteristics. These might include:

- prevalence of a disease or health condition,
- proportion of the target population that has received an intervention,
- proportion of the population that knows a risk-related fact (e.g. AIDS can be transmitted through sexual contact),
- proportion of mothers trained to properly mix oral rehydration solution.

In this article, we use the example of estimating the proportion of the population who have received a particular vaccination. The health ministry may wish to accomplish the following two goals. (1) Estimate the overall population proportion vaccinated for an entire region. (2) Identify smaller districts within the region that have especially high or especially low proportions. Those with low proportions of vaccination may require special interventions. Those with high proportions might not need special intervention any longer. Furthermore, those with high proportions might serve as models of ‘best practices’. If the inquiring agency were able to allocate unlimited resources to the task of evaluation, then both goals could be met using a census or using large sample surveys in each district. Where resources are limited, however, it is not always possible to obtain precise estimates of both the regional proportion and the individual district proportions. The LQAS solution to this problem is to perform small sample studies in each district and then aggregate the results to estimate the regional proportion. LQAS studies use sample sizes on the order of dozens per district rather than hundreds, so the confidence interval for each district proportion is very large. When the estimates from multiple districts are pooled, the straightforward formula for the estimate of population proportion from a stratified sample yields a precise regional estimate from imprecise district estimates. Although confidence intervals for individual districts are not especially informative, the study organizers often wish to identify the districts whose proportion exceeds a particular threshold.
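The pooling step just described is simple enough to sketch in a few lines of code. The following Python sketch is purely illustrative: the district population sizes and counts are hypothetical, and the function applies nothing more than the standard stratified-sampling formula mentioned above.

```python
# A minimal sketch of pooling imprecise district estimates into a regional
# estimate. All numbers here are hypothetical, not from any real survey.

def regional_estimate(districts):
    """Stratified estimate of the regional proportion.

    districts: iterable of (N, n, k) tuples, where N is the district
    population size, n the number of individuals sampled there, and k the
    number of sampled individuals with the condition of interest.
    """
    total = sum(N for N, _, _ in districts)
    # Weight each district's (imprecise) sample proportion by the
    # district's share of the regional population.
    return sum((N / total) * (k / n) for N, n, k in districts)

# Five hypothetical districts, 19 individuals sampled in each.
districts = [
    (12_000, 19, 11),
    (8_000, 19, 14),
    (15_000, 19, 9),
    (9_000, 19, 16),
    (11_000, 19, 13),
]
print(f"pooled regional estimate: {regional_estimate(districts):.3f}")
```

Each district proportion carries a wide confidence interval, but the weighted combination across many districts is considerably more precise.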
At the district level, LQAS may be understood as a straightforward application of a binomial hypothesis test. We believe that the process of designing an LQAS study should include the following.

(1) Select $P_0$, the proportion threshold of interest, and construct the null hypothesis. It is traditional to assume that the population is not healthy, or not being served adequately, and to only reject that assumption in the face of strong evidence to the contrary. In the vaccination example, the null hypothesis is that the proportion of persons vaccinated in the district satisfies $P_d \leq P_0$.

(2) Select an acceptable upper bound for the probability of type I error ($\alpha$). A type I error would occur if the investigator concluded that $P_d > P_0$ when, in fact, it is not.

(3) Select $P_2$, a second proportion threshold, for the purposes of specifying either the power of the test $(1-\beta)$ or the probability of type II error ($\beta$). A type II error occurs any time the investigator fails to reject the null hypothesis when, in fact, $P_d > P_0$. Select an acceptable upper bound for the probability of type II error if the true district proportion $P_d > P_2$.

(4) Use an LQAS table (e.g. from Lemeshow and Taber\textsuperscript{3}) to determine which combinations of sample size ($n$) and decision threshold ($d^*$) will provide tests that meet the type I and type II constraints for $P_0$ and $P_2$.

(5) Randomly sample $n$ individuals from each district. If at least $d^*$ of the sampled individuals have been served, then the investigator has strong evidence to conclude that $P_d > P_0$. Otherwise, the investigator fails to reject the null hypothesis that $P_d \leq P_0$.

(6) Combine the counts from individual districts to compute an aggregate prevalence and confidence interval for the entire region. If the figures from individual districts vary widely, the average prevalence may not be very meaningful. In that case, it might be helpful to report the range of figures from the districts.

For the purpose of clarity in this article, we assume that higher proportions indicate intervention success, as would be true with the prevalence of vaccination. If the issue at hand is prevalence of a malady rather than an intervention, then of course lower proportions will indicate intervention success. In that case, we can conceive of a test where higher proportions are good news by estimating the proportion of persons who do not have the malady rather than the proportion who do.

**Example**

Suppose a health administrator wishes to estimate the proportion of the region that has received a particular vaccination and to identify those districts where she can be confident that $P_d > 50\%$. She might set $P_0$ to be 50%, and $\alpha = 10\%$. The null hypothesis is that $P_d \leq 50\%$. She might choose to control the type II error rate such that $\beta = 10\%$ at $P_2 = 80\%$. The rejection criterion will have less than 10% probability of failing to reject the null hypothesis when $P_d > 80\%$ and less than 10% probability of rejecting the null when $P_d \leq 50\%$. The values $n = 19$ and $d^* = 13$ satisfy these criteria. If 13 or more vaccinated persons are found in a district’s sample, then the administrator rejects the null hypothesis and concludes confidently that $P_d > 50\%$. Otherwise, she fails to reject the null hypothesis.
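Steps (2) to (4) can be checked numerically. The sketch below uses Python with the scipy library (our own illustration; published LQAS tables such as Lemeshow and Taber's remain the authoritative source) to verify that the $n = 19$, $d^* = 13$ design meets the stated constraints and to scan for other admissible designs.

```python
# Verifying the example design and scanning for (n, d*) pairs that meet
# the alpha and beta constraints. A sketch only; consult published LQAS
# tables for real study design.
from scipy.stats import binom

P0, P2 = 0.50, 0.80        # null threshold and power threshold
ALPHA, BETA = 0.10, 0.10   # error-rate constraints

def error_rates(n, d_star):
    # Type I error: observing d* or more successes although P_d = P0.
    alpha = binom.sf(d_star - 1, n, P0)
    # Type II error: observing fewer than d* successes although P_d = P2.
    beta = binom.cdf(d_star - 1, n, P2)
    return alpha, beta

a, b = error_rates(19, 13)
print(f"n=19, d*=13: alpha={a:.3f}, beta={b:.3f}")  # both fall below 0.10

# Scan all designs with n up to 30.
admissible = []
for n in range(1, 31):
    for d in range(1, n + 1):
        a, b = error_rates(n, d)
        if a <= ALPHA and b <= BETA:
            admissible.append((n, d))
print(admissible[:3])  # the smallest admissible designs
```

For these thresholds the scan finds no admissible design with $n < 19$, which is consistent with the choice of $n = 19$, $d^* = 13$ in the example.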
(Note that the exact 90% lower confidence bound for a proportion given 13 successes in 19 trials falls above 50%, whereas the lower 90% confidence bound given 12 successes in 19 trials falls below 50%.)

**Features of LQAS study designs**

Several features of LQAS study designs warrant careful attention.

**LQAS designs may be summarized with operating characteristic curves**

Figure 1 shows the operating characteristic curve of the $n = 19$, $d^* = 13$ LQAS design. The abscissa represents $P_d$, the true proportion of vaccinated persons in the district. The height of the curve indicates the probability of obtaining 13 or more successes in 19 independent trials at each value of $P_d$. When $P_d = P_0$, the curve attains a height of $\alpha$, the maximum probability of rejecting the null hypothesis if it is true. Note that $\alpha < 10\%$ at $P_d = 50\%$ for this curve. Any value of $P_d$ that is $> P_0$ could be selected for $P_2$. At those points, the height of the curve represents $1 - \beta$, or the power of the study design to reject the null hypothesis. Note that when $P_d = 80\%$, $1 - \beta > 90\%$, which means that $\beta < 10\%$, so this design meets the criteria listed above. Because of the discrete nature of the binomial distribution, only discrete values of $\alpha$ and $\beta$ will be possible with LQAS designs. When $n$ or $d^*$ changes, the achievable values of $\alpha$ and $\beta$ change discretely. In order to achieve $\alpha = \beta = 10\%$ exactly at $P_0$ and $P_2$, a very large value for $n$ would be necessary. It is customary to choose values for $\alpha$ and $\beta$ and then select combinations of $n$ and $d^*$ that provide error rates no larger than, but sometimes smaller than, $\alpha$ at $P_0$ and $\beta$ at $P_2$.

**LQAS designs can, and should, protect vulnerable populations**

In some situations, administrators may use LQAS results to allocate resources. They might devote extra resources to districts that do not show evidence of having reached $P_0$, and they might shift resources away from districts that appear to have crossed that threshold. This makes the direction of the null hypothesis very important. To see why this is so, consider the implications of type I and type II errors for the population under study. When the null assumes that $P_d \leq P_0$, type I errors mean that resources are mistakenly withdrawn from districts that have not yet reached $P_0$. Fortunately, we set a low value for $\alpha$, so the administrator will rarely withdraw resources from needy districts. Type II errors occur when $P_d$ is above $P_0$ and the administrator continues to devote resources to districts that have already reached $P_0$. This may be an inefficient use of resources, but it does not endanger the population as clearly as a type I error does. We see in Figure 1 that the decision rule has low power for rejecting the null hypothesis when $P_d$ is between 50% and 80%, so we expect type II errors to be common when $P_d$ is in that range. Indeed, the probability of type II error is as high as $1-\alpha$ when $P_d$ is just above $P_0$. Figure 2 (caption: 'With a well-constructed null hypothesis, type II errors are preferable to type I errors') indicates that the vulnerable population will prefer common errors of type II to rare errors of type I. On the other hand, if the null hypothesis states that the district is being adequately served ($P_d \geq P_0$), then the decision rule will require strong evidence to conclude otherwise.
The sample proportion will need to be quite a bit smaller than $P_0$ to conclude that $P_d < P_0$. Rare type I errors will devote extra resources to districts that do not need them, and common type II errors will withdraw resources from needy districts. This design is biased against the persons being studied and biased in favour of finding that the intervention has reached its goals. We feel strongly that reversing the null hypothesis in this way is a disservice to the population at risk. In many cases, the people being studied with LQAS are living in poverty. Economic, political and environmental circumstances may be stacked against their odds of living a healthy life. We feel strongly that the LQAS study design should not be stacked against them, too. Therefore, the null hypothesis should be constructed to assume that the people are not healthy or not well served, and the study design should require strong evidence to conclude otherwise. $P_0$ should be selected carefully, and a small value should be chosen for $\alpha$. Note that in our example, $(d^*)/n = 13/19 = 0.684$ or 68.4%. This design is conservative in that it assumes that the proportion of persons who have been vaccinated is $\leq 50\%$ and it only rejects that null hypothesis if 68.4% or more of the persons sampled have been vaccinated. The design requires strong evidence to conclude that the vaccination programme has reached the threshold of 50%. When using small values of $n$, LQAS designs have low power Recall that the height of the operating characteristic curve represents the probability of rejecting the null hypothesis. When $P_d$ is larger than, but near $P_0$, the $n=19$, $d^*=13$ rule has very low power. When $P_d=60\%$, the power is only 30%, so there is a 70% chance of making a type II error. When $P_d=70\%$, there is a 33% chance of making a type II error. One method of addressing this problem is to choose an LQAS design with larger values for $n$ and $d^*$. Figure 3 shows the operating characteristic curves for three designs where $P_0=50\%$ and $\alpha=10\%$. Higher values of $n$ and $d^*$ result in designs that are more powerful for rejecting the null hypothesis at values of $P_d > P_0$. Another way to address the problem is to use a double-sampling design that surveys additional persons if the sample proportion from the first $n$ individuals is too close to $P_0$ to draw a confident conclusion.\(^3\) Some problems evident in LQAS training materials In 2002, Valadez and Devkota published a table entitled ‘Optimal LQAS Decision Rules for Sample Sizes of 12–30 and Coverage Benchmarks or Average Coverage of 20%–95%’.\(^4\) That table is reproduced here as Table 1. The table has subsequently been used to train numerous people in LQAS techniques. It appeared in Valadez et al.,\(^5\) with small differences in the footnote and title wording. More recently, it appeared in training materials that have been made freely available on the Internet.\(^6–8\) Bearing in mind the features of LQAS designs that were articulated above, we have several grave concerns with Table 1 and with the LQAS designs and training materials that are based upon it. Concern 1: The null hypotheses in Table 1 are biased against vulnerable populations This is our most serious concern. 
The table and its associated training materials avoid statistical jargon so they never state a null hypothesis, per se, but we can infer what the null must be by looking at the thresholds in the top row of the table and the sample sizes and decision rules beneath them.

Table 1 LQAS table from Valadez et al.\textsuperscript{8}: decision rules for sample sizes of 12–30 and coverage targets/average at 10%–95%

| Sample Size | 10% | 15% | 20% | 25% | 30% | 35% | 40% | 45% | 50% | 55% | 60% | 65% | 70% | 75% | 80% | 85% | 90% | 95% |
|-------------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 12 | NA | NA | 1 | 1 | 2 | 2 | 3 | 4 | 5 | 5 | 6 | 7 | 7 | 8 | 8 | 9 | 10 | 11 |
| 13 | NA | NA | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 5 | 6 | 6 | 7 | 8 | 8 | 9 | 10 | 11 | 11 |
| 14 | NA | NA | 1 | 1 | 2 | 2 | 3 | 3 | 4 | 4 | 5 | 6 | 7 | 8 | 8 | 9 | 10 | 11 | 11 | 12 |
| 15 | NA | NA | 1 | 2 | 2 | 2 | 3 | 4 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 10 | 11 | 11 | 12 | 13 |
| 16 | NA | NA | 1 | 2 | 2 | 2 | 3 | 4 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 14 |
| 17 | NA | NA | 1 | 2 | 2 | 2 | 3 | 4 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| 18 | NA | NA | 1 | 2 | 2 | 2 | 3 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 11 | 12 | 13 | 14 | 16 |
| 19 | NA | NA | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 16 |
| 20 | NA | NA | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 17 |
| 21 | NA | NA | 1 | 2 | 3 | 4 | 5 | 6 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 16 | 17 | 18 | 18 |
| 22 | NA | NA | 1 | 2 | 3 | 4 | 5 | 7 | 8 | 9 | 10 | 12 | 13 | 14 | 15 | 16 | 18 | 19 | 19 |
| 23 | NA | NA | 1 | 2 | 3 | 4 | 6 | 7 | 8 | 10 | 11 | 12 | 13 | 14 | 16 | 17 | 18 | 20 | 20 |
| 24 | NA | NA | 1 | 2 | 3 | 4 | 6 | 7 | 9 | 10 | 11 | 13 | 14 | 15 | 16 | 18 | 19 | 21 | 21 |
| 25 | NA | 1 | 2 | 2 | 4 | 5 | 6 | 8 | 9 | 10 | 12 | 13 | 14 | 16 | 17 | 18 | 20 | 21 | 21 |
| 26 | NA | 1 | 2 | 3 | 4 | 5 | 6 | 8 | 9 | 11 | 12 | 14 | 15 | 16 | 18 | 19 | 21 | 22 | 22 |
| 27 | NA | 1 | 2 | 3 | 4 | 5 | 7 | 8 | 10 | 11 | 13 | 14 | 15 | 17 | 18 | 20 | 21 | 23 | 23 |
| 28 | NA | 1 | 2 | 3 | 4 | 5 | 7 | 8 | 10 | 12 | 13 | 15 | 16 | 18 | 19 | 21 | 22 | 24 | 24 |
| 29 | NA | 1 | 2 | 3 | 4 | 5 | 7 | 9 | 10 | 12 | 13 | 15 | 17 | 18 | 20 | 21 | 23 | 25 | 25 |
| 30 | NA | 1 | 2 | 3 | 4 | 5 | 7 | 9 | 11 | 12 | 14 | 16 | 17 | 19 | 20 | 22 | 24 | 26 | 26 |

NA: not applicable, meaning LQAS cannot be used in this assessment because the coverage is either too low or too high to assess a supervision area.\textsuperscript{8} Notes: Lightly shaded cells indicate where $\alpha$ or $\beta$ errors are $\leq 10\%$. Darker cells indicate where $\alpha$ or $\beta$ errors are $\leq 15\%$.\textsuperscript{8} We do not recommend using this table for LQAS study design. Except for minor wording changes, this is the same as Table 1 in Valadez and Devkota.\textsuperscript{4} Sample size ($n$) is listed in the leftmost column. Values of $d^*$ are listed in the body of the table. Prevalence thresholds or ‘coverage targets’ are listed in the top row. Note that $(d^*)/n < \text{threshold}$ for every entry in the table. These decision rules implicitly assume that the population proportion exceeds the coverage benchmark and only conclude otherwise if the sample mean is dramatically below the threshold. They result in study designs that are biased toward concluding that an intervention has been successful.
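To see how a Table 1 rule behaves, one can compute the probability that a district is judged to have 'reached' a benchmark as a function of its true coverage. The sketch below (our own illustration in Python with scipy, not code from the training materials) uses the $n = 19$, $d^* = 13$ entry from the 80% column.

```python
# Probability that the Table 1 rule (n = 19, d* = 13, 80% column) declares
# a district to have reached the 80% benchmark, at several true coverages.
# An illustration of the bias discussed in the text.
from scipy.stats import binom

n, d_star = 19, 13

for true_coverage in (0.50, 0.60, 0.70, 0.80):
    # The rule judges the benchmark "reached" when 13 or more of the 19
    # sampled individuals have the condition of interest.
    p_declared = binom.sf(d_star - 1, n, true_coverage)
    print(f"true coverage {true_coverage:.0%}: declared to have reached "
          f"80% with probability {p_declared:.2f}")
```

A district whose true coverage is only 70% is declared to have reached the 80% benchmark roughly two-thirds of the time, and one at 60% roughly three times in ten; the rule requires strong evidence only before concluding that a district has *not* reached the benchmark.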
For every sample size and threshold in Table 1, the proportion represented by $(d^*)/n$ is smaller than the threshold being tested, so all of the study designs assume that the proportion exceeds the threshold and only conclude otherwise if the sample proportion is much lower than the threshold. These designs set the bar too low! They are biased to conclude that the intervention programme has been successful and will only conclude otherwise in the face of strong evidence. The null hypothesis for the $n=19$, $d^*=13$ rule in Table 1 is that $P_d \geq 80\%$ and the alternative hypothesis is that $P_d < 80\%$. Figure 4 shows the operating characteristic curve for this design. Figure 5 shows that, in this case, a type I error is made when the administrator erroneously concludes that $P_d < 80\%$, and continues to devote resources to the district even though it has reached the 80% threshold. The much more common type II error fails to reject the null... hypothesis and concludes that the district has reached 80% prevalence, when, in fact, $P_d < 80\%$. Having erroneously concluded that the goals have been met, the health administrator might withdraw resources from this district when the population is still struggling to meet the 80% goal. Table 2 summarizes the implications of this concern both in general, and for the vaccination example. Reversing the null hypothesis in this manner goes against longstanding tradition in quality assurance sampling. The use of LQAS in public health is modeled on the work of Dodge and Romig in the manufacturing domain.\textsuperscript{9,10} In manufacturing, batches or ‘lots’ of identical parts are supplied by a part ‘producer’. Each lot should be inspected by the ‘consumer’ of the parts to verify that the quality of the lot is acceptable. If at least $d^*$ out of $n$ sampled parts are within specification, then the lot is accepted by the consumer. If not, then one of several consequences follows: either the lot is rejected outright and sent back to the producer, or in some cases, every piece in the lot is inspected before being used. The first sentence of the introduction in the first edition of Dodge and Romig’s book says ‘It has long been recognized, where sampling instead of complete inspection is used, that certain errors or risks are unavoidable.’\textsuperscript{9} They use the terms ‘consumer’s risk’ and ‘producer’s risk’ to indicate that, by inspecting only a sample rather than every part in every lot, the consumer assumes some risk that they will accept a ‘bad’ lot and the producer assumes some risk of having good lots rejected. In the language of public health, by evaluating a sample of persons rather than evaluating every eligible individual, the public assumes some (consumer’s) risk that the intervention will be declared a success prematurely, and resources will be withdrawn. Likewise, the health ministry assumes some (producer’s) risk that resources will unnecessarily continue to be expended in a region where the programme’s goals have already been met. Dodge and Romig make it clear that the first priority of their inspection method is to protect the consumer. ‘The first requirement for the method will therefore be in the form of a definite assurance against passing any unsatisfactory lot that is submitted for inspection. [...] For the first requirement, there must be specified at the outset a value for the tolerance per cent defective as well as a limit to the chance of accepting any submitted lot of unsatisfactory quality. 
The latter has, for convenience, been termed the Consumer’s Risk...’\textsuperscript{9} Although the Table 1 study designs look superficially like Dodge and Romig designs, they differ fundamentally from those designs in that they put the first priority on limiting the producer’s risk, rather than that of the vulnerable public.\textsuperscript{11}

**Concern 2: The study designs in these training materials purport to have low error rates (this assertion is very likely to be misinterpreted)**

The LQAS training materials based on Table 1 claim that their study designs have low error rates, with both $\alpha$ and $\beta$ less than 10% in many cases. We are concerned because the materials provided to the trainees do not clearly define what they mean by ‘error’. Without a clear definition, we feel that it is likely that the trainees will adopt a simple and logical definition of ‘error’:

- intuitive type I error: concluding that the district has reached the threshold, when it has not, or
- intuitive type II error: concluding that the district has not reached the threshold, when it has.

Instead, the definition of ‘error’ that results in $\alpha$ and $\beta$ below 10% is more complicated, and it includes both $P_0$ and $P_2$. For the $n=19$, $d^*=13$ study in the LQAS training materials, the definition of error is something like the following.

- Conclude that the district has reached the threshold of 80% when, in fact, the true proportion lies below 50%.
- Conclude that the district has not reached the threshold of 80% when, in fact, the true proportion lies above 80%.
- If the true proportion lies between 50% and 80%, then any conclusion is possible and none are regarded as ‘errors’.

We feel that this language is misleading for several reasons.

**Concern 2a: Table 1 only lists one threshold, the upper threshold, where the designs control the probability of type I error**

Because the reader or trainee is not informed about the other threshold, at which the design controls the probability of type II error, the low stated error rates are easily misread as applying to the single listed threshold alone.

Table 2 Comparison of the two approaches to LQAS study designs

| | LQAS method with protective null hypothesis (we advocate this type of design): in general | LQAS method with protective null hypothesis: vaccination example | Method based on Table 1 (we do not advocate this type of design): in general | Method based on Table 1: vaccination example |
|---|---|---|---|---|
| Null hypothesis | $P_d \leq P_0$ | $P_d \leq 50\%$ (vaccination prevalence is low) | $P_d \geq P_0$ | $P_d \geq 80\%$ (vaccination prevalence is high) |
| Alternative hypothesis | $P_d > P_0$ | Vaccination prevalence is high | $P_d < P_0$ | Vaccination prevalence is low |
| Conclusion if at least $d^*$ individuals with the condition of interest are found in the random sample of size $n$ | $P_d > P_0$: reject the null hypothesis and declare intervention success at the $P_d = P_0$ level | Conclude that the intervention has been successful for at least 50% of the district | $P_d \geq P_0$: accept the null hypothesis and declare intervention success at the $P_d = P_0$ level | Conclude that the intervention has been successful for at least 80% of the district |
| Conclusion if fewer than $d^*$ out of $n$ individuals have the condition of interest | Fail to reject the null hypothesis and continue intervention efforts | There is not strong enough evidence to conclude that at least 50% of the district has been vaccinated | $P_d < P_0$: reject the null hypothesis and continue intervention efforts | There is strong evidence that less than 80% of the district has been vaccinated |
| Consequence of a (rare) type I error; this will happen with probability $\leq \alpha$ | Declare intervention success prematurely | Possibly withdraw resources prematurely | Fail to recognize intervention success in a timely manner | Leave intervention resources in place longer than necessary to reach the 80% goal |
| Consequence of a (common) type II error; the probability of type II error depends on $P_d$ and can be as high as $1-\alpha$ when $P_d$ is near $P_0$ | Fail to recognize intervention success in a timely manner | Leave intervention resources in place longer than necessary to reach the 50% goal | Declare intervention success prematurely | Possibly withdraw resources prematurely |

In one method, the population is protected by the null hypothesis and by the relatively common type II errors. In the other method, a common type II error might declare intervention success and withdraw resources prematurely. We advocate the method on the left side of the table.

**Concern 2c: The trainees probably come away with the sense that small sample studies can be very powerful**

If we adopt the intuitive definitions of errors, and focus on a single threshold, then we might infer that the study design can reliably discern between situations where $P_d = 79\%$ and $P_d = 81\%$ with only 10% error rates. Such a design is depicted in Figure 6. We feel that it is likely that the trainees come away with the feeling that a study where $n = 19$ and $d^* = 13$ has the type of power that can only be achieved with $n = 2800$ and $d^* = 2240$.

**Concern 3: The training materials use language that obscures the bias of the null hypothesis**

If a study finds more than $d^*$ persons with the trait of interest, then the instructions that accompany Table 1 in Valadez and Devkota say that the supervisor should judge the district as having ‘reached the threshold’. In the language of hypothesis testing, this is equivalent to rejecting the null hypothesis. However, the trainee is not told that the probability of rejecting the null hypothesis is $\alpha$, and the probability of making an error is $\beta$.

**Concern 4: The use of the word ‘optimal’ reinforces these misunderstandings**

Table 1 in Valadez and Devkota uses the word ‘optimal’ in its title. The training materials available on the Internet state that the ‘optimal size for cluster sampling projects = 19’. Appendix 3 of the trainer’s manual circles some study designs and describes them as ‘optimal’. Persons who are quantitatively adept know that, in order to be meaningful, the word ‘optimal’ should be accompanied by a list of objectives and constraints. We fear that trainees who are not quantitatively adept are likely to hear the word ‘optimal’ as ‘optimal for me’.

**Recommendations**

In light of these concerns, we make the following recommendations for persons designing and reporting LQAS studies, for editors reviewing papers or reports that report LQAS work, and for persons who develop LQAS training materials.

(1) We strongly recommend that the null hypothesis should always protect the population at risk.

(2) Regardless of the direction of the null hypothesis, LQAS study designs should always be described in a way that clearly states which conclusion requires only weak evidence and which one requires strong evidence. LQAS training materials should make it clear that each district’s LQAS hypothesis test either protects the people, or is biased against them from the start. This is a fundamental property of any hypothesis test that controls the probability of type I error first, and then minimizes the probability of type II error. This inherent feature should be emphasized to LQAS designers and trainees, and we believe this can be accomplished with a simple quotient, and without using statistical jargon.

(3) Specifically, compute the quotient $(d^*)/n$ and compare it to $P_0$. If you wish to confidently conclude that $P_d > P_0$, then your test should require a sample proportion that is $> P_0$. Otherwise, the test lacks face validity. To confidently conclude that $P_d > 80\%$, a test should require a sample proportion that is $> 80\%$.

(4) If ‘error rates’ are listed, then the term ‘error’ should be defined clearly.

(5) LQAS training materials should develop the concept of ‘error’ with the trainees. We suggest that the simple term ‘error’ be reserved for the simple definition that trainees are likely to infer naturally. If we conclude that a district has reached the threshold when it has not, we have made an error. If we conclude that the district has not reached the threshold when it has, we have made an error. Small sample studies will have high probabilities of making type II errors when $P_0 < P_d < P_2$.

(6) Trainees and LQAS designers should be made to understand that small sample studies have low power. They will frequently result in classification errors. They can only be ‘optimal’ in a big-picture, bureaucratic sense of trading off time and resources, and they are blunt instruments at best for classifying whether or not individual districts have reached a particular threshold.

(7) Study designers should understand that if they need more power for decision-making at the district level, then they will need to adopt a design with a larger value of $n$, either in the form of a single-sample design or a double-sample LQAS design that collects more information if the first sample is inconclusive.\textsuperscript{3}

(8) The word ‘optimal’ should not be used without being clearly defined.

**Conclusion**

LQAS studies can accomplish important goals at relatively low cost, but as the title of our article states clearly, we urge users of LQAS study designs to beware. In order to be credible, sampling designs must be statistically sound, and authors who describe LQAS work should make their assumptions and implications perfectly clear. We are especially concerned that life-giving resources may be prematurely withdrawn from needy populations based on faulty conclusions. We feel strongly that study designers have a responsibility to protect the population at risk. For the sake of those vulnerable populations, we recommend that the existing training materials be thoroughly overhauled, and that authors of LQAS manuscripts and reports be held to a new high standard of clarity and face validity.

**Conflict of Interest:** None declared.

**References**

1 Robertson SE, Valadez J. Global review of health care surveys using lot quality assurance sampling (LQAS), 1984–2004. *Soc Sci Med* 2006;63:1648–60.

2 Lemeshow S, Stroh G. Quality assurance sampling for evaluating health parameters in developing countries. *Surv Methodol* 1989;15:71–81.

3 Lemeshow S, Taber S. Lot quality assurance sampling: single- and double-sampling plans. *World Health Stat Q* 1991;44:115–32.

4 Valadez JJ, Devkota BR. Decentralized supervision of community health programs: using LQAS in two districts of southern Nepal. In: Rhode J, Wyon J (eds).
*Community Based Health Care: Lessons from Bangladesh to Boston*. Boston: Management Sciences for Health, 2002. pp. 160–200.

5 Valadez J, Weiss W, Leburg C, Davis R. *Assessing Community Health Programs: A Participant's Manual and Workbook: Using LQAS for Baseline Surveys and Regular Monitoring*. London: Teaching Aids at Low Cost (TALC), 2003.

6 CORE Monitoring and Evaluation Workgroup LQAS Online Series. 2006. Available at: http://www.coregroup.org/conf_reg/lqas_series.cfm (Accessed 2 January 2009).

7 LQAS Lecture #1. 2006. Available at: http://www.coregroup.org/conf_reg/LQAS_Lecture_1.pdf (Accessed 2 January 2009).

8 Valadez JJ, Weiss W, Leburg C, Davis R. *Assessing Community Health Programs: A Participant's Manual and Workbook: Using LQAS for Baseline Surveys and Regular Monitoring*. Monograph on the Internet. 2002. Available at: http://www.coregroup.org/working_groups/LQAS_Participant_Manual_L.pdf (Accessed 2 January 2009).

9 Dodge HF, Romig HG. *Sampling Inspection Tables*. New York: John Wiley & Sons, 1944.

10 Dodge HF, Romig HG. *Sampling Inspection Tables*. 2nd edn. New York: John Wiley & Sons, 1959.

11 Fitch DJ. Is the Valadez evaluation method based on Dodge and Romig? *Proceedings of the International Statistical Institute*. Lisbon, Portugal: International Statistical Institute, 2007.

12 Valadez J, Weiss W, Leburg C, Davis R. *Assessing Community Health Programs: A Trainer's Guide: Using LQAS for Baseline Surveys and Regular Monitoring*. Monograph on the Internet. London: Teaching Aids at Low Cost (TALC), 2003. Available at: http://www.coregroup.org/working_groups/lqas_train.html (Accessed 2 January 2009).